Lab Overview - HOL-1730-USE-2

Table of Contents
Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform .... 2
  Lab Guidance .... 3
Module 1 - What is Photon Platform (15 minutes) .... 9
  Introduction .... 10
  What is Photon Platform - How Is It Different From vSphere? .... 11
  Cloud Administration - Multi-Tenancy and Resource Management .... 13
  Cloud Administration - Images and Flavors .... 24
  Conclusion .... 29
Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes) .... 31
  Introduction .... 32
  Multi-Tenancy and Resource Management in Photon Platform .... 33
  Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks .... 41
  Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts .... 53
  Monitor and Troubleshoot Photon Platform .... 67
  Conclusion .... 82
Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes) .... 83
  Introduction .... 84
  Container Orchestration With Kubernetes on Photon Platform .... 85
  Container Orchestration With Docker Machine Using Rancher on Photon Platform .... 96
  Conclusion .... 109

HOL-1730-USE-2 Page 1

Upload: trinhthu

Post on 14-Feb-2017



Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform


Lab Guidance

Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Photon Platform is a distributed, multi-tenant host controller optimized for containers. Photon Platform delivers:

• API-first Model: A user experience focused on the automation of infrastructure consumption and operations, using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant. Allows a small, automation-savvy DevOps team to efficiently leverage fleets of servers.

• Fast Scale-out Control Plane: A built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of 1000s of new VM-isolated workloads per minute and supporting 100000s of total simultaneous workloads.

• Native Container Support: Developer teams consuming infrastructure get their choice of open container orchestration frameworks (e.g. Kubernetes, Docker Swarm, Pivotal CF Lattice and Mesos). The Photon Controller is built for large environments to run workloads designed for cloud-native (distributed) apps. Examples include modern scale-out SaaS/mobile-backend apps, highly dynamic continuous integration or simulation environments, sizable data analytics clusters (e.g. Hadoop/Spark) or large-scale platform-as-a-service deployments (e.g. Cloud Foundry).
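As a mental model of the API-first point above: the same control plane that backs the UI and CLI is reachable as plain REST over the lab's API endpoint (the address and port match the CLI target used later in this manual). The resource paths below are illustrative assumptions for sketching purposes, not documented endpoints:

```shell
# Sketch only: the lab's control plane endpoint, as used by the photon CLI later.
API=http://192.168.120.10:9000

# Hypothetical resource path, shown to illustrate that CLI commands such as
# "photon tenant list" are thin wrappers over RESTful calls like this one.
curl -s "$API/tenants"

# Responses are JSON, so they compose with standard tooling:
curl -s "$API/tenants" | python -m json.tool
```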

The objective of this lab is to provide an introduction to Photon Platform constructs and architecture, then deep dive into how to consume Infrastructure as a Service (IaaS) using this platform. Finally, the user will learn how to deploy open source frameworks and applications onto Photon Platform using standard deployment methods for the frameworks.

Lab Module List

• Module 1 - What is Photon Platform (15 minutes) (Basic): Walk through control plane management layout. Intro to images, flavors, tenants, resource pools, projects. Mostly viewing an existing setup.

• Module 2 - Photon Platform IaaS Deep Dive (45 minutes) (Advanced): From the start, create tenant, resource ticket/project, image, flavors, VM, persistent disk, network; management UI; attach/detach. Review troubleshooting through logs.


• Module 3 - Container Frameworks with Photon Platform (30 minutes) (Advanced): Create clusters (Kubernetes, Docker Machine) with standard open source methods and deploy apps on each.

Lab Captains

• Module 1 - Michael West, Technical Architect, Cloud Native Applications, USA

• Module 2 - Randy Carson, Senior Systems Engineer, USA

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.pub/HOL-2017

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.

2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.

3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session. But you can click the EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console.



1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

Click once in the active console window.

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window
2. Click on the Shift key

Click on the @ Key

1. Click on the @ key

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen.

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture, and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA and FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The high level architecture of the Photon Controller is shown on the next page.


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not All Are Implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox Web Browser.


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment, we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud, you would tend to separate them. Our Management Plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure, and to provide high availability.

1. Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty Icon
2. Select PhotonControllerCLI connection
3. Click Open - You are now in the PhotonControllerCLI VM


4. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 The password is vmware.
5. You must change to the root user. Execute: su The password is vmware.
6. Reboot the VM. Execute: reboot This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project


Kube-Project Detail

At the project detail level, we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter a Resource Ticket Name (no spaces in the name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson, you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
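The EAGER/ON_DEMAND choice is made at upload time. A hedged sketch of the CLI form follows; the file names are placeholders and the replication flag name is our assumption based on the CLI's conventions, so verify it with the CLI's built-in help before relying on it:

```shell
# Sketch only -- do not run in this lab (bandwidth constraints).
# Verify the exact flag with: photon image create -h
photon image create photon-os.ova --name photon-os --image_replication EAGER
photon image create ubuntu.ova --name ubuntu --image_replication ON_DEMAND
```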

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks
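The kind of VM flavors described above can be sketched on the CLI as follows. The flavor names and cost values here are illustrative, not the lab's actual flavors; confirm the syntax with the CLI's help:

```shell
# Sketch: two VM flavors of different sizes, a larger one for a master node
# and a smaller one for workers. Names/sizes are illustrative assumptions.
# Verify syntax with: photon flavor create -h
photon flavor create --name cluster-master-vm --kind vm \
    --cost "vm 1 COUNT, vm.cpu 4 COUNT, vm.memory 8 GB"
photon flavor create --name cluster-other-vm --kind vm \
    --cost "vm 1 COUNT, vm.cpu 1 COUNT, vm.memory 2 GB"
```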


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability/Performance/Cost/Local/Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
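The Storage Profile idea above might be sketched as two disk flavors, one of which carries a datastore tag as a scheduling constraint. The tag key below is purely illustrative of the concept, not a documented cost key; check the CLI's help for the real syntax:

```shell
# Sketch only. The "storage.SHARED_VMFS" key is a hypothetical tag used to
# illustrate tag-based placement; verify with: photon flavor create -h
photon flavor create --name cloud-disk --kind ephemeral-disk \
    --cost "ephemeral-disk 1 COUNT"
photon flavor create --name cloud-disk-shared --kind ephemeral-disk \
    --cost "ephemeral-disk 1 COUNT, storage.SHARED_VMFS 1 COUNT"
```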


Persistent Disk Flavors

1. Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most. [Add any custom/optional information for your lab manual]

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources, and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.
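The pattern this lesson builds toward can be sketched in plain Docker terms: the attached persistent disk is mounted into the host filesystem, and a Docker volume maps that mount into the container. The device and paths below are illustrative assumptions, not the lab's actual values:

```shell
# Sketch (illustrative device/paths): on host A, mount the attached
# persistent disk, then run a container whose data lives on it.
mount /dev/sdb1 /mnt/persistent
docker run -d --name web -p 80:80 \
    -v /mnt/persistent:/usr/share/nginx/html nginx

# If host A dies, the persistent disk can be detached and reattached to a
# second VM; repeating the same mount and volume mapping there brings the
# container back with its data intact.
```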

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty Icon
2. Select PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 The password is vmware.
2. You must change to the root user. Execute: su The password is vmware.
3. Reboot the VM. Execute: reboot This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocatedresources

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it and Project that can consume thoseresources Now we will move on to create objects within the Project
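As an aside, the headroom the ticket has left for additional projects is simple arithmetic; a sketch using the 200 GB/1000 VM and 100 GB/500 VM figures from the ticket and project above:

```shell
# Headroom left on lab-ticket after lab-project is carved out of it.
ticket_mem_gb=200;  ticket_vms=1000   # lab-ticket limits
project_mem_gb=100; project_vms=500   # lab-project limits

mem_left=$(( ticket_mem_gb - project_mem_gb ))
vms_left=$(( ticket_vms - project_vms ))
echo "remaining for other projects: ${mem_left} GB, ${vms_left} VMs"
```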

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.

View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor, to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

Note: the leading -n runs the CLI non-interactively; the second -n is the flavor name.

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-
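To make the chargeback idea concrete: a billing job could multiply a flavor's COUNT cost by a rate and by how many disks were created with it. A sketch; the rate and the number of disks are invented purely for illustration:

```shell
# Hypothetical chargeback: each COUNT unit on a disk flavor is billed at
# a flat rate in cents. All three numbers are made up for illustration.
rate_cents_per_count=5
disk_count_units=10          # the COUNT attached to my-pers-disk above
disks_created=3              # disks a tenant created with that flavor

bill=$(( rate_cents_per_count * disk_count_units * disks_created ))
echo "charge: ${bill} cents"
```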

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the Network we have created, execute:

photon network list

Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
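The two UUID lookups and the create can also be chained in a small script instead of pasting UUIDs by hand. A sketch of the wiring, with the photon CLI stubbed out by a shell function so the UUID-capture pattern is visible without a live control plane (the stub's output lines and UUIDs are invented; the real CLI's column layout may differ):

```shell
# Stub standing in for the real photon CLI, so the wiring is testable.
photon() {
  case "$1 $2" in
    "network list") echo "aaaa-1111  lab-network" ;;
    "image list")   echo "bbbb-2222  PhotonOS" ;;
    *)              echo "photon $*" ;;  # echo instead of creating anything
  esac
}

NET_UUID=$(photon network list | awk '/lab-network/ {print $1}')
IMG_UUID=$(photon image list  | awk '/PhotonOS/ {print $1}')

photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" -w "$NET_UUID" -i "$IMG_UUID"
```

In the lab, deleting the stub function would make the same three lines drive the real CLI.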

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM should stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.

Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>

Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>

Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice in the disk information that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will instead be presented to the containers through Docker volumes mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>

The password is VMware1.

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
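The IP:port/image convention can be split apart mechanically, which is handy in scripts that need the registry address on its own. A sketch in pure shell string handling, using the registry address from this lab:

```shell
image_ref="192.168.120.20:5000/nginx"

# Everything before the first "/" is the registry host:port;
# everything after it is the repository name.
registry="${image_ref%%/*}"
repo="${image_ref#*/}"

echo "registry=$registry repo=$repo"
```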

Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>
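This detach-from-one-VM, attach-to-another sequence is a pattern worth wrapping in a helper if you move disks often. A sketch, with the photon CLI stubbed by a function that records its calls instead of talking to a real control plane (the UUIDs are placeholders):

```shell
calls=""
# Stub for the photon CLI: append each invocation to $calls.
photon() { calls="${calls}photon $*;"; }

# Move a persistent disk between VMs: detach from the old, attach to the new.
move_disk() {
  local disk_uuid=$1 from_vm=$2 to_vm=$3
  photon vm detach-disk "$from_vm" --disk "$disk_uuid"
  photon vm attach-disk "$to_vm" --disk "$disk_uuid"
}

move_disk disk-uuid-2 vm-uuid-1 vm-uuid-2
echo "$calls"
```

Removing the stub function would make move_disk drive the real CLI with the same two subcommands used in this lab.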

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>

The password is VMware1.

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

This is the same docker run pattern you used on lab-vm1; the only change is that the volume is mounted at Nginx's default content path. The image again resides on the local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps, and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, first execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following steps only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201

The password is VMware1.

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
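Steps 2-5 above can be collapsed into one loop over both host addresses. A sketch; ssh is prefixed with echo here so the loop is shown as a dry run without contacting the hosts (drop the echo in the lab to actually restart the agents):

```shell
# Dry run: build the restart command for each ESXi host in turn.
cmds=$(for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@${host} /etc/init.d/photon-controller-agent restart"
done)
echo "$cmds"
```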

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment, where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command This command allows you to specify thecluster type (Kubernetes Mesos Swarm are currently supported) and size of the clusterYou will also provide additional IP configuration information Photon Platform will

Create the Master and Worker node VMs configure the services (for Kubernetes in thisexample) setup the internal networking and provide a running environment with asingle command We are not going to use this method in the lab If you try to create aCluster you will get an error because there is not enough resource available to createmore VMs

Example photon cluster create -n Kube5 -k KUBERNETES --dns ldquodns-Serverrdquo --gatewayldquoGatewayrdquo --netmask ldquoNetmaskrdquo --master-ip ldquoKubermasterIPrdquo --container-networkldquoKubernetesContainerNetworkrdquo --etcd1 ldquoStaticIPrdquo -w ldquouuid demo networkrdquo -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes We arespecifying the networking configuration for the Kuberetes Master VM and a separateetcd VM (etcd is a backing datastore that holds networking information used by Flannelinternal to Kubernetes) The Worker node VMs will receive IPs from DHCP You willspecify the network on which to place these VMs through the -w option and -s is thenumber of Worker nodes in the cluster The Kubernetes container network is a privatenetwork that is used by Flannel to connect Containers within the Cluster

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
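As a rough sketch of that process (hedged: the provider name and script paths are assumptions based on the open source Kubernetes tree of this era, not lab steps):

```shell
# Clone and build the Kubernetes repository
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release

# Point kube-up at the Photon Platform deployment scripts, then bring up the cluster
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

Because the deployment scripts are ordinary files in the repo, anything they configure (Kubernetes version, Docker version, node OS, networking) can be edited before running kube-up.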

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
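For orientation, a Replication Controller manifest of this kind typically looks something like the following minimal sketch (the names, labels, and image here are illustrative assumptions, not the contents of the lab's actual files):

```yaml
# Hypothetical nginx-rc.yaml: keep 3 copies of an nginx pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # desired number of pod replicas
  selector:
    app: nginx-demo      # pods matching this label are managed
  template:              # pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The Service file plays the load-balancer role described earlier by selecting pods with the same label, and the Pod file defines a single pod directly.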


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
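Once the three creates return, you can sanity-check what was deployed with kubectl's standard get subcommands (not an explicit lab step, just ordinary kubectl usage):

```shell
kubectl get pods      # the nginx pods and their status
kubectl get rc        # the replication controller and its replica count
kubectl get services  # the service and the ports it exposes
```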


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error. Click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the RancherServer container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.
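If you would rather not hunt for the Container ID by eye, steps 1 and 2 can be combined with ordinary shell substitution (a sketch; it assumes the container appears in docker ps output matching the string rancherserver):

```shell
# Kill whichever running container matches "rancherserver"
# (first column of docker ps is the Container ID)
docker kill $(docker ps | grep rancherserver | awk '{print $1}')
```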

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-v or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-v, and hit Return

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5. Click on the Create button
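Under the covers, this port mapping is equivalent to the -p flag on a plain docker run (a sketch using the same image and ports; the container name is an illustrative assumption):

```shell
# Publish container port 80 (nginx's default) on host port 2000
docker run -d -p 2000:80 --name nginx-demo 192.168.120.20:5000/nginx
```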

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform


Lab Guidance

Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Photon Platform is a distributed, multi-tenant host controller optimized for containers. The Photon Platform delivers:

• API-first Model: A user experience focused on the automation of infrastructure consumption and operations using simple RESTful APIs, SDKs, and CLI tooling, all fully multi-tenant. Allows a small automation-savvy DevOps team to efficiently leverage fleets of servers.

• Fast Scale-out Control Plane: A built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of 1000s of new VM-isolated workloads per minute and supporting 100,000s of total simultaneous workloads.

• Native Container Support: Developer teams consuming infrastructure get their choice of open container orchestration frameworks (e.g. Kubernetes, Docker Swarm, Pivotal CF Lattice, and Mesos). The Photon Controller is built for large environments to run workloads designed for cloud-native (distributed) apps. Examples include modern scale-out SaaS/mobile-backend apps, highly dynamic continuous integration or simulation environments, sizable data analytics clusters (e.g. Hadoop/Spark), or large-scale platform-as-a-service deployments (e.g. Cloud Foundry).

The objective of this lab is to provide an introduction to Photon Platform constructs and architecture, then deep dive into how to consume Infrastructure as a Service (IaaS) using this platform. Finally, the user will learn how to deploy open source frameworks and applications onto Photon Platform using the standard deployment methods for those frameworks.

Lab Module List

• Module 1 - What is Photon Platform (15 minutes) (Basic): Walk through the control plane management layout. Intro to images, flavors, tenants, resource pools, and projects. Mostly viewing an existing setup.

• Module 2 - Photon Platform IaaS Deep Dive (45 minutes) (Advanced): From the start, create tenant, resource ticket/project, image, flavors, VM, persistent disk, network, management UI, attach/detach. Review troubleshooting through logs.


• Module 3 - Container Frameworks with Photon Platform (30 minutes) (Advanced): Create clusters (Kubernetes, Docker Machine) with standard open source methods and deploy apps on each.

Lab Captains

• Module 1 - Michael West, Technical Architect, Cloud Native Applications, USA

• Module 2 - Randy Carson, Senior Systems Engineer, USA

This lab manual can be downloaded from the Hands-on Labs Document site found here:

[http://docs.hol.pub/HOL-2017]

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.

2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.

3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session, but you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit, and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console



1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

Click once in active console window

In this example, you will use the Online Keyboard to enter the @ sign used in email addresses. The @ sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window
2. Click on the Shift key

Click on the @ key

1. Click on the @ key

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management, and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers, and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT, which are either not a requirement for Cloud Native Applications or are generally implemented by the application framework itself.

The high-level architecture of the Photon Controller is shown on the next page.


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not All Are Implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm; transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox Web Browser.


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment, we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a Production Cloud you would tend to separate them. Our Management Plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open. You are now in the PhotonControllerCLI VM.


4. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware).

5. Change to the root user. Execute: su (the password is vmware).
6. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers that run in a MGMT VM. Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment; they are created as part of the VM create and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
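Conceptually, tag-based placement is a filter over candidate datastores. The sketch below uses invented tag and field names to illustrate the idea; it is not the real Photon scheduler, which also weighs capacity and load:

```python
def placement_candidates(datastores, flavor_tags):
    """Keep only datastores carrying every tag the disk flavor requires."""
    return [ds["name"] for ds in datastores if flavor_tags <= ds["tags"]]

# Hypothetical datastores, tagged the way an admin might tag them:
datastores = [
    {"name": "local-ds1",  "tags": {"LOCAL", "LOW_COST"}},
    {"name": "shared-ds1", "tags": {"SHARED", "PERFORMANCE"}},
]

# A hypothetical disk flavor tagged for fast, shared storage:
fast_shared = {"SHARED", "PERFORMANCE"}
candidates = placement_candidates(datastores, fast_shared)
```

A disk created with the `fast_shared` flavor could then only be placed on `shared-ds1`; the tag acts as a scheduling constraint rather than an explicit datastore choice.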


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware).
2. Change to the root user. Execute: su (the password is vmware).
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.
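As a mental model of this carve-up, here is a small sketch. The class and field names are illustrative, not the actual Photon API; resources without a stated limit are treated as unlimited, as described above:

```python
class ResourceTicket:
    """A pool of resources granted to a Tenant."""

    def __init__(self, name, **limits):
        self.name = name
        self.limits = limits                       # e.g. {"vm_memory_gb": 200}
        self.allocated = {key: 0 for key in limits}

    def carve_project(self, name, **requested):
        """Allocate a sub-slice of the ticket to a project, never exceeding the ticket."""
        for key, amount in requested.items():
            used = self.allocated.get(key, 0)
            limit = self.limits.get(key)           # None means unlimited
            if limit is not None and used + amount > limit:
                raise ValueError(f"{key}: project request exceeds ticket limit")
        for key, amount in requested.items():
            self.allocated[key] = self.allocated.get(key, 0) + amount
        return {"project": name, "limits": requested}


# Mirror the lab's numbers: the ticket holds 200 GB / 1000 VMs,
# and lab-project is given half of each.
ticket = ResourceTicket("lab-ticket", vm_memory_gb=200, vm_count=1000)
project = ticket.carve_project("lab-project", vm_memory_gb=100, vm_count=500)
```

The key property is that projects draw down the ticket: a second project asking for another 150 GB would be rejected, because only 100 GB of the ticket remains.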

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the limit that was set and the actual usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
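The trade-off can be sketched as a toy model. This is not the Photon scheduler, just the replication rule as described above, with invented datastore names:

```python
def image_copies(replication_type, cloud_datastores, placement_datastore=None):
    """Which datastores hold a copy of an image under each replication type.

    EAGER: copied to every datastore tagged CLOUD at upload time, so clones
    are fast but many copies are stored.  ON_DEMAND: copied only to the
    datastore the scheduler picks at placement time, so the first creation
    is slower but storage use is lower.
    """
    if replication_type == "EAGER":
        return set(cloud_datastores)
    if placement_datastore is None:
        return set()                      # nothing placed yet, no copies
    return {placement_datastore}

cloud = ["datastore1", "datastore2", "datastore3"]
eager = image_copies("EAGER", cloud)
lazy = image_copies("ON_DEMAND", cloud, placement_datastore="datastore2")
```

With three CLOUD datastores, an EAGER image occupies three copies up front, while the ON_DEMAND image occupies only the one datastore the placement landed on.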

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.
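As a toy illustration of how such a COUNT could feed a chargeback report - the billing rate and field names here are invented for the example, not part of Photon Platform:

```python
def chargeback(disks, rate_per_count=0.05):
    """Each disk contributes its flavor's COUNT value times a billing rate."""
    return sum(disk["flavor_count"] * rate_per_count for disk in disks)

# Two hypothetical disks created from flavors that each carry "10 COUNT":
disks = [
    {"name": "disk-1", "flavor": "my-eph-disk",  "flavor_count": 10},
    {"name": "disk-2", "flavor": "my-pers-disk", "flavor_count": 10},
]
total = chargeback(disks)   # 20 units of COUNT at 0.05 each
```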

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently, but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.
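The attach/detach lifecycle can be sketched as a tiny state model. The names are illustrative, not the Photon API; the point is that the disk object outlives any VM it is attached to:

```python
class PersistentDisk:
    """A disk whose lifetime is independent of any VM."""

    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.attached_to = None           # None means DETACHED

    @property
    def state(self):
        return "DETACHED" if self.attached_to is None else "ATTACHED"

    def attach(self, vm_name):
        if self.attached_to is not None:
            raise RuntimeError(f"{self.name} is already attached to {self.attached_to}")
        self.attached_to = vm_name

    def detach(self):
        self.attached_to = None


disk = PersistentDisk("disk-2", capacity_gb=2)
disk.attach("lab-vm1")
disk.detach()             # lab-vm1 can now be destroyed; the disk survives
disk.attach("lab-vm2")    # the same data is now visible from a different VM
```

Contrast this with an ephemeral disk, whose deletion is implied by deleting its VM; a persistent disk must be deleted explicitly.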

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, Execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14 with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, Execute:

photon vm list

2. To get the UUID of the Persistent Disk, Execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
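If you prefer not to copy UUIDs by hand, the lookup can be scripted. This sketch assumes photon vm list prints a table with the ID in the first column and the name in the second; verify the column layout against your CLI version before relying on it.

```shell
# Look up a VM's UUID by name from `photon vm list` output.
# The ID-then-Name column layout is an assumption about the CLI output.
vm_uuid() {
  photon vm list | awk -v name="$1" '$2 == name {print $1}'
}
# Usage: photon vm detach-disk "$(vm_uuid lab-vm1)" --disk "UUID of disk-2"
```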

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, Execute:

photon vm list

2. To attach the disk to lab-vm2, Execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, Execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, Execute:

photon vm networks "UUID of lab-vm2"


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute (the password is VMware1):

ssh root@"IP of lab-vm2"


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.
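For reference, the mount performed by the script amounts to something like the following. The device name /dev/sdb comes from the lab output above; the helper function and the no-reformat behavior are assumptions about what the script does.

```shell
# Sketch of the mount performed by mount-disk-lab-vm2.sh: attach the
# existing filesystem without reformatting it (behavior assumed).
mount_data_disk() {
  local dev=/dev/sdb mnt=/mnt/dockervolume
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
}
```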

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:


photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1.
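Since the same stop/detach/delete sequence applies to each VM, the cleanup can be sketched as one helper. This is illustrative; pass the disk UUID only for the VM that still has a disk attached.

```shell
# Stop a VM, detach its disk if a disk UUID is given, then delete it.
cleanup_vm() {
  local vm=$1 disk=$2
  photon vm stop "$vm"
  if [ -n "$disk" ]; then
    photon vm detach-disk "$vm" --disk "$disk"
  fi
  photon vm delete "$vm"
}
# Usage: cleanup_vm "UUID of lab-vm2" "UUID of disk"
#        cleanup_vm "UUID of lab-vm1"
```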


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the Toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to step View Graphite Data Through Grafana.

You will ssh into our two esxi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, Execute (the password is VMware1):

ssh root@192.168.110.201

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
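Steps 2-4 above, repeated for both hosts, can be collapsed into a loop. A sketch, assuming non-interactive ssh access to both hosts:

```shell
# Restart the photon-controller-agent on each ESXi host in turn.
restart_agents() {
  local host
  for host in 192.168.110.201 192.168.110.202; do
    ssh "root@${host}" /etc/init.d/photon-controller-agent restart
  done
}
```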


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the esxi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your Network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many ReqIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
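The clone-and-kube-up flow described above might look roughly like this. The KUBERNETES_PROVIDER value is an assumption, and the build step is omitted; consult the repo's getting-started docs for the exact procedure.

```shell
# Rough sketch of the kube-up flow: clone the repo, then run kube-up
# with the environment variable selecting the Photon deployment scripts.
# The provider name is an assumption; the build step is omitted here.
kube_up_photon() {
  git clone https://github.com/kubernetes/kubernetes.git &&
    cd kubernetes &&
    KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh
}
```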

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
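As a rough illustration of what such a file contains, here is a minimal replication-controller spec in the spirit of nginx-rc.yaml. The names, labels, and image below are assumptions for illustration; the lab's actual files may differ.

```shell
# Write an illustrative replication-controller spec to a temp file.
# Contents are assumed, not copied from the lab's nginx-rc.yaml.
cat > /tmp/nginx-rc-example.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF
```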


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
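You can also exercise the self-healing behavior mentioned in the introduction: delete a pod and watch the Replication Controller replace it. A sketch, where the pod names are illustrative:

```shell
# Delete the first pod returned and list pods again; the Replication
# Controller should start a replacement shortly after the delete.
recover_demo() {
  local pod
  pod=$(kubectl get pods -o name | head -n 1)
  kubectl delete "$pod"
  kubectl get pods
}
```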


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:"port number". Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill "ContainerID". This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-v or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-v, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
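For comparison, the container definition you just filled in through the Rancher UI corresponds roughly to this plain docker run. This is an illustrative equivalent, not the exact command Rancher executes.

```shell
# Roughly what the Rancher form amounts to: the lab's registry image,
# container port 80 published on host port 2000.
run_rancher_style_nginx() {
  docker run -d --name "$1" -p 2000:80 192.168.120.20:5000/nginx
}
```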


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Index.html
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon Controller CLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 3: Lab Overview - HOL-1730-USE-2

Lab Guidance

Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Photon Platform is a distributed, multi-tenant host controller optimized for containers. The Photon Platform delivers:

• API-first Model: A user experience focused on the automation of infrastructure consumption and operations, using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant. Allows a small automation-savvy DevOps team to efficiently leverage fleets of servers.

• Fast Scale-out Control Plane: A built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of 1000s of new VM-isolated workloads per minute and supporting 100,000s of total simultaneous workloads.

• Native Container Support: Developer teams consuming infrastructure get their choice of open container orchestration frameworks (e.g. Kubernetes, Docker Swarm, Pivotal CF Lattice and Mesos). The Photon Controller is built for large environments to run workloads designed for cloud-native (distributed) apps. Examples include modern scale-out SaaS/mobile-backend apps, highly dynamic continuous integration or simulation environments, sizable data analytics clusters (e.g. Hadoop/Spark), or large-scale platform-as-a-service deployments (e.g. Cloud Foundry).

The objective of this lab is to provide an introduction to Photon Platform constructs and architecture, then deep dive into how to consume Infrastructure as a Service (IaaS) using this platform. Finally, the user will learn how to deploy open source frameworks and applications onto Photon Platform using standard deployment methods for the frameworks.

Lab Module List

• Module 1 - What is Photon Platform (15 minutes) (Basic) Walk through the control plane management layout. Intro to images, flavors, tenants, resource pools and projects. Mostly viewing an existing setup.

• Module 2 - Photon Platform IaaS Deep Dive (60 minutes) (Advanced) From the start: create tenant, resource ticket/project, image, flavors, VM, persistent disk and network; use the management UI; attach/detach disks. Review troubleshooting through logs.


• Module 3 - Container Frameworks with Photon Platform (45 minutes) (Advanced) Create Kubernetes and Docker Machine clusters with standard open source methods and deploy apps on each.

Lab Captains

• Module 1 - Michael West, Technical Architect, Cloud Native Applications, USA

• Module 2 - Randy Carson, Senior Systems Engineer, USA

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.pub/HOL-2017

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.

2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.

3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session. But you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit, and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console.



1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

Click once in the active console window.

In this example, you will use the Online Keyboard to enter the @ sign used in email addresses. The @ sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window.
2. Click on the Shift key.

Click on the @ key.

1. Click on the @ key.

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen.

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy-On-Write) technology.

The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA and FT - which are either not a requirement for Cloud Native Applications or are generally implemented by the application framework itself.

The high-level architecture of the Photon Controller is shown on the next page.


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not All Are Implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox web browser.


Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment, we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our Management Plane also consists of only a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop:

1. Click on the Putty Icon.
2. Select the PhotonControllerCLI connection.
3. Click Open - you are now in the PhotonControllerCLI VM.


4. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10 (the password is vmware).

5. Change to the root user: execute su (the password is vmware).
6. Reboot the VM: execute reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers that run in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level, we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
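As a sketch of how that upload-time choice appears on the command line (the image name and file path below are hypothetical, and the flag spelling should be confirmed with photon image create -h in your environment):

```shell
# Upload a base image; the replication option chooses between copying it to all
# Cloud datastores immediately (EAGER) or at first placement (ON_DEMAND).
# Name and path are examples only.
photon image create photon-os.ova -n photon-os-base -i ON_DEMAND

# List images to confirm the upload and see the replication type.
photon image list
```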

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.
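As a sketch, all three flavor kinds are created with the same command shape (the flavor names and cost values below are invented for illustration; confirm the syntax with photon flavor create -h):

```shell
# VM flavor: defines the CPU count and RAM of any VM created with it.
photon flavor create --name "cluster-vm" --kind "vm" \
  --cost "vm 1 COUNT, vm.cpu 2 COUNT, vm.memory 4 GB"

# Ephemeral disk flavor: boot disks whose lifecycle is tied to the VM.
photon flavor create --name "vm-boot-disk" --kind "ephemeral-disk" \
  --cost "ephemeral-disk 1 COUNT"

# Persistent disk flavor: disks that outlive any single VM.
photon flavor create --name "data-disk" --kind "persistent-disk" \
  --cost "persistent-disk 1 COUNT"
```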

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
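For example, a disk flavor could carry a datastore tag in its cost so that placement is constrained to matching storage. This is a sketch only: the SHARED tag and flavor name are hypothetical, and it assumes the Administrator has tagged datastores accordingly.

```shell
# Hypothetical flavor whose cost references a datastore tag; disks created
# from it would only be placed on datastores tagged "SHARED".
photon flavor create --name "shared-disk" --kind "ephemeral-disk" \
  --cost "ephemeral-disk 1 COUNT, storage.SHARED 1 COUNT"
```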


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of the VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.
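The attach/detach flow described above maps to a short CLI sequence. This is a sketch with placeholder names and IDs; flag spellings should be confirmed with photon disk create -h and photon vm attach-disk -h.

```shell
# Create a persistent disk from a persistent-disk flavor (names are examples).
photon disk create --name data-disk-1 --flavor my-persistent-disk --capacityGB 2

# Attach the disk to a first VM, later detach it, then re-attach to a second VM.
# <vm1-id>, <vm2-id> and <disk-id> come from 'photon vm list' / 'photon disk list'.
photon vm attach-disk <vm1-id> --disk <disk-id>
photon vm detach-disk <vm1-id> --disk <disk-id>
photon vm attach-disk <vm2-id> --disk <disk-id>
```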

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10 (the password is vmware).
2. Change to the root user: execute su (the password is vmware).
3. Reboot the VM: execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
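As a quick illustration of the containment rule above, a project's limits must fit within its resource ticket. A minimal shell sketch using the numbers from this lab (the variable names are ours, not part of the photon CLI):

```shell
# Resource ticket limits from this lab (lab-ticket)
TICKET_GB=200
TICKET_VMS=1000
# Project limits (lab-project)
PROJECT_GB=100
PROJECT_VMS=500

# A project can only consume what its ticket allocates.
if [ "$PROJECT_GB" -le "$TICKET_GB" ] && [ "$PROJECT_VMS" -le "$TICKET_VMS" ]; then
    echo "project limits fit within the resource ticket"
else
    echo "project limits exceed the resource ticket"
fi
```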


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those Flavors, as well as use them to create VMs and Persistent Disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future. 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3. 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
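Looking up UUIDs by hand gets tedious. As a hedged convenience sketch (not part of the lab steps), you could capture the network and image UUIDs with command substitution and feed them straight into the create command. This assumes the UUID is the first column of the list output - verify against your own CLI - and it is guarded so it degrades gracefully where the photon CLI is not installed:

```shell
# Hypothetical helper: look up UUIDs, then create the VM in one shot.
# Assumes `photon network list` / `photon image list` print the UUID in
# column 1 -- check your own output before relying on this.
if command -v photon >/dev/null 2>&1; then
    NET_UUID=$(photon network list | awk '/lab-network/ {print $1}')
    IMG_UUID=$(photon image list | awk '/PhotonOS/ {print $1}')
    photon vm create --name lab-vm1 --flavor my-vm \
        --disks "disk-1 my-eph-disk boot=true" \
        -w "$NET_UUID" -i "$IMG_UUID"
else
    echo "photon CLI not found; run this inside the lab CLI VM"
fi
```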

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that are mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log in to vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1!)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
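The script's contents aren't shown in the lab, but the steps it performs can be sketched. Below is a hypothetical dry run - it only echoes the commands, since mkfs/mount require root and a real device, and the actual mount-disk-lab-vm1.sh may differ - using the device and mount point named above:

```shell
# Hypothetical dry run of the format-and-mount steps performed by the
# lab script; echoes each command instead of executing it.
DEVICE=/dev/sdb
MOUNTPOINT=/mnt/dockervolume

for cmd in \
    "mkfs.ext4 $DEVICE" \
    "mkdir -p $MOUNTPOINT" \
    "mount $DEVICE $MOUNTPOINT"
do
    echo "$cmd"
done
```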

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx!"

3. Press the right arrow until you are at the character N in nginx.

4. Press cw (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
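If you prefer a non-interactive edit, the same change can be made with sed. A minimal sketch - the target string is taken from the default Nginx page, so adjust it if yours differs; it works on a throwaway copy here, whereas on lab-vm1 you would point sed at /mnt/dockervolume/index.html:

```shell
# Self-contained demo: create a copy of the line we want to change,
# then rewrite it in place with sed.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
grep 'Hands On Lab At VMWORLD 2016' /tmp/index.html
```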

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.
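When scripting these steps, the UUIDs can be pulled out of the list output rather than copied by hand. A hedged sketch - the sample line below is illustrative, not real photon output, and it assumes the UUID is the first column:

```shell
# Illustrative only: a line shaped like `photon vm list` output.
sample='9b159e92-9495-49a4-af58-53ad4764f616  lab-vm1  STOPPED'

# Print column 1 (the UUID) of the row matching the VM name.
VM_UUID=$(printf '%s\n' "$sample" | awk '/lab-vm1/ {print $1}')
echo "$VM_UUID"
```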

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1!)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the Toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1!)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "select metrics" and select photon.

2. Select "select metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to get more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1!.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many requestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
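A rough sketch of that flow follows. The provider name matches the lab's photon-controller config directory, but treat the exact variable value and repository URL as assumptions, not the lab's verified steps:

```shell
# Clone and build Kubernetes, then bring the cluster up using the
# photon-controller deployment scripts instead of the default provider
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```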

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
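The lab files' exact contents are not reproduced in this manual. As a minimal sketch only, definitions of this era of Kubernetes would look something like the following; the object names, labels, and port values here are assumptions, not the lab's real values:

```yaml
# nginx-rc.yaml (sketch) - replication controller keeping 3 nginx pods running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# nginx-service.yaml (sketch) - frontend service exposing the pods on a node port
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
  - port: 80
```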


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
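After the three create calls you can confirm what Kubernetes is running before switching to the UI. This is a sketch; the resource names come from the lab's yaml files and may differ in your session:

```shell
# List the pods created by the replication controller
kubectl get pods

# Show the replication controller and its desired/current replica counts
kubectl get rc

# Show the service and the NodePort it exposes
kubectl get services
```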


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
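Scaling in this model is a one-line change to the desired replica count rather than manually starting containers. For example (the controller name is assumed from the lab's configuration; check yours with kubectl get rc):

```shell
# Ask the replication controller to maintain 5 copies instead of 3
kubectl scale rc nginx-demo --replicas=5
```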


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:[port number]. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
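The Add Container form is collecting the same parameters you would pass to Docker directly. Conceptually, Rancher launches something close to the following on the host; the image and port values come from the steps above, while the container name is an assumption:

```shell
# Pull from the local registry and map host port 2000 to nginx's port 80
docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx
```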


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version 20161024-114606


• Table of Contents
• Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
  • Lab Guidance
    • Location of the Main Console
    • Activation Prompt or Watermark
    • Alternate Methods of Keyboard Data Entry
    • Click and Drag Lab Manual Content Into Console Active Window
    • Accessing the Online International Keyboard
    • Click once in active console window
    • Click on the key
    • Look at the lower right portion of the screen
• Module 1 - What is Photon Platform (15 minutes)
  • Introduction
  • What is Photon Platform - How Is It Different From vSphere
    • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
  • Cloud Administration - Multi-Tenancy and Resource Management
    • Connect To Photon Platform Management UI
    • Photon Controller Management UI
    • The Control Plane Resources
    • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
    • Control Plane Services
    • Cloud Resources
    • Tenants
    • Our Kubernetes Tenant
    • Kube-Tenant Detail
    • Kube-Project Detail
    • Kube Tenant Resource-Ticket
    • Create Resource-Ticket
  • Cloud Administration - Images and Flavors
    • Images
    • Kube-Image
    • Flavors
    • Kube-Flavor
    • Ephemeral Disk Flavors
    • Persistent Disk Flavors
  • Conclusion
    • You've finished Module 1
    • How to End Lab
• Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
  • Introduction
  • Multi-Tenancy and Resource Management in Photon Platform
    • Login To CLI VM
    • Verify Photon CLI Target
    • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
    • Photon CLI Overview
    • Photon CLI Context Help
    • Create Tenant
    • Create Resource Ticket
    • Create Project
  • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
    • View Images
    • View Flavors
    • Create New Flavors
    • Create Networks
    • Create VM
    • Create a Second VM
    • Start VM
    • Show VM details
    • Stop VM
    • Persistent Disks
    • Attach Persistent Disk To VM
    • Show VM Details
  • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
    • Deploy Nginx Web Server
    • Connect to lab-vm1
    • Setup filesystem
    • Create The Nginx Container With Docker Volume
    • Verify Webserver Is Running
    • Modify Nginx Home Page
    • Edit The Index.html
    • Detach The Persistent Disk
    • Attach The Persistent Disk To New VM
    • Start and Connect to lab-vm2
    • Setup Filesystem
    • Create The New Nginx Container
    • Verify That Our New Webserver Reflects Our Changes
    • Clean Up VMs
  • Monitor and Troubleshoot Photon Platform
    • Enabling Statistics and Log Collection
    • Monitoring Photon Platform With Graphite Server
    • Expand To View Available Metrics
    • No Performance Data in Graphite
    • View Graphite Data Through Grafana
    • Graphite Data Source For Grafana
    • Create Grafana Dashboard
    • Add A Panel
    • Open Metrics Panel
    • Add Metrics To Panel
    • Troubleshooting Photon Platform With LogInsight
    • Connect To Loginsight
    • Query For The Create Task
    • Browse The Logs For Interesting Task Error Then Find RequestID
    • Search The RequestID For RESERVE_RESOURCE
  • Conclusion
• Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
  • Introduction
  • Container Orchestration With Kubernetes on Photon Platform
    • Kubernetes Deployment On Photon Platform
    • Photon Cluster Create Command
    • Kube-Up On Photon Platform
    • Our Lab Kubernetes Cluster Details
    • Basic Introduction To Kubernetes Application Components
    • Deploying An Application On Kubernetes Cluster
    • Kubectl To Deploy The App
    • Kubernetes UI Shows Our Running Application
    • Application Details
    • Your Running Pods
    • Connect To Your Application Web Page
  • Container Orchestration With Docker Machine Using Rancher on Photon Platform
    • Login To Photon ControllerCLI VM
    • Deploy Rancher Server
    • Clean Up Rancher Host
    • Connect To Rancher UI
    • Add Rancher Host
    • Paste In The Docker Run Command To Start Rancher Agent
    • View the Agent Container
    • Verify New Host Has Been Added
    • Deploy Nginx Webserver
    • Configure Container Info
    • Container Information
    • Open Your Webserver
    • Rancher Catalogs
  • Conclusion
• Conclusion
Page 4: Lab Overview - HOL-1730-USE-2

• Module 3 - Container Frameworks with Photon Platform (30 minutes) (Advanced) Create clusters (Kubernetes, Docker Machine) with standard open source methods and deploy apps on each.

Lab Captains

• Module 1 - Michael West, Technical Architect, Cloud Native Applications, USA

• Module 2 - Randy Carson, Senior Systems Engineer, USA

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.pub/HOL-2017

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.

2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.

3. Your lab starts with 90 minutes on the timer. The lab can not be saved. All your work must be done during the lab session. But you can click the EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console.



1. Click on the Keyboard icon found on the Windows Quick Launch Task Bar.

Click once in active console window

In this example, you will use the Online Keyboard to enter the @ sign used in email addresses. The @ sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window.
2. Click on the Shift key.

Click on the @ key

1. Click on the @ key.

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management, and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers, and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT - which are either not a requirement for Cloud Native Applications or are generally implemented by the application framework itself.

The High Level architecture of the Photon Controller is as shown on the next page


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure and resource allocation models that are dynamic and multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual Projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox web browser.


Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment, we have 2 ESXi hosts and 1 datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our Management Plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open. You are now in the PhotonControllerCLI VM.


4. ssh into the PhotonController Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.

5. You must change to the root user. Execute su. The password is vmware.
6. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers running in a MGMT VM. Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype; we will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter a Resource Ticket name (no spaces in the name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel

Flavors

1. Click on the gear again, and then click Flavors

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment; they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks

We have created a single Persistent Disk Flavor for you; it is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and troubleshoot applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)
2. Change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware; we have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, such as vm, image, resource-ticket, cluster, flavor, etc.

Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment; they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
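The two UUID lookups in that note can be scripted rather than copied by hand. This is a hedged sketch: `extract_uuid` is a hypothetical helper (not part of the lab) that assumes the UUID is the first whitespace-separated column of the `photon ... list` output, which you should verify against your own CLI output. The photon calls are shown as comments; the awk lookup is demonstrated against a canned line.

```shell
# Hypothetical helper: print the first column of the line that mentions
# a given name (assumes the UUID is the first field of the list output).
extract_uuid() {
  awk -v name="$1" '$0 ~ name { print $1; exit }'
}

# In the lab you would feed real CLI output, e.g.:
#   NET_UUID=$(photon network list | extract_uuid lab-network)
#   IMG_UUID=$(photon image list | extract_uuid PhotonOS)
#   photon vm create --name lab-vm1 --flavor my-vm \
#     --disks "disk-1 my-eph-disk boot=true" -w "$NET_UUID" -i "$IMG_UUID"

# Demonstration against a canned output line:
printf 'aaf0e8d2-1b2c  lab-network  READY\n' | extract_uuid lab-network
```

The same pattern works for any of the entity types mentioned above (vm, disk, image, flavor, etc.).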

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM and, when that VM is destroyed, attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
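If you would rather not re-run the command by hand, the wait can be scripted. A minimal sketch, assuming `photon vm networks` prints the IP as a dotted quad once the metadata has synced (the photon call is shown as a comment; the grep test is demonstrated on sample output):

```shell
# Poll until an IPv4 address appears in the networks output
# (substitute your real VM UUID):
#   until photon vm networks "$VM_UUID" | grep -qE '([0-9]{1,3}\.){3}[0-9]{1,3}'; do
#     sleep 10
#   done
# The grep pattern itself, demonstrated on canned output:
printf 'MacAddress IpAddress\n00:0c:29:aa 192.168.120.50\n' \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
```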


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>  (the password is VMware1!)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
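The script hides a few standard steps. Below is a guess at its contents (the real script ships with the lab, so treat every command in the comments as an assumption), followed by a runnable way to check whether a path is currently a mount point:

```shell
# Likely shape of mount-disk-lab-vm1.sh (assumed, not the lab's actual file):
#   mkfs -t ext4 /dev/sdb            # format the new persistent disk
#   mkdir -p /mnt/dockervolume       # create the mount point
#   mount /dev/sdb /mnt/dockervolume
# You can confirm a mount yourself by consulting /proc/mounts:
dir=/tmp/surely-not-mounted
mkdir -p "$dir"
grep -qs " $dir " /proc/mounts && echo mounted || echo "not mounted"
```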

Create The Nginx Container With Docker Volume

We will now create an nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to mount the host directory /mnt/dockervolume into the container at /volume. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the nginx web server on port 80 from your browser. Lastly, 192.168.120.20:5000/nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the nginx home page.

Modify Nginx Home Page

We will copy the nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
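Two small points about that docker exec: any unique prefix of the container ID works, and the newest container's ID can be fetched without reading the docker ps table by eye. A sketch (the docker calls are comments, since they assume a running daemon; the prefix-shortening is plain shell):

```shell
# Grab the most recently created container's ID, then exec into it:
#   cid=$(docker ps -q --latest)
#   docker exec -it "$cid" bash
# Shortening an ID to its first 3 characters is plain text processing:
cid="9f8a7b6c1d2e"
echo "$cid" | cut -c1-3
```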

2. To see the filesystem inside the container, and to verify your Docker volume (/volume), execute:

df

3. We want to copy the nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Indexhtml

You will use the vi editor to make a change to the indexhtml page If you arecomfortable with vi and html then make whatever modifications you want These arethe steps for a very simple modification

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press the right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type Hands On Lab At VMWORLD 2016

5 Press the esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
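If you prefer not to use vi, the same word change can be scripted with sed. A small sketch on a throwaway sample line (in the lab the file is /mnt/dockervolume/index.html):

```shell
# Make a sample line like the one on line 14 of the Nginx home page,
# then replace the word just as the cw edit in vi does.
f=$(mktemp)
echo '<h1>Welcome To Nginx</h1>' > "$f"
sed -i 's/Nginx/Hands On Lab At VMWORLD 2016/' "$f"
result=$(cat "$f")
echo "$result"
rm -f "$f"
```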

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1 To get the UUID of lab-vm1 Execute:

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
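Rather than copying UUIDs by hand, they can be captured with awk. A sketch against simulated listing output (the UUIDs and column layout here are illustrative; in the lab, pipe the real photon commands instead of the sample strings):

```shell
# Simulated 'photon vm list' and 'photon disk list' output.
vm_list='ID                                    Name     State
11111111-2222-3333-4444-555555555555  lab-vm1  STOPPED'
disk_list='ID                                    Name    State
66666666-7777-8888-9999-aaaaaaaaaaaa  disk-2  DETACHED'
# Match on the Name column and print the ID column.
VM_ID=$(printf '%s\n' "$vm_list" | awk '$2 == "lab-vm1" {print $1}')
DISK_ID=$(printf '%s\n' "$disk_list" | awk '$2 == "disk-2" {print $1}')
# In the lab you would now run:  photon vm detach-disk "$VM_ID" --disk "$DISK_ID"
echo "photon vm detach-disk $VM_ID --disk $DISK_ID"
```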

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@"IP of lab-vm2" (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag means to keep the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1
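The whole cleanup sequence can be sketched as a dry run (the UUID values below are placeholders; take the real ones from photon vm list and photon disk list, and note that only lab-vm2 has a disk attached, so only it needs the detach step):

```shell
# Placeholders; substitute the real UUIDs from the listing commands.
VM1="uuid-of-lab-vm1"; VM2="uuid-of-lab-vm2"; DISK="uuid-of-disk"
# 'echo' prints each command instead of running it; remove it to execute for real.
plan=$(
  echo "photon vm stop $VM2"
  echo "photon vm detach-disk $VM2 --disk $DISK"
  echo "photon vm delete $VM2"
  echo "photon vm stop $VM1"
  echo "photon vm delete $VM1"
)
echo "$plan"
```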


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202
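The restart steps above can also be sketched as a loop over both hosts (shown here as a dry run; remove the echo to actually run the restarts over ssh):

```shell
# Dry-run loop over both ESXi hosts (IPs from the steps above).
out=$(for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done)
echo "$out"
```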

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser, select the Grafana bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you login, you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.
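The example above is easier to read with the lab-specific values pulled out into shell variables. A dry-run sketch (all values below are placeholders; echo prints the command instead of running it):

```shell
# Placeholders only; substitute your environment's real values.
DNS="dns-server"; GATEWAY="gateway"; NETMASK="netmask"
MASTER_IP="kube-master-ip"; CONTAINER_NET="container-network"
ETCD1="etcd-static-ip"; NET_UUID="uuid-of-demo-network"
# Compose the same invocation shown in the example.
cmd="photon cluster create -n Kube5 -k KUBERNETES --dns $DNS --gateway $GATEWAY \
--netmask $NETMASK --master-ip $MASTER_IP --container-network $CONTAINER_NET \
--etcd1 $ETCD1 -w $NET_UUID -s 5"
echo "$cmd"
```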

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
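As a rough illustration of the shape of these objects, a minimal Pod definition might look like the following. This is a hypothetical sketch in the spirit of the lab's nginx-pod.yaml, not its actual contents; the image name is the lab's local registry path:

```yaml
# Hypothetical minimal Pod spec (Kubernetes v1 API)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx
    ports:
    - containerPort: 80
```

The Service and Replication Controller files follow the same pattern, selecting Pods by their labels.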

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:"port number". Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2 Execute docker kill "ContainerID". This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201 (the password is vmware)
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1 Note that the Custom icon is selected
2 Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1 Execute: either right click the mouse or press Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button
2 Click on Infrastructure and Hosts
3 This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
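The port map fields express the same host:container idea as docker's -p flag. A quick sketch of parsing such a mapping (the mapping string mirrors the 2000-to-80 mapping used in this lab):

```shell
# host:container, as entered in the Rancher port map fields.
mapping="2000:80"
host_port=${mapping%%:*}        # the port exposed on the Rancher host VM
container_port=${mapping##*:}   # the port Nginx listens on inside the container
echo "host port $host_port -> container port $container_port"
```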


Container Information

1 Once your container is running, check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1 From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Index.html
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To PhotonControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 5: Lab Overview - HOL-1730-USE-2

Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the Right of the Main Console.

2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.

3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session. But you can click the EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console



1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar
2. Click once in the active console window

In this example, you will use the Online Keyboard to enter the @ sign used in email addresses. The @ sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window
2. Click on the Shift key

Click on the @ key

1. Click on the @ key

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed API-driven multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, FT - which are either not a requirement for Cloud Native Applications or are generally implemented by the application framework itself.

The High Level architecture of the Photon Controller is as shown on the next page


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap. Not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox Web Browser


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment, we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as Management and Cloud. In a Production Cloud you would tend to separate them. Our management plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty Icon
2. Select PhotonControllerCLI connection
3. Click Open - You are now in the PhotonControllerCLI VM


4. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10. Password is vmware
5. You must change to the root user. Execute: su. Password is vmware
6. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete
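Taken together, the restart procedure from the steps above can be pasted as one sequence (same host and credentials as listed in the steps; nothing new is introduced here):

```shell
# From the PhotonControllerCLI VM: reach the management VM, become root, reboot.
ssh esxcloud@192.168.120.10   # password: vmware
su                            # password: vmware
reboot                        # completes in about 2 minutes
```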


Control Plane Services

The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability.

1. Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads
2. One Tenant has been created (We will drill further into this in a minute)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects (You will see the details in a minute)


Tenants

1. Click on Tenants


Our Kubernetes Tenant

We have created a Single Tenant that has been used to create a Kubernetes Cluster (You will use this in Module 3). You can see that a limit has been placed on Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project


Kube-Project Detail

At the project detail level, we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and Scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter Resource Ticket Name (No Spaces in the Name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, Click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
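As a sketch of how that upload-time choice is expressed on the CLI (do not run this in the lab because of bandwidth constraints; the base command appears later in this manual, but the -i replication-type flag shown here is an assumption based on the CLI help):

```shell
# Upload a base image and choose its replication type:
# EAGER copies it to every Cloud datastore at upload time;
# ON_DEMAND copies it only when a placement request needs it.
photon image create filename -name PhotonOS -i EAGER
```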

1. Click on the gear in the upper right of the screen, and then Images

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image that was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel

Flavors

1. Click on the gear again, and then Click Flavors

When you are done, close the images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability/Performance/Cost/Local/Shared/etc) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
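As an illustration of the Storage Profile idea only (the flavor name, cost keys and the storage.SSD tag below are hypothetical examples, not flavors that exist in this lab):

```shell
# A hypothetical ephemeral-disk flavor whose cost includes a datastore tag;
# the tag becomes a scheduling constraint, so disks of this flavor would
# only be placed on datastores carrying the SSD tag.
photon flavor create --name ssd-disk --kind ephemeral-disk \
  --cost "ephemeral-disk 1 COUNT, storage.SSD 1 COUNT"
```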


Persistent Disk Flavors

1. Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab, however NSX support is also available)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies Troubleshooting and Monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty Icon
2. Select PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10. Password is vmware
2. You must change to the root user. Execute: su. Password is vmware
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group Prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenantand can later be consumed through the placement of workloads in the infrastructure

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
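For example, each of the entity types called out above has a matching list command that prints the UUIDs you can copy into later commands:

```shell
# Any of these print the UUIDs of the corresponding entity type.
photon tenant list
photon resource-ticket list
photon image list
photon flavor list
```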


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Now we will move on to create objects within the Project.
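The whole flow of this lesson, gathered into one sequence (commands exactly as used above; the quoting around the --limits values is the only assumption):

```shell
# Tenant -> Resource Ticket -> Project: each level carves up the one above it.
photon tenant create lab-tenant
photon tenant set lab-tenant
photon resource-ticket create --name lab-ticket \
  --limits "vm.memory 200 GB, vm 1000 COUNT"
photon project create --resource-ticket lab-ticket \
  --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
photon project set lab-project
```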


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab, however NSX support is also available)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -n PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.
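Since later commands need image UUIDs, it can help to script the lookup. This sketch parses captured photon image list-style output; the column layout and UUIDs are invented for illustration:

```shell
#!/bin/sh
# Sketch: pull the UUID for a named image out of 'photon image list'-style
# output. The table layout and the UUIDs below are made up for illustration.
images='ID                                    Name                Kind
9c1f3a20-1111-2222-3333-444455556666  PhotonOS            image
7d2e4b31-aaaa-bbbb-cccc-ddddeeeeffff  photon-management   image'

image_uuid=$(printf '%s\n' "$images" | awk '$2 == "PhotonOS" {print $1}')
echo "$image_uuid"
```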


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It is essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

HOL-1730-USE-2

Page 44HOL-1730-USE-2

2. To easily see the Network we have created, execute:

photon network list

HOL-1730-USE-2

Page 45HOL-1730-USE-2

Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
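If you script this step, the command can be assembled from variables so each UUID only has to be looked up once. A minimal sketch; the UUID values are placeholders, not real lab values:

```shell
#!/bin/sh
# Sketch: build the 'photon vm create' invocation from variables.
# The UUID values are placeholders for illustration.
NETWORK_UUID="net-1111"
IMAGE_UUID="img-2222"

cmd="photon vm create --name lab-vm1 --flavor my-vm \
--disks \"disk-1 my-eph-disk boot=true\" \
-w $NETWORK_UUID -i $IMAGE_UUID"

echo "$cmd"
```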

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

HOL-1730-USE-2

Page 46HOL-1730-USE-2

Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
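The UUID lookup itself can be scripted rather than copied by hand. The sketch below assumes a simple three-column photon vm list layout, which may differ from the real CLI output; the IDs are invented:

```shell
#!/bin/sh
# Sketch: capture the UUID of a VM by name from 'photon vm list'-style output.
# Output layout and UUIDs are illustrative, not the CLI's documented format.
vms='ID                                    Name      State
5e6f0000-1234-5678-9abc-def012345678  lab-vm1   STOPPED
6f7a1111-1234-5678-9abc-def012345678  lab-vm2   STOPPED'

vm_uuid=$(printf '%s\n' "$vms" | awk '$2 == "lab-vm1" {print $1}')
echo "$vm_uuid"
```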


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB means the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container (/volume) that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, keeping it running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.
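The reason the first three characters are enough is that Docker accepts any unique prefix of a container ID. A small sketch of extracting such a prefix (the ID shown is made up):

```shell
#!/bin/sh
# Sketch: take the first three characters of a (made-up) container ID,
# since Docker accepts any unique ID prefix.
container_id="f3a9c04d7b12e85566778899aabbccdd"
short_id=$(printf '%s' "$container_id" | cut -c1-3)
echo "$short_id"
```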

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

As a reminder, you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, keeping it running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.
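The IP:port/image convention can be taken apart with plain shell parameter expansion, which is handy when scripting against a private registry. A small sketch:

```shell
#!/bin/sh
# Sketch: split a registry-qualified image reference, like the one used in
# this lab, into its registry endpoint and image name.
image_ref="192.168.120.20:5000/nginx"
registry="${image_ref%%/*}"   # everything before the first '/'
image="${image_ref#*/}"       # everything after the first '/'
echo "$registry $image"
```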


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the Toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi Hosts, and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
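The per-host restart above could be scripted across both hosts. The sketch below only prints the ssh commands instead of running them, so it can be dry-run safely:

```shell
#!/bin/sh
# Sketch: print the agent-restart command for each host rather than
# running ssh, so this can be dry-run anywhere.
hosts="192.168.110.201 192.168.110.202"
for h in $hosts; do
  echo "ssh root@$h /etc/init.d/photon-controller-agent restart"
done
```

Dropping the echo (and handling the password prompt or ssh keys) would turn this into an actual batch restart.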


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many ReqIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
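To make the Pod concept concrete, a minimal Pod definition for a single nginx container looks roughly like this. This is a generic sketch, not the lab's actual nginx-pod.yaml (which you will view shortly); the names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # illustrative name
  labels:
    app: nginx-demo         # label a Service or RC can select on
spec:
  containers:
  - name: nginx
    image: nginx            # container image to run
    ports:
    - containerPort: 80     # port the container listens on
```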

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
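After the three create commands, you can confirm the objects were registered with standard kubectl listing commands (run in the lab CLI VM; the exact object names will depend on the yaml files):

```shell
kubectl get pods     # nginx pods should progress to Running
kubectl get rc       # replication controller with desired/current replica counts
kubectl get services # the frontend service and its exposed port
```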


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; Click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, Connect to http://192.168.100.176:&lt;port number&gt;. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and Click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill &lt;Container ID&gt;. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
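The manual does not show the command stored at history entry 885; for reference, a Rancher server launch typically has the following shape. The registry-prefixed image name is an assumption based on the lab's local registry; use the actual history entry, not this sketch:

```shell
# Illustrative Rancher server launch - the lab's real command is `!885`
docker run -d --restart=always -p 8080:8080 \
  192.168.120.20:5000/rancher/server   # image assumed to come from the lab registry
```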


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, Connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box
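The pre-formed command you copy has the general shape below. The version tag and registration URL/token are placeholders here, which is exactly why you must paste your own copy from the UI rather than retyping anything:

```shell
# General shape of the Rancher agent registration command (placeholders only)
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<version> \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```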


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: Either Right Click of the mouse or Ctrl-v, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Port map sign to see these fields

5. Click on the Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
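The port mapping you just configured in the UI is the same idea as Docker's -p flag; if you were starting this container by hand on the host it would look roughly like this (a sketch, not a step to run alongside the UI deployment):

```shell
# Equivalent manual run: host port 2000 -> container port 80
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```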


Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on

Open Your Webserver

From your Browser, Enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
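You can also verify the mapping from the lab CLI VM without a browser; a plain HTTP request to the mapped host port fetches the same page (curl is standard, the IP and port are the ones configured above):

```shell
# Fetch the page served on the mapped host port from the CLI VM
curl http://192.168.100.201:2000
```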


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


Page 6: Lab Overview - HOL-1730-USE-2

Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard

You can also use the Online International Keyboard found in the Main Console



1 Click on the Keyboard Icon found on the Windows Quick Launch Task Bar

Click once in active console window

In this example, you will use the Online Keyboard to enter the @ sign used in email addresses. The @ sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window
2. Click on the Shift key

Click on the @ key

1. Click on the @ key

Notice the @ sign entered in the active console window.


Look at the lower right portion of the screen

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management, and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform and what is the architecture

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed API-driven multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers, and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the Rest API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The High Level architecture of the Photon Controller is as shown on the next page


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm; transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore Multi-tenancy and resource management in Photon Platform.
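In CLI terms (which Module 2 walks through), the tenant / resource-ticket / project hierarchy maps to roughly the following photon commands. Treat this as a sketch: the names, flag spellings, and limit syntax are assumptions for illustration, not commands from this lab:

```shell
# Sketch of the allocation hierarchy (flag syntax and names are assumed)
photon tenant create engineering
photon resource-ticket create --tenant engineering --name gold \
  --limits "vm.memory 1000 GB, vm.cpu 500 COUNT"
photon project create --tenant engineering --resource-ticket gold \
  --name web-team --limits "vm.memory 250 GB, vm.cpu 125 COUNT"
```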

Connect To Photon Platform Management UI

1. From the Windows Desktop, Launch a Chrome or Firefox Web Browser


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment, we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as Management and Cloud. In a Production Cloud you would tend to separate them. Our management Plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1 Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open - You are now in the PhotonControllerCLI VM


4. ssh into the PhotonController Management VM. Execute ssh esxcloud@192.168.120.10. Password is vmware
5. You must change to the root user. Execute su. Password is vmware
6. Reboot the VM. Execute reboot. This should take about 2 minutes to complete


Control Plane Services

The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability.

1 Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads
2. One Tenant has been created (We will drill further into this in a minute)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects (You will see the details in a minute)


Tenants

1 Click on Tenants


Our Kubernetes Tenant

We have created a Single Tenant that has been used to create a Kubernetes Cluster (You will use this in Module 3). You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1 Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1 Click on Kube-Project


Kube-Project Detail

At the project detail level, we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1 Click on Kube-Tenant and Scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter the Resource Ticket Name (No Spaces in the Name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, Click on Projects and follow the Tenant Create steps to Create a New project to allocate the Resource Ticket to

You have now made additional resources available to Kube Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1 Click on the gear in the upper right of the screen and then Images

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1 Click the X to close the panel

Flavors

1 Click on the gear again and then Click Flavors

When you are done, close the images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1 In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2 Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1 Click on Persistent Disks

We have created a single Persistent Disk Flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale: abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is, and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1 - congratulations!

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1 Click on the Putty icon
2 Select the PhotonControllerCLI connection
3 Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1 SSH into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)

2 Change to the root user. Execute: su (the password is vmware)
3 Reboot the VM. Execute: reboot. This should take about 2 minutes to complete
4 Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3 Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands, where entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
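Since most commands take a UUID argument, it is common to capture the UUID from list output in a script instead of copying it by hand. A minimal sketch, assuming the UUID is the first whitespace-separated column of the tabular output (the sample output and UUID below are hypothetical placeholders, not values from this lab):

```shell
#!/bin/sh
# Hypothetical sample of `photon resource-ticket list` tabular output;
# the UUID value here is a made-up placeholder.
sample='ID                                    Name        Limits
f1d2a3b4-5678-90ab-cdef-112233445566  lab-ticket  vm.memory 200 GB, vm 1000 COUNT'

# Extract the UUID (first field) of the ticket by matching its name.
ticket_uuid=$(printf '%s\n' "$sample" | awk '/lab-ticket/ {print $1}')
echo "$ticket_uuid"
```

In the live lab you would pipe the real command, e.g. `photon resource-ticket list | awk '/lab-ticket/ {print $1}'`, on the same assumption about column order.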


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

We now have a Tenant with resources allocated to it, and a Project that can consume those resources. Next, we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.
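The replication type is chosen at upload time. A minimal sketch of that choice, not to be run in this lab because of the bandwidth constraints mentioned above: the -i replication flag and the photon.ova filename are assumptions for illustration, and the upload command is echoed rather than executed.

```shell
#!/bin/sh
# Sketch: pick a replication type based on whether clone speed or storage
# efficiency matters more, then echo (not execute) the upload command.
replication_for() {
  case "$1" in
    speed) echo EAGER ;;      # copied to every CLOUD datastore at upload
    *)     echo ON_DEMAND ;;  # copied only when a placement requires it
  esac
}

rep=$(replication_for speed)
echo "photon image create photon.ova --name PhotonOS -i $rep"
```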

2 To see more detail on a particular image execute the following command

photon image show UUID of image

The UUID of the image is shown in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command

1 To view existing Flavors Execute the following command

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor, to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4 To easily see the Flavors you just created execute

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1 If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2 To easily see the Network we have created execute

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
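Because the network and image UUIDs differ between lab deployments, it can help to capture them in shell variables before assembling the create command. A minimal sketch, assuming the UUID is the first column of the list output; the UUID values below are hypothetical placeholders, and the final command is echoed rather than executed:

```shell
#!/bin/sh
# In the live lab these would come from the real CLI, e.g.:
#   NETWORK_UUID=$(photon network list | awk '/lab-network/ {print $1}')
#   IMAGE_UUID=$(photon image list | awk '/PhotonOS/ {print $1}')
# Made-up placeholder values are used here instead.
NETWORK_UUID='aaaa1111-bbbb-2222-cccc-3333dddd4444'
IMAGE_UUID='eeee5555-ffff-6666-9999-7777aaaa8888'

# Assemble and print the create command instead of running it.
cmd="photon vm create --name lab-vm1 --flavor my-vm --disks 'disk-1 my-eph-disk boot=true' -w $NETWORK_UUID -i $IMAGE_UUID"
echo "$cmd"
```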

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1 To start the VM execute

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines: they can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that are mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the VM IP for lab-vm1, execute

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container (/volume) that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
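Pulling the short container ID out of docker ps can itself be scripted. A minimal sketch, assuming the container ID is the first column of the output; the sample output and ID below are made-up placeholders, not values from this lab:

```shell
#!/bin/sh
# Hypothetical sample of `docker ps` output (the ID and columns are made up).
sample='CONTAINER ID   IMAGE                       COMMAND                  PORTS
3f9c1a2b4d5e   192.168.120.20:5000/nginx   "nginx -g daemon off;"   0.0.0.0:80->80/tcp'

# Take the first field of the matching row, then trim it to the first three
# characters used in the docker exec step above.
cid=$(printf '%s\n' "$sample" | awk '/nginx/ {print $1; exit}')
short=$(printf '%s' "$cid" | cut -c1-3)
echo "$short"
```

On the real host you would pipe the live command instead: `docker ps | awk '/nginx/ {print $1; exit}'`.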

2 To see the filesystem inside the container and verify your Docker volume (/volume), execute


df

3 We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys (change word) and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : key

6 At the prompt, enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3 From the CLI execute

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation; it resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1
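The stop/detach/delete sequence above can be sketched as a short script. The UUIDs below are made-up placeholders (in the lab they come from photon vm list and photon disk list), and the photon commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch: print the cleanup sequence for lab-vm2; the UUID values are
# made-up placeholders and the photon commands are echoed, not executed.
VM_UUID='11111111-2222-3333-4444-555555555555'
DISK_UUID='66666666-7777-8888-9999-000000000000'

echo "photon vm stop $VM_UUID"
echo "photon vm detach-disk $VM_UUID --disk $DISK_UUID"
last_cmd="photon vm delete $VM_UUID"
echo "$last_cmd"
```

For lab-vm1 the detach step is skipped, since its persistent disk was already detached earlier and moved to lab-vm2.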


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon-controller agent process. If you are seeing performance data from only one host, then restart only that host's agent.

1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon

2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1 From your browser, select the LogInsight bookmark from the toolbar and log in as user admin with password VMware1

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into the Filter field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info), and we need to scroll for a more interesting request.
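The correlation LogInsight performs here — grouping log events by an embedded RequestID — can be sketched in a few lines. This is illustrative only; the token format and field layout are assumptions, not the actual Photon Controller log schema:

```python
import re
from collections import defaultdict

def group_by_request_id(log_lines):
    """Group raw log lines by an embedded 'RequestID: <id>' token (assumed format)."""
    groups = defaultdict(list)
    pattern = re.compile(r"RequestID:\s*(\S+)")
    for line in log_lines:
        match = pattern.search(line)
        if match:
            groups[match.group(1)].append(line)
    return dict(groups)

# Hypothetical log lines: one failed reservation and one successful CloudStore query.
logs = [
    "RequestID: abc123 RESERVE_RESOURCE failed: not enough memory",
    "RequestID: def456 query CloudStore for task: OK",
    "RequestID: abc123 task moved to ERROR state",
]
by_id = group_by_request_id(logs)
```

Pasting a RequestID into the LogInsight filter field does the equivalent of selecting one bucket from such a grouping, across every log stream the servers push to it.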

Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1 From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the master and worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KuberMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one master and 2 worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

You can see that our cluster consists of one master VM and 2 worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
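To make these concepts concrete, a Replication Controller manifest for a 3-replica Nginx Pod might look roughly like this. This is a minimal sketch only; the lab's actual files (which you will view in a moment) may differ in names, labels and image:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo            # hypothetical name, not necessarily the lab's
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo           # Pods carrying this label are managed by the controller
  template:                   # Pod template used to create each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80   # port the webserver listens on inside the Pod
```

The Service manifest would then select the same `app: nginx-demo` label to load-balance across whichever replicas are currently running.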

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute

cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the external endpoint. We will use it in a couple of steps.

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
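Conceptually, the Replication Controller runs a reconciliation loop: compare the observed Pod count to the desired count and create or delete Pods to close the gap. This is why killing a webserver instance later in the module results in a replacement container. A toy sketch of that decision (not Kubernetes code, purely illustrative):

```python
def reconcile(desired, running):
    """Toy replication-controller step.

    'running' is the list of observed pod names; return
    (pods_to_create, pods_to_delete) so the count matches 'desired'.
    """
    delta = desired - len(running)
    if delta > 0:
        # Not enough replicas: schedule new pods (names here are placeholders).
        return [f"new-pod-{i}" for i in range(delta)], []
    # Too many replicas: delete the surplus.
    return [], running[desired:]

# A killed webserver leaves 2 of 3 replicas running; one new pod is scheduled:
create, delete = reconcile(3, ["nginx-a", "nginx-b"])
```

The real controller reacts to watch events from the API server rather than polling, but the desired-versus-observed comparison is the heart of it.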

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link

2 Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the container ID for the Rancher Server container; that is the one we want to remove.

2 Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201. The password is vmware

2 Execute rm -rf /var/lib/rancher/state

3 Execute docker rm -vf rancher-agent

4 Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1 Note that the Custom icon is selected

2 Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI. Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1 Execute: either right click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button

2 Click on Infrastructure and Hosts

3 This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
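The form fields you just filled in correspond to Docker's familiar flags: the image name, and a -p host:container port mapping (Rancher adds its own management layer on top). As a rough, illustrative equivalence built from the values in the lab text:

```python
def docker_run_command(image, host_port, container_port):
    """Build a plain 'docker run' line equivalent to the UI's image and port-map fields."""
    # -d runs detached; -p maps host_port on the VM to container_port inside the container.
    return f"docker run -d -p {host_port}:{container_port} {image}"

# Image and ports taken from the steps above: local-registry nginx, host 2000 -> container 80.
cmd = docker_run_command("192.168.120.20:5000/nginx", 2000, 80)
```

This is only a sketch of the mapping between the UI and the CLI; deploy through the Rancher UI as instructed so the container is managed by Rancher.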

Container Information

1 Once your container is running, check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1 From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

Page 7: Lab Overview - HOL-1730-USE-2

1 Click on the Keyboard Icon found on the Windows Quick Launch Task Bar

Click once in active console window

In this example you will use the Online Keyboard to enter the sign used in emailaddresses The sign is Shift-2 on US keyboard layouts

1 Click once in the active console window2 Click on the Shift key

Click on the key

1 Click on the key

Notice the sign entered in the active console window

HOL-1730-USE-2

Page 7HOL-1730-USE-2

Look at the lower right portion of the screen

Please check to see that your lab is finished all the startup routines and is ready for youto start If you see anything other than Ready please wait a few minutes If after 5minutes you lab has not changed to Ready please ask for assistance


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform and what is the architecture

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or you just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.

The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA and FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The high level architecture of the Photon Controller is shown on the next page.


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox Web Browser.


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a Production Cloud you would tend to separate them. Our Management Plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure, and to provide high availability.

1 Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open - You are now in the PhotonControllerCLI VM


4. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware).

5. You must change to the root user. Execute: su (the password is vmware).
6. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability.

1 Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1 Click on Tenants


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1 Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1 Click on Kube-Project


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources, and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter the Resource Ticket Name (no spaces in the name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1 Click on the gear in the upper right of the screen and then Images

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1 Click the X to close the panel

Flavors

1 Click on the gear again and then Click Flavors

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2 Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1 Click on Persistent Disks

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources; create images, flavors, VMs and networks; and be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and Management UI in performing these tasks. Finally, you will build an application (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000
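If you script against the CLI, it can help to set and verify the target in one step. The helper below is a sketch of my own, not a lab step: ensure_target is a hypothetical name, and it assumes that photon target show echoes the endpoint URL somewhere in its output.

```shell
# Sketch only (not a lab step): point the CLI at the lab Control Plane
# and confirm the setting took.  Assumes "photon target show" prints
# the current endpoint URL in its output.
ensure_target() {
  endpoint="http://192.168.120.10:9000"
  photon target set "$endpoint" || return 1
  photon target show | grep -q "$endpoint"
}
```

Running ensure_target and checking its exit status tells you whether the CLI is pointed where you expect before you start creating objects.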

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware).

2. You must change to the root user. Execute: su (the password is vmware).
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help onto any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
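Because these list commands are how you find UUIDs, a small shell helper can save copy/paste when scripting. This is a hypothetical sketch, not part of the lab: get_uuid is my own name, and it assumes the UUID is the first whitespace-separated column and the name the second column of the photon ... list table, so verify that against your lab's output first.

```shell
# Hypothetical helper (not a lab step): look up the UUID of a named
# entity from "photon <entity-type> list" output.  Assumes the UUID is
# column 1 and the name is column 2 of the listed table.
get_uuid() {
  entity_type=$1
  entity_name=$2
  photon "$entity_type" list | awk -v n="$entity_name" '$2 == n {print $1}'
}

# Example usage in the CLI VM:
#   TICKET=$(get_uuid resource-ticket lab-ticket)
#   photon resource-ticket show "$TICKET"
```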


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set, and the actual Usage of the allocated resources.

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type, EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2 To see more detail on a particular image execute the following command

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1 To view existing Flavors Execute the following command

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor, to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c persistent-disk 10 COUNT

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c ephemeral-disk 10 COUNT

4 To easily see the Flavors you just created execute

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2 To easily see the Network we have created execute

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag, and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
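The lookup-then-create flow described above can also be scripted end to end. The function below is a sketch of my own (create_lab_vm is a hypothetical name, not a lab command); it assumes the UUID is the first whitespace-separated column of the photon network list and photon image list output, so check that against your lab before relying on it.

```shell
# Sketch only: create a VM by first looking up the network and image
# UUIDs.  Assumes the UUID is the first column of the list output.
create_lab_vm() {
  vm_name=$1
  net_uuid=$(photon network list | awk '/lab-network/ {print $1; exit}')
  img_uuid=$(photon image list | awk '/PhotonOS/ {print $1; exit}')
  photon vm create --name "$vm_name" --flavor my-vm \
    --disks "disk-1 my-eph-disk boot=true" \
    -w "$net_uuid" -i "$img_uuid"
}

# Example usage in the CLI VM:
#   create_lab_vm lab-vm1
```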

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to be powered off for now.

1 To start the VM execute

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks, to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show the VM details, execute:

photon vm show <UUID of lab-vm1>

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container, /volume, mounted on /mnt/dockervolume from the host. The -d flag runs the container detached, in the background, until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
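To make each flag's role explicit, here is the same command assembled from named shell variables. This is just a sketch for readability; the registry address and mount point are the ones used in this lab, and the command is echoed rather than run:

```shell
# Sketch: the lab's docker run command built from named variables, so the
# role of each part is explicit. Echoed here; in the lab you would run it.
REGISTRY=192.168.120.20:5000   # local Docker registry serving the lab images
HOST_DIR=/mnt/dockervolume     # mount point of the persistent disk on the host
CTR_DIR=/volume                # path inside the container
cmd="docker run -v ${HOST_DIR}:${CTR_DIR} -d -p 80:80 ${REGISTRY}/nginx"
echo "$cmd"
```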

Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
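If you would rather not read the ID off the screen, the first few characters can be extracted from docker ps output with awk. The sample line and container ID below are made up for illustration; in the lab you would pipe the real docker ps instead of the printf:

```shell
# Sketch: grab the first 3 characters of the nginx container's ID from
# `docker ps`-style output. The sample line and ID are hypothetical; in the
# lab, replace the printf with the real command:  docker ps | awk ...
sample='f3a91c2d7b44  192.168.120.20:5000/nginx  "nginx -g daemon off;"  Up 2 minutes'
cid=$(printf '%s\n' "$sample" | awk '/nginx/ {print substr($1, 1, 3)}')
echo "$cid"
# then, in the lab:  docker exec -it "$cid" bash
```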

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save your changes and exit vi.

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
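If you are not comfortable in vi, the same one-word change can be made non-interactively with sed. The sketch below works on a throwaway copy so you can see the effect safely; in the lab you would point sed at /mnt/dockervolume/index.html:

```shell
# Sketch: the same edit done with sed instead of vi, demonstrated on a
# temp copy. In the lab, point sed at /mnt/dockervolume/index.html.
tmp=$(mktemp)
echo '<h1>Welcome to nginx!</h1>' > "$tmp"       # stand-in for line 14
sed -i 's/nginx!/the Hands On Lab At VMWORLD 2016!/' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```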

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached, in the background, until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser, and you may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see that the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns <dns-server> --gateway <gateway> --netmask <netmask> --master-ip <Kube-master-IP> --container-network <Kubernetes-container-network> --etcd1 <static-IP> -w <UUID of demo network> -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
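As a point of reference, a minimal replication-controller manifest for this kind of app might look like the sketch below. This is a hypothetical example (the names, labels and structure are assumptions, not the contents of the lab's actual nginx-rc.yaml), written to a temp file via a heredoc so you can inspect it:

```shell
# Hypothetical sketch of a minimal replication-controller manifest like
# nginx-rc.yaml. Names/labels are assumptions, not the lab's actual file.
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                    # maintain 3 copies of the Pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # the lab's local registry
        ports:
        - containerPort: 80
EOF
n=$(grep -c 'replicas: 3' /tmp/nginx-rc-sketch.yaml)
echo "$n"
```

The replicas count is what drives the 3 copies you will see in the UI later.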

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin and 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number of the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see that the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill <container ID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.
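For reference, the container settings entered in the UI above correspond roughly to the following docker run invocation (a sketch, assuming the registry address and port mapping from the steps above; the container name is arbitrary):

```shell
# Rough command-line equivalent of the Rancher UI settings above:
# run the nginx image from the local registry in the background,
# mapping host port 2000 to the container's port 80.
docker run -d --name web -p 2000:80 192.168.120.20:5000/nginx
```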

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
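Equivalently, you can verify the mapped port from a shell with access to the lab network (a quick check, assuming the host IP and port from the step above):

```shell
# Fetch the first lines of the default Nginx page through the mapped port.
curl -s http://192.168.100.201:2000 | head -n 5
```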


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap. Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To PhotonControllerCLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

Look at the lower right portion of the screen

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than Ready, please wait a few minutes. If after 5 minutes your lab has not changed to Ready, please ask for assistance.


Module 1 - What is Photon Platform (15 minutes)


Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the Rest API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy-On-Write) technology.

The control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA and FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The High Level architecture of the Photon Controller is as shown on the next page.

The High Level architecture of the Photon Controller is as shown on the next page


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap. Not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic and multi-tenant - and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox web browser.


Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for control plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure, and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open - you are now in the PhotonControllerCLI VM.


4. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10  The password is vmware.

5. You must change to the root user. Execute: su  The password is vmware.
6. Reboot the VM. Execute: reboot  This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform control plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.
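If you are curious, once you ssh into the Management VM (as shown in the earlier troubleshooting step) you can list these service containers with a standard Docker command. This is optional and purely illustrative:

```shell
# From inside the Photon Controller Management VM:
# list the running control plane service containers.
docker ps --format '{{.Names}}\t{{.Image}}\t{{.Status}}'
```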

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM, and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name. (No spaces in the name.)
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
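For reference, the EAGER/ON_DEMAND choice is made when an image is uploaded. A hedged CLI sketch (the file and image names here are hypothetical, and uploading an image is not part of this lesson):

```shell
# Upload a base image from an OVA file; -i picks the replication type:
# EAGER copies it to Cloud datastores at upload time, ON_DEMAND copies it
# only when a placement request first needs it. Names are illustrative.
photon image create my-image.ova -n my-image -i ON_DEMAND
```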

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You will notice that we have a few images in our system. The Photon-management image is the image that was used to create the control plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability/Performance/Cost/Local/Shared/etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
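As a hedged sketch of how cost and tags appear in flavor definitions (the names, sizes and the storage tag below are hypothetical, not flavors used in this lab):

```shell
# A VM flavor (illustrative): each VM created with it consumes these
# amounts against the project's resource ticket.
photon flavor create -n my-small-vm -k vm \
  --cost "vm 1 COUNT, vm.cpu 1 COUNT, vm.memory 2 GB"

# An ephemeral-disk flavor (illustrative) whose cost carries a datastore
# tag; the tag becomes a scheduling constraint at placement time.
photon flavor create -n my-shared-disk -k ephemeral-disk \
  --cost "ephemeral-disk 1 COUNT, storage.SHARED 1 COUNT"
```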


Persistent Disk Flavors

1. Click on Persistent Disks.

We have a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure-as-a-Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10  The password is vmware.

2. You must change to the root user. Execute: su  The password is vmware.
3. Reboot the VM. Execute: reboot  This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant, or alternatively refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
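Since several UUIDs go into one command line, it can be less error-prone to stage them in shell variables first. A sketch with placeholder UUIDs (in the lab the real values come from photon network list and photon image list); the assembled command is echoed here rather than executed:

```shell
# Placeholder UUIDs, standing in for the output of
# "photon network list" and "photon image list".
network_uuid="aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"
image_uuid="cccccccc-4444-5555-6666-dddddddddddd"
# Echo the assembled command instead of running it.
echo "photon vm create --name lab-vm1 --flavor my-vm" \
     "--disks 'disk-1 my-eph-disk boot=true'" \
     "-w $network_uuid -i $image_uuid"
```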

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
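Both mappings in that command follow the same host-side:container-side convention. A small sketch that only assembles and prints the command string (the registry address is the lab's local registry, shown here just to illustrate the syntax; nothing is executed):

```shell
host_dir=/mnt/dockervolume       # persistent-disk mount point on the Docker host
container_dir=/volume            # where the volume appears inside the container
image=192.168.120.20:5000/nginx  # image tagged with the local registry IP:port
echo "docker run -v ${host_dir}:${container_dir} -d -p 80:80 ${image}"
```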


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, containing Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
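If you would rather not use vi, the same edit can be made non-interactively with sed. This is a sketch that assumes the heading on line 14 reads "Welcome to nginx!"; it demonstrates the substitution on a sample string, and the equivalent in-place edit for lab-vm1 is shown in the comment:

```shell
# On lab-vm1 the in-place version would be:
#   sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /mnt/dockervolume/index.html
echo '<h1>Welcome to nginx!</h1>' | sed 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/'
```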


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its Web content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: from the CLI, execute docker ps, and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the Toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi Hosts, and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
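Steps 2-4 can also be collapsed into one loop over both hosts. This sketch only prints the commands it would run; remove the echo to execute them (ssh access and the root password are as described above):

```shell
# Print the restart command for each ESXi host; drop "echo" to actually run it.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```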

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics, and select photon.


2. Select Select Metrics again, and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight Query.


Connect To Loginsight

1. From your browser, select the LogInsight Bookmark from the toolbar, and login as User admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration, and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command This command allows you to specify thecluster type (Kubernetes Mesos Swarm are currently supported) and size of the clusterYou will also provide additional IP configuration information Photon Platform will

Create the Master and Worker node VMs configure the services (for Kubernetes in thisexample) setup the internal networking and provide a running environment with asingle command We are not going to use this method in the lab If you try to create aCluster you will get an error because there is not enough resource available to createmore VMs

Example photon cluster create -n Kube5 -k KUBERNETES --dns ldquodns-Serverrdquo --gatewayldquoGatewayrdquo --netmask ldquoNetmaskrdquo --master-ip ldquoKubermasterIPrdquo --container-networkldquoKubernetesContainerNetworkrdquo --etcd1 ldquoStaticIPrdquo -w ldquouuid demo networkrdquo -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided on the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
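Sketched out, that workflow looks roughly like this (the provider value and build target are assumptions based on the Kubernetes source tree of this era, not commands taken from the lab):

```shell
# Clone and build open-source Kubernetes
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release

# Tell kube-up to use the Photon Controller deployment scripts,
# then bring up the cluster
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```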

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command-line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
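As an illustration only (the lab's actual files are shown in a later step; the name and label selector here are hypothetical), a Replication Controller that maintains 3 nginx replicas might look like:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo            # hypothetical name, not the lab's file
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo
  template:                   # Pod template the controller stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```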

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, the Replication Controller, and the Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
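Once the three objects are created, you can confirm Kubernetes accepted them (a quick sketch; output columns vary by Kubernetes version):

```shell
kubectl get pods        # the standalone Pod plus the replicas created by the RC
kubectl get rc          # the Replication Controller, with desired vs. current counts
kubectl get services    # the frontend Service and the port it exposes
```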

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.
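If you prefer the command line, the same check works from the CLI VM (the port is the External endpoint port you noted; it is left as a placeholder here):

```shell
# Fetch the nginx demo homepage; substitute your own port number
curl http://192.168.100.176:<port>
```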

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides the higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885, stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
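We cannot see the stored history entry from the manual, but a typical Rancher Server start from a local registry looks something like this (a sketch only; the restart policy and image tag are assumptions):

```shell
# Start Rancher Server from the lab's local registry, publishing the UI on 8080
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```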

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
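For comparison, the port mapping you just configured in the UI is equivalent to what the -p flag does in a plain docker run (a sketch only; Rancher layers its own networking and labels on top):

```shell
# Publish container port 80 (nginx default) on host port 2000
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```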

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

Module 1 - What is Photon Platform (15 minutes)

Introduction

This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management, and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform, and what is the architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors

What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers, and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (copy-on-write) technology.

The control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT, which are either not a requirement for cloud native applications or are generally implemented by the application framework itself.

The high-level architecture of the Photon Controller is shown on the next page.

Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not All Are Implemented in the Pre-GA Release)

Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm; transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual Projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox web browser.

Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.

The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for control plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also consists of only a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open. You are now in the PhotonControllerCLI VM.
4. SSH into the PhotonController Management VM: execute ssh esxcloud@192.168.120.10. The password is vmware.
5. Change to the root user: execute su. The password is vmware.
6. Reboot the VM: execute reboot. This should take about 2 minutes to complete.

Control Plane Services

The Photon Platform control plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.

Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual Projects. (You will see the details in a minute.)

Tenants

1. Click on Tenants.

Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.

Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: these VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)

Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual Projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.

Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new Project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new Project if you want, but we will not be using it in the other modules. To do that, click on Projects.

Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
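From the CLI, choosing between the two replication types happens at upload time. A sketch, with a placeholder file name; the flag syntax is from the Photon CLI as we recall it, so check photon image create -h in your build:

```shell
# EAGER: copy to Cloud datastores immediately on upload
photon image create photon-os.ova -n photon-os -i EAGER

# ON_DEMAND: copy only when a placement request needs the image
photon image create photon-os.ova -n photon-os-ondemand -i ON_DEMAND
```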

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the control plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag then becomes part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single Persistent Disk Flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale: abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is, and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors InThe Previous Step

1. SSH into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)

2. You must change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
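Because nearly every later command takes a UUID, it is convenient to capture them in shell variables when scripting. A minimal sketch: the sample line below stands in for real `photon resource-ticket list` output, and its exact column layout is an assumption, so check the real output on your build first.

```shell
# Illustrative sketch: parse a UUID out of list-style output so it can be
# reused in later commands. The sample line is a stand-in, not real output.
sample_line="d8b9cf0e-0000-0000-0000-000000000001  lab-ticket"
ticket_uuid=$(echo "$sample_line" | awk '{print $1}')
echo "$ticket_uuid"
```

The same pattern works for `photon vm list`, `photon disk list`, and the other entity-type list commands used later in this module.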


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
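The relationship between ticket and project limits can be sanity-checked with simple arithmetic. The sketch below uses the numbers from this lab; the check itself is our own helper, not a photon CLI feature (the CLI enforces this on the server side).

```shell
# Hypothetical helper, not a photon CLI feature: verify that the project
# limits requested above fit inside the backing resource ticket.
ticket_mem_gb=200;  ticket_vm_count=1000
project_mem_gb=100; project_vm_count=500
if [ "$project_mem_gb" -le "$ticket_mem_gb" ] && [ "$project_vm_count" -le "$ticket_vm_count" ]; then
  result="project limits fit within the ticket"
else
  result="project limits exceed the ticket"
fi
echo "$result"
```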


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.
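For reference only, a dry-run sketch of what an upload would look like. The file name is a placeholder, and the -i replication-type flag is an assumption about this CLI build; confirm with `photon image create -h` before relying on it, and do not actually upload in this lab.

```shell
# Dry-run sketch of an image upload (do NOT run it in this lab: bandwidth).
# The file name is a placeholder and the -i EAGER/ON_DEMAND flag is an
# assumed option; verify with `photon image create -h` on your build.
image_file="photon-os.ova"
upload_cmd="photon image create $image_file -n PhotonOS -i ON_DEMAND"
echo "$upload_cmd"
```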

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

Note: The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-
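The three flavor-create commands above can be collected into one reviewable script. This sketch echoes each command instead of executing it (remove the echo to run for real against your endpoint); the names and cost keys match those used in this module.

```shell
# Dry-run sketch: print the three flavor-create commands from this lesson.
# Each spec is "name|kind|cost"; remove the leading echo to execute.
for spec in \
  'my-vm|vm|vm.cpu 1 COUNT, vm.memory 1 GB' \
  'my-pers-disk|persistent-disk|persistent-disk 10 COUNT' \
  'my-eph-disk|ephemeral-disk|ephemeral-disk 10 COUNT'
do
  name=${spec%%|*}; rest=${spec#*|}     # split spec on the | separators
  kind=${rest%%|*}; cost=${rest#*|}
  echo photon -n flavor create -n "$name" -k "$kind" -c "$cost"
done
```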

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It is essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
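When scripting this step, the UUIDs are usually captured first and then substituted into the command. A sketch with placeholder UUIDs (in the lab you would fill these in from photon network list and photon image list); echo keeps it a dry run.

```shell
# Sketch: compose the vm create command from variables. The UUID values
# below are placeholders, not real objects in your lab.
network_uuid="<uuid-from-photon-network-list>"
image_uuid="<uuid-from-photon-image-list>"
cmd="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $network_uuid -i $image_uuid"
echo "$cmd"
```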

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is a need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that are mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log in to vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
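The same invocation can be written with each piece held in a variable, which makes the host-to-container mapping explicit. This is a dry-run sketch (the command is echoed, not executed, since it needs the lab's Docker host and local registry); remove the echo to run it for real.

```shell
# Dry-run sketch of the docker invocation, with each piece in a variable
# so the volume mapping is explicit. Remove the echo to execute for real.
host_dir=/mnt/dockervolume        # backed by the Photon persistent disk
container_dir=/volume             # path the container sees
image=192.168.120.20:5000/nginx   # image tagged with the local registry
run_cmd="docker run -v ${host_dir}:${container_dir} -d -p 80:80 $image"
echo "$run_cmd"
```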


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
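The stop/delete sequence above can also be expressed as a loop. This is a dry-run sketch: the UUIDs are placeholders and the commands are echoed, so substitute real values from photon vm list and drop the echo to execute. (Remember that lab-vm2 also needs its disk detached first, as in step 3.)

```shell
# Sketch of the cleanup sequence as a loop. UUID values are placeholders
# and commands are echoed as a dry run; remove the echo to execute.
for vm_uuid in "<uuid-of-lab-vm1>" "<uuid-of-lab-vm2>"; do
  echo photon vm stop "$vm_uuid"
  echo photon vm delete "$vm_uuid"
done
```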


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version, we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Log in to the PhotonControllerCLI VM through Putty.

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
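If both hosts need the restart, the two SSH sessions above can be collapsed into a loop. This sketch only prints the commands; drop the `echo` inside the function to actually run them (each ssh will prompt for the root password).

```shell
# Build (and here only preview) the agent restart command for one ESXi host.
# Drop the `echo` inside restart_agent to actually run it over SSH.
restart_agent() { echo ssh "root@$1" /etc/init.d/photon-controller-agent restart; }

# The two lab ESXi hosts:
for host in 192.168.110.201 192.168.110.202; do
  restart_agent "$host"
done
```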


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to get more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional, platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud-native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster you will get an error, because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents one of the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command-line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.
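Before you look at the real files, it helps to know the general shape of one of them. The lab's actual manifests live in ~/demo-nginx; purely for orientation, a minimal Replication Controller manifest in the v1 API of that era looks roughly like the sketch below (the name, labels and image here are illustrative, not necessarily the lab's exact values):

```yaml
# Illustrative ReplicationController: keep 3 nginx Pods running at all times.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # desired number of Pod copies
  selector:
    app: nginx-demo          # Pods carrying this label are managed
  template:                  # Pod template used to create each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```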

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml
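The introduction mentioned scaling additional instances; with a Replication Controller that is a one-liner. The sketch below only previews the command (the RC name nginx-demo is an assumption based on the app name shown later in the UI; confirm the real name with `kubectl get rc`):

```shell
# Preview a kubectl scale command; drop the `echo` to run it on the CLI VM.
# The RC name `nginx-demo` is assumed -- confirm with: kubectl get rc
scale_rc() { echo kubectl scale rc "$1" --replicas="$2"; }

scale_rc nginx-demo 5
```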


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link.

2 Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2 Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201. The password is vmware.

2 Execute rm -rf /var/lib/rancher/state

3 Execute docker rm -vf rancher-agent

4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host.

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with the Rancher server.

1 Note that the Custom icon is selected.

2 Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1 Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button.

2 Click on Infrastructure and Hosts.

3 This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy:

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
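For reference, the port mapping you configured in the UI corresponds to Docker's -p host:container flag. Run directly on the Rancher host, the equivalent command would be roughly the one sketched below (previewed with echo rather than executed; the image name is the lab's local-registry path):

```shell
# CLI equivalent of the UI container definition: image from the lab's local
# registry, host port 2000 mapped to nginx's container port 80.
# The echo previews the command; remove it to run on the Rancher host.
run_nginx() { echo docker run -d -p "$1:80" 192.168.120.20:5000/nginx; }

run_nginx 2000
```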


Container Information

1 Once your container is running, check out the performance charts.

2 Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1 From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud-native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



IntroductionThis module will introduce you to the new operational model for cloud native apps Youwill walk through the Photon Platform control plane management architecture and willget a guided introduction to image management resource management and multi-tenancy You will use a combination of the Management UI and CLI to become familiarwith Photon Platform For a detailed dive into platform proceed to Module 2 - CloudAdmin Operations

1) What is Photon Platform, and what is its architecture?

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors


What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers, and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly, or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy-On-Write) technology.

The control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The high-level architecture of the Photon Controller is shown in the figure below.


Photon Platform Overview - High Level Architecture (Developer Frameworks represent a roadmap; not all are implemented in the pre-GA release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm; transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic and multi-tenant - and that do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox web browser.


Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for control plane VMs; resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used for both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also consists of only a single node; again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure, and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open - you are now in the PhotonControllerCLI VM.
4. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10. The password is vmware.
5. Change to the root user: execute su. The password is vmware.
6. Reboot the VM: execute reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform control plane runs as a set of Java services deployed in Docker containers that run in a MGMT VM. Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and that 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype; we will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: these VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the control plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you; it is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure-as-a-Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1!

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the cloud native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the control plane, so you must point it to the API endpoint for the control plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10. The password is vmware.
2. Change to the root user: execute su. The password is vmware.
3. Reboot the VM: execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the limit that was set, as well as the actual usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
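The tenant, resource-ticket, and project steps above can be collected into a single sketch for reference. The names (lab-tenant, lab-ticket, lab-project) are the ones used in this module; the exact quoting of the --limits string may vary between photon CLI versions.

```shell
# Recap of the tenancy setup above, wrapped in a function so it can be replayed.
setup_lab_tenancy() {
  photon tenant create lab-tenant
  photon tenant set lab-tenant
  photon resource-ticket create --name lab-ticket \
      --limits "vm.memory 200 GB, vm 1000 COUNT"
  photon project create --resource-ticket lab-ticket --name lab-project \
      --limits "vm.memory 100 GB, vm 500 COUNT"
  photon project set lab-project
}
```

Calling setup_lab_tenancy runs the same five commands you just executed interactively.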


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -n PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster, which you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.
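As a sketch, choosing the replication type at upload time might look like the wrapper below. The -i replication flag is an assumption based on the CLI's conventions - verify with photon image create -h - and remember not to upload images in this lab environment.

```shell
# Hypothetical wrapper around image upload. The -i replication flag is an
# assumption; check `photon image create -h` in your environment.
upload_image() {
  local file="$1" name="$2" replication="${3:-ON_DEMAND}"  # EAGER or ON_DEMAND
  photon image create "$file" -n "$name" -i "$replication"
}
```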

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor, to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT; this could be used as a mechanism for capturing cost as part of a chargeback process.
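For illustration, a tagged persistent-disk flavor might look like the sketch below. The tag key storage.SHARED is hypothetical - it only has meaning if your administrator has tagged datastores with the same key - and the cost-string syntax follows the flavor commands used in this lesson.

```shell
# Hypothetical example of a persistent-disk flavor carrying a datastore tag.
# storage.SHARED is an invented tag key, shown only for illustration.
create_tagged_disk_flavor() {
  photon -n flavor create -n my-shared-disk -k persistent-disk \
      -c "storage.SHARED 1 COUNT, persistent-disk 10 COUNT"
}
```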

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your cloud hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
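Since the command needs two UUIDs, it is convenient to capture them in shell variables first. The awk column positions below assume the tabular photon ... list output has the UUID in the first column and the name in the second; adjust them if your CLI version prints a different layout.

```shell
# Sketch: look up the network and image UUIDs by name, then create the VM.
# The column positions in the list output are an assumption.
create_lab_vm() {
  local name="$1"
  local network_uuid image_uuid
  network_uuid=$(photon network list | awk '$2 == "lab-network" {print $1}')
  image_uuid=$(photon image list | awk '$2 == "PhotonOS" {print $1}')
  photon vm create --name "$name" --flavor my-vm \
      --disks "disk-1 my-eph-disk boot=true" \
      -w "$network_uuid" -i "$image_uuid"
}
```

Called as create_lab_vm lab-vm1 (and later lab-vm2), this reproduces the command above without copy-pasting UUIDs.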

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command, then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently, but that need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines: they can be attached to a VM, and when that VM is destroyed, attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that are mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
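We do not have the script's source, but the steps it performs typically look like the following sketch. The ext4 filesystem choice is an assumption; the device and mount point come from the text. The run wrapper only echoes the commands so the sketch is safe to execute anywhere; drop the prefix on the lab VM to really format the disk (which destroys any existing data on /dev/sdb):

```shell
# `run` only echoes the command here so the sketch is safe to execute anywhere;
# remove the `run` prefix to perform the real formatting. ext4 is an assumption.
run() { echo "$@"; }

run mkfs.ext4 /dev/sdb               # put a filesystem on the persistent disk
run mkdir -p /mnt/dockervolume       # create the mount point
run mount /dev/sdb /mnt/dockervolume # mount the disk where Docker volumes will live
```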

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command connects to the container through an interactive terminal and runs a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit
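The container-ID lookup used in step 1 can also be scripted. A sketch with a stub docker function so it runs without a Docker daemon; the sample container ID, image name and column layout are assumptions:

```shell
# Stub `docker ps` output so the sketch runs without a Docker daemon;
# the ID, image and columns below are assumptions about this lab's setup.
docker() { printf 'a1b2c3d4e5f6  192.168.120.20:5000/nginx  "nginx -g daemon off;"\n'; }

# Pick out the ID of the nginx container and keep a short prefix,
# which is all `docker exec` needs as long as it is unique
cid=$(docker ps | awk '/nginx/ {print $1; exit}')
prefix=$(printf '%s\n' "$cid" | cut -c1-3)
echo "$prefix"
```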

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


As a reminder, you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh would reformat the disk and you would not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
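The whole teardown can also be scripted. A sketch with a stub photon function that just echoes, so it runs outside the lab; the UUID placeholders are assumptions you would replace with values from photon vm list and photon disk list:

```shell
# Stub `photon` that just echoes, so the sketch is runnable anywhere;
# in the lab the real CLI performs the operations.
photon() { echo "photon $*"; }

# Hypothetical placeholders; substitute real UUIDs from `photon vm list` / `photon disk list`
vm2="UUID-of-lab-vm2"; vm1="UUID-of-lab-vm1"; disk="UUID-of-disk-2"

photon vm stop "$vm2"
photon vm detach-disk "$vm2" --disk "$disk"   # detaching does not delete the disk
photon vm delete "$vm2"

photon vm stop "$vm1"                         # lab-vm1 has no persistent disk attached
photon vm delete "$vm1"
```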


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
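Steps 2-5 are the same action on two hosts, so they can be written as one loop. A sketch with a stub ssh function that only echoes, so it is safe to run anywhere; on the lab CLI VM the real ssh (password VMware1) would perform the restarts:

```shell
# Stub `ssh` so the loop is runnable outside the lab; remove it to use real ssh.
ssh() { echo "ssh $*"; }

# Restart the photon-controller-agent on both ESXi hosts
for host in 192.168.110.201 192.168.110.202; do
  ssh "root@$host" /etc/init.d/photon-controller-agent restart
done
```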

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info), and we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration can include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided on the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This gives you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
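As an illustration, a replication-controller definition for this kind of app typically looks like the following minimal sketch. This is not the lab's actual file; the names, labels and replica count are assumptions based on the surrounding text (3 replicas, local-registry nginx image):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # maintain three copies of the pod
  selector:
    app: nginx-demo
  template:                  # pod template the controller stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # image from the lab's local registry
        ports:
        - containerPort: 80
```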


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the external endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:port number. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
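The address is simply the Rancher host's IP plus the host port from your port map. Assembled as a sketch (curl availability on the console is an assumption; in the lab you just use the browser):

```shell
RANCHER_HOST_IP=192.168.100.201  # Rancher Host VM in this lab
HOST_PORT=2000                   # the host port you mapped to nginx's port 80
URL="http://${RANCHER_HOST_IP}:${HOST_PORT}"
echo "$URL"

# With curl you could fetch the page non-interactively:
#   curl -s "$URL" | head
```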


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



What is Photon Platform - How Is It Different From vSphere?

The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code, or just want to try it out, go to vmware.github.com.

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and disk creation are centrally managed, and workload placement is optimized through the use of Linked Clone (Copy-On-Write) technology.

The control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.

Photon Platform is not a replacement for vCenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vCenter and does not implement things like vMotion, HA, and FT - which are either not a requirement for Cloud Native Applications or are generally implemented by the application framework itself.

The high-level architecture of the Photon Controller is as shown on the next page.


Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows desktop, launch a Chrome or Firefox web browser.


Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for control plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also only consists of a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open - you are now in the PhotonControllerCLI VM.


4. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 - the password is vmware.
5. You must change to the root user. Execute: su - the password is vmware.
6. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform control plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster. (You will use this in Module 3.) You can see that a limit has been placed on the memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter the Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the control plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, and then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the cloud-native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the control plane, so you must point it to the API endpoint for the control plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 - the password is vmware.
2. You must change to the root user. Execute: su - the password is vmware.
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to create objects within the Project.
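The bookkeeping in this lesson is simple subtraction - the Resource Ticket caps the Tenant, and each Project draws down the ticket. A sketch with the lab's numbers:

```shell
# lab-ticket limits on lab-tenant:
TICKET_MEM_GB=200
TICKET_VM_COUNT=1000
# lab-project's slice of the ticket:
PROJECT_MEM_GB=100
PROJECT_VM_COUNT=500

# What remains on the ticket for additional projects:
echo "memory left: $((TICKET_MEM_GB - PROJECT_MEM_GB)) GB"
echo "vms left:    $((TICKET_VM_COUNT - PROJECT_VM_COUNT))"
```

This is why the project list output shows both a Limit and a Usage column: the control plane tracks consumption against these caps at every tier.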


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1 To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create filename -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2 To see more detail on a particular image, execute the following command:

photon image show UUID of image

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command

1 To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4 To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1 If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2 To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want to use the PhotonOS image. To get the UUID of the image, execute photon image list.
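To avoid copying UUIDs by hand, you could hold them in shell variables and build the command from them. This is a sketch; the UUID values below are placeholders, not real lab IDs:

```shell
# Capture the UUIDs once (normally from `photon network list` / `photon image list`).
NETWORK_UUID="aaaaaaaa-1111-2222-3333-444444444444"   # placeholder value
IMAGE_UUID="bbbbbbbb-5555-6666-7777-888888888888"     # placeholder value
cmd="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NETWORK_UUID -i $IMAGE_UUID"
echo "$cmd"
```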

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the vm IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
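A helper script like mount-disk-lab-vm1.sh typically just creates a filesystem on the new device and mounts it. The sketch below only echoes the likely commands (the filesystem type is an assumption, and the real script may differ), since formatting /dev/sdb requires root on the lab VM:

```shell
# Hypothetical outline of the helper script's work; commands are printed, not run.
device=/dev/sdb
mountpoint=/mnt/dockervolume
echo "mkfs.ext4 $device"         # create a filesystem (type assumed)
echo "mkdir -p $mountpoint"      # create the mount point
echo "mount $device $mountpoint" # mount the new filesystem
```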

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container in detached mode, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the ip address and port of the registry.
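Docker's -p and -v arguments both use a host:container convention; a small sketch that unpacks them with shell parameter expansion:

```shell
port_map="80:80"                      # host_port:container_port
vol_map="/mnt/dockervolume:/volume"   # host_path:container_path
host_port=${port_map%%:*};  container_port=${port_map##*:}
host_path=${vol_map%%:*};   container_path=${vol_map##*:}
echo "port: host $host_port -> container $container_port"
echo "volume: host $host_path -> container $container_path"
```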


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.
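The "first 3 chars of containerID" can be cut from the ID shown by docker ps; a sketch with a made-up container ID:

```shell
container_id="9f86d081884c"                     # example value; yours will differ
short_id=$(printf '%s' "$container_id" | cut -c1-3)
echo "docker exec -it $short_id bash"
```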

2 To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3 We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type Hands On Lab At VMWORLD 2016

5 Press the esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
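If you prefer a non-interactive edit, the same change can be scripted with sed. This is a sketch against a throwaway minimal copy of the page, not the lab's real file; the vi walkthrough above is the supported path:

```shell
# Make a throwaway copy of a minimal home page, then rewrite the word "Nginx".
printf '<h1>Welcome To Nginx</h1>\n' > /tmp/index.html
sed -i 's/Nginx/the Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```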

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its web content, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container in detached mode, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

Note the UUIDs of the two VMs.

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1
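The whole cleanup sequence can be sketched as echoed commands (the UUIDs below are placeholders; only lab-vm2 still has the persistent disk attached, so only it needs a detach):

```shell
# Placeholder UUIDs; substitute the real ones from `photon vm list` / `photon disk list`.
lab_vm1="uuid-of-lab-vm1"; lab_vm2="uuid-of-lab-vm2"; disk2="uuid-of-disk-2"
cleanup="photon vm stop $lab_vm2
photon vm detach-disk $lab_vm2 --disk $disk2
photon vm delete $lab_vm2
photon vm stop $lab_vm1
photon vm delete $lab_vm1"
printf '%s\n' "$cleanup"
```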


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon Folder. You can see two ESXi Hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two esxi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202
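Steps 2-4 for both hosts can be sketched as a loop; we echo the ssh commands rather than run them, since they require the lab's ESXi hosts and credentials:

```shell
# Print the restart command for each ESXi host in the lab.
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```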

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the esxi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8 GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the Create command. We are going to use that in a LogInsight Query.
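The same "filter by Task ID" idea works with plain grep over a raw log file, if you ever need it outside LogInsight. The log lines below are made-up examples, not the real Photon log format:

```shell
# Write two fake task log lines, then filter by the failing task's ID.
printf '%s\n' \
  'task=abc123 request=r-9 op=RESERVE_RESOURCE error=NotEnoughMemoryResource' \
  'task=def456 request=r-2 op=CREATE_VM status=OK' > /tmp/photon-tasks.log
grep 'abc123' /tmp/photon-tasks.log
```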


Connect To Loginsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every Request through the API gets a requestID. There could be many ReqIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn-transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment, where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1 From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.
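The example's placeholders can be held in shell variables before composing the command. The addresses below are illustrative only, not the lab's real network:

```shell
# Illustrative values; substitute your own network configuration.
DNS="192.0.2.10"; GATEWAY="192.0.2.1"; NETMASK="255.255.255.0"
MASTER_IP="192.0.2.20"; ETCD1="192.0.2.21"
CONTAINER_NET="10.2.0.0/16"; NETWORK_UUID="uuid-of-demo-network"
cmd="photon cluster create -n Kube5 -k KUBERNETES --dns $DNS --gateway $GATEWAY --netmask $NETMASK --master-ip $MASTER_IP --container-network $CONTAINER_NET --etcd1 $ETCD1 -w $NETWORK_UUID -s 5"
echo "$cmd"
```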

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes, or of Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes Repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
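The three files follow the standard Kubernetes v1 manifest structure. As a rough illustration only (the lab's actual files will differ in names, labels, and images), a minimal pod manifest for an nginx container might look like:

```yaml
# Hypothetical minimal pod manifest; NOT the lab's actual nginx-pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

The service and replication controller manifests reference the pod's labels in their selectors, which is how Kubernetes ties the three objects together.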


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
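For context, the history command above will be some form of docker run against the local registry. A hypothetical example only: the image path, tag, and options below are assumptions, not the lab's actual command.

```shell
# Hypothetical re-creation of the Rancher Server container.
# The image name and port mapping are assumptions; the lab's real
# command is whatever is stored in history entry 885.
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```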


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
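The port mapping configured in step 4 is the same idea as Docker's -p flag. If you were starting this container by hand instead of through the Rancher UI, the equivalent might look like the sketch below (the registry-prefixed image name is taken from step 2; running it by hand is not part of the lab):

```shell
# Hypothetical docker-run equivalent of the Rancher port map above:
# host port 2000 -> container port 80 (nginx's default listen port).
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```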


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; not all are implemented in the Pre-GA Release)


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows Desktop, launch a Chrome or Firefox Web Browser.


Photon Controller Management UI

1. Select the Photon Controller Management Bookmark from the Toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a Production Cloud you would tend to separate them. Our Management Plane also only consists of a single node; again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty Icon.
2. Select the PhotonControllerCLI connection.
3. Click Open. You are now in the PhotonControllerCLI VM.


4. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.

5. You must change to the root user. Execute su. The password is vmware.
6. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket Name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
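The replication type is chosen when the image is uploaded. As a sketch only (the flag syntax is an assumption about the photon CLI, and the file and image names are illustrative), this might look like:

```shell
# Upload an OVA as an ON_DEMAND image (copied to a datastore only when
# a placement request needs it); image name and flags are assumptions.
photon image create photon-os.ova -n photon-os -i ON_DEMAND

# EAGER would copy the image to all Cloud datastores at upload time.
photon image create photon-os.ova -n photon-os-eager -i EAGER
```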

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
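A flavor's cost is expressed as a set of key/value quotas. A hypothetical example of creating a tagged ephemeral disk flavor is sketched below; the command syntax is an assumption about the photon CLI, and the flavor name and tag key are illustrative, not taken from this lab.

```shell
# Sketch: create an ephemeral disk flavor whose cost includes a
# datastore tag, so disk placement is constrained to tagged datastores.
photon flavor create --name "tagged-eph-disk" --kind "ephemeral-disk" \
  --cost "ephemeral-disk 1 COUNT, storage.SHARED_VMFS 1 COUNT"
```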


Persistent Disk Flavors

1. Click on Persistent Disks.

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 (password is vmware).
2. You must change to the root user. Execute: su (password is vmware).
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types like vm, image, resource-ticket, cluster, flavor, etc.
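Because every object is referenced by its UUID, it is handy to capture UUIDs from the list commands in a script rather than copy them by hand. The sketch below only demonstrates the text extraction; the sample line and its UUID are invented, and real photon list output columns may vary.

```shell
# Hypothetical sample of one line of "photon resource-ticket list" output.
# Photon list commands print the entity UUID in the first column.
sample_output='907a2ee0-2c79-4f1b-8b0a-5e3f2d1c4a9b  lab-ticket  vm.memory 200 GB'

# Take the first whitespace-delimited field of the first matching line.
uuid=$(printf '%s\n' "$sample_output" | awk '/lab-ticket/ {print $1; exit}')
echo "$uuid"
```

On the lab CLI VM the same awk filter can be piped onto the real command, e.g. `photon resource-ticket list | awk '/lab-ticket/ {print $1; exit}'`.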


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.
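The whole tenant, ticket, and project sequence above can be collected into one script. This is a dry-run sketch: the photon() stub just echoes each command so the flow can be read and tested without a live Photon Controller endpoint; drop the stub on the lab CLI VM to run it for real.

```shell
# Dry-run stub: echo each photon command instead of calling the real CLI.
photon() { echo "photon $*"; }

setup_log=$(
  photon tenant create lab-tenant
  photon tenant set lab-tenant
  photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"
  photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
  photon project set lab-project
)
printf '%s\n' "$setup_log"
```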


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

(The UUID of the image is in the photon image list command results.)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment, there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
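The UUID lookups and the create command can be chained in a script. In this sketch, photon is stubbed with canned list output (the UUIDs and columns are invented); on the lab CLI VM you would remove the stub and run against the real control plane.

```shell
# Dry-run stub: canned list output for the two lookups, echo for the rest.
photon() {
  case "$1 $2" in
    "network list") echo "aaaa1111-net  lab-network  READY" ;;
    "image list")   echo "bbbb2222-img  PhotonOS  READY" ;;
    *)              echo "photon $*" ;;
  esac
}

# Capture the UUIDs from the first column of the matching lines.
net_uuid=$(photon network list | awk '/lab-network/ {print $1; exit}')
img_uuid=$(photon image list | awk '/PhotonOS/ {print $1; exit}')

photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" -w "$net_uuid" -i "$img_uuid"
```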

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to be powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM and, when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>
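The list and attach-disk steps can be chained so the UUIDs never have to be copied by hand. As before, this is a dry-run sketch: photon is stubbed with canned list output and invented UUIDs.

```shell
# Dry-run stub: canned output for the two list commands, echo otherwise.
photon() {
  case "$1 $2" in
    "vm list")   echo "cccc3333-vm  lab-vm1  STOPPED" ;;
    "disk list") echo "dddd4444-dk  disk-2  DETACHED" ;;
    *)           echo "photon $*" ;;
  esac
}

# Pull each UUID from the first column of the matching line.
vm_uuid=$(photon vm list | awk '/lab-vm1/ {print $1; exit}')
disk_uuid=$(photon disk list | awk '/disk-2/ {print $1; exit}')

cmd=$(photon vm attach-disk "$vm_uuid" --disk "$disk_uuid")
echo "$cmd"
```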


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the nginx web server on port 80 from your browser. Lastly, 192.168.120.20:5000/nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
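The flag breakdown above can be made explicit by pulling the registry, volume, and port mapping out into variables. docker is stubbed to echo here, so this is a dry run that just shows how the IP:port/image form is assembled.

```shell
# Dry-run stub: echo the docker command instead of running a container.
docker() { echo "docker $*"; }

registry="192.168.120.20:5000"    # local registry as IP:port
volume="/mnt/dockervolume:/volume" # host-path:container-path
ports="80:80"                      # host-port:container-port

cmd=$(docker run -v "$volume" -d -p "$ports" "$registry/nginx")
echo "$cmd"
```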


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the nginx homepage.

Modify Nginx Home Page

We will copy the nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
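If you would rather not use vi, the same one-word change can be made non-interactively with sed. The sketch below works on a throwaway copy of a minimal index.html (the heading line is invented) rather than the real /mnt/dockervolume file.

```shell
# Make a throwaway stand-in for the nginx home page.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<h1>Welcome to nginx!</h1>
EOF

# Same edit the vi cw command made: swap the greeting text in place.
sed -i 's/Welcome to nginx/Welcome to Hands On Lab At VMWORLD 2016/' "$tmp"
result=$(cat "$tmp")
printf '%s\n' "$result"
rm -f "$tmp"
```

On lab-vm1 the equivalent would be the same sed invocation pointed at /mnt/dockervolume/index.html (note that GNU sed's in-place -i flag is assumed).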


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps, and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
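The cleanup steps above can be sketched as one dry-run script. photon is stubbed to echo, and the UUIDs are invented placeholders for the values photon vm list and photon disk list would return; only lab-vm2 needs the detach, since the disk was already detached from lab-vm1.

```shell
# Dry-run stub: echo each photon command instead of calling the real CLI.
photon() { echo "photon $*"; }

vm1="ffff6666-vm1"   # placeholder UUID of lab-vm1
vm2="eeee5555-vm2"   # placeholder UUID of lab-vm2
disk="dddd4444-dk"   # placeholder UUID of the persistent disk

cleanup_log=$(
  photon vm stop "$vm2"
  photon vm detach-disk "$vm2" --disk "$disk"
  photon vm delete "$vm2"
  # lab-vm1 no longer holds the disk, so stop and delete suffice.
  photon vm stop "$vm1"
  photon vm delete "$vm1"
)
printf '%s\n' "$cleanup_log"
```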


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version, we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
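The per-host restart can also be expressed as a loop over both hosts. ssh is stubbed to echo here, so this is a dry run; on the real CLI VM you would remove the stub (and be prompted for the password on each host).

```shell
# Dry-run stub: echo each ssh invocation instead of connecting.
ssh() { echo "ssh $*"; }

restart_log=$(
  for host in 192.168.110.201 192.168.110.202; do
    ssh "root@$host" /etc/init.d/photon-controller-agent restart
  done
)
printf '%s\n' "$restart_log"
```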


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case, we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards
2. Click on Home
3. Click on New


Add A Panel

1. Select the green tab
2. Add Panel
3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.
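LogInsight indexes fields like the RequestID across every host; the underlying idea is simply collecting all log lines that share an ID. A rough stand-alone illustration of the same idea with grep (the log lines below are invented for illustration, not real Photon Controller output):

```shell
# Filter a log by request ID: every line for one API request, across
# components, shares the same ID. (Synthetic log; the format is made up.)
cat > /tmp/photon-sample.log <<'EOF'
10:01:02 [Req: 9d4c] CreateVm task started
10:01:03 [Req: 7a1b] CloudStore query for task state: OK
10:01:04 [Req: 9d4c] RESERVE_RESOURCE failed: insufficient memory
10:01:05 [Req: 9d4c] CreateVm task errored
EOF
grep '9d4c' /tmp/photon-sample.log   # only the lines for that request remain
```

The unrelated CloudStore line drops out, leaving just the failing request's history — which is what the LogInsight filter does for you at scale.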

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
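The lab's actual manifests will differ, but as a rough sketch (the names, labels, and image below are hypothetical), a minimal ReplicationController definition that keeps three nginx replicas running might look like this:

```shell
# Sketch only: a minimal ReplicationController manifest in the spirit of
# the lab's nginx-rc.yaml (names/labels are hypothetical).
cat > /tmp/nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # desired number of pod copies
  selector:
    app: nginx-demo      # pods matching this label are managed
  template:              # pod template used to create new replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
grep -c 'nginx-demo' /tmp/nginx-rc.yaml   # → 3 (metadata, selector, template labels)
```

The selector ties the controller to its Pods by label; running kubectl create -f on a file like this is what the deployment step below performs.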

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see that the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
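As an aside, the registry is encoded in the image reference itself: everything before the first slash is the registry host:port. A quick illustration using only shell string handling (runnable anywhere):

```shell
# Split a registry-qualified image reference into its parts.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # strip everything from the first '/' on
name="${image#*/}"        # strip everything up to the first '/'
echo "registry=$registry name=$name"
# → registry=192.168.120.20:5000 name=nginx
```

This is why pulls of this image go to the lab's local registry rather than Docker Hub.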

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin available for Photon Controller. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


Cloud Administration - Multi-Tenancy and Resource Management

Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours or even minutes are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure, and resource allocation models that are dynamic and multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual Projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform.

Connect To Photon Platform Management UI

1. From the Windows desktop, launch a Chrome or Firefox web browser.

Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.

The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for control plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also consists of only a single node; again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open. You are now in the PhotonControllerCLI VM.

4. ssh into the PhotonController Management VM. Execute ssh esxcloud@192.168.120.10. Password is vmware.

5. You must change to the root user. Execute su. Password is vmware.
6. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.

Control Plane Services

The Photon Platform control plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the CloudStore service running in each VM to provide availability.

1. Click on Cloud.

Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or storage, but we have created a Resource Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual Projects. (You will see the details in a minute.)

Tenants

1. Click on Tenants.

Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster (you will use this in Module 3). You can see that a limit has been placed on the memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.

Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: these VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket

2. Click on the + sign

3. Enter the Resource Ticket Name (no spaces in the name)

4. Enter numeric values for each field

5. Click OK

6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel

Flavors

1. Click on the gear again, and then click Flavors

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1!

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, Storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, Storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon

2. Select the PhotonControllerCLI connection

3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)

2. You must change to the root user. Execute: su (the password is vmware)

3. Reboot the VM. Execute: reboot (this should take about 2 minutes to complete)

4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited, because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
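Because nearly every later command takes a UUID argument, it can be convenient to capture UUIDs in shell variables instead of copying them by hand. The sketch below parses a hypothetical `photon vm list`-style listing with awk; the sample text and its column layout are assumptions, since the real CLI output format may differ.

```shell
# Hypothetical sample of `photon vm list`-style output (real layout may differ).
cat > /tmp/vm-list.txt <<'EOF'
f0a1b2c3-d4e5-4607-9829-3a4b5c6d7e8f  lab-vm1  STOPPED
0a1b2c3d-4e5f-4671-8293-a4b5c6d7e8f9  lab-vm2  STOPPED
EOF

# Capture the UUID for a named VM so later commands can reuse it,
# e.g. photon vm start "$VM_UUID".
VM_UUID=$(awk '$2 == "lab-vm1" {print $1}' /tmp/vm-list.txt)
echo "$VM_UUID"
```

The same pattern works for any entity type: swap the sample input for the output of the corresponding list command.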


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set, and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
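Once you have the UUIDs, the whole create command can be assembled in a script. This is only a sketch: the UUID values below are placeholders (in the lab they would come from photon network list and photon image list), and the command is printed with echo rather than executed.

```shell
# Placeholder UUIDs; in the lab, capture the real ones from
# `photon network list` and `photon image list` instead.
NETWORK_UUID="11111111-2222-3333-4444-555555555555"
IMAGE_UUID="66666666-7777-8888-9999-aaaaaaaaaaaa"

# Assemble and print the full create command (echo keeps it from running here).
echo photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" \
  -w "$NETWORK_UUID" -i "$IMAGE_UUID"
```

Keeping the UUIDs in variables makes it easy to repeat the command for additional VMs without re-pasting long identifiers.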

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is a need for ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "<UUID of lab-vm1>" --disk "<UUID of disk>"


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
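For reference, a first-time format-and-mount script like this one typically performs steps similar to the sketch below. This is an assumption about what such a script does, not its actual source, and the commands are only printed here, since they require the attached /dev/sdb device inside the lab VM.

```shell
# Hedged sketch of a first-time format-and-mount sequence (printed, not run).
DEVICE=/dev/sdb
MOUNT_POINT=/mnt/dockervolume

echo mkfs.ext4 "$DEVICE"              # format the new persistent disk
echo mkdir -p "$MOUNT_POINT"          # create the mount point
echo mount "$DEVICE" "$MOUNT_POINT"   # mount it where Docker will bind it
```

The second-host script used later would skip the format step, since reformatting would destroy the content already on the disk.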

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
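To keep the flags straight, here is the same invocation with each argument held in an annotated variable. It is printed rather than executed, since the image comes from the lab's local registry at 192.168.120.20:5000, which only exists inside the lab.

```shell
HOST_DIR=/mnt/dockervolume          # mount point of the persistent disk on the host VM
VOLUME_MAP="$HOST_DIR:/volume"      # host directory appears in the container as /volume
PORT_MAP="80:80"                    # publish container port 80 on host port 80
IMAGE="192.168.120.20:5000/nginx"   # nginx image tagged for the lab's local registry

# -d keeps the container running until explicitly stopped; echo avoids running it here.
echo docker run -v "$VOLUME_MAP" -d -p "$PORT_MAP" "$IMAGE"
```

Any file written under /volume inside the container lands on the persistent disk, which is what lets the content survive a move to another Docker host.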


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.
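The containerID lookup can also be scripted. The sketch below parses a hypothetical `docker ps`-style listing with awk; the sample text is an assumption, since the real output columns may vary slightly.

```shell
# Hypothetical `docker ps`-style output (real columns may vary).
cat > /tmp/docker-ps.txt <<'EOF'
CONTAINER ID  IMAGE                      STATUS
3f2a1b9c0d4e  192.168.120.20:5000/nginx  Up 2 minutes
EOF

# Grab the container ID for the nginx image; in the lab you could then run:
#   docker exec -it "$CID" bash
CID=$(awk '$2 ~ /nginx/ {print $1}' /tmp/docker-ps.txt)
echo "$CID"
```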

2. To see the filesystem inside the container, and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx

3. Press the right arrow until you are at the character N in Nginx

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5. Press the Esc key, and then the : key

6. At the prompt, enter wq to save changes and exit vi


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
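If you prefer a non-interactive edit, the same change can be made with sed. The sketch below works on a local stand-in copy and assumes the default nginx index page contains the string "Welcome to nginx!"; in the lab, the target file is /mnt/dockervolume/index.html on lab-vm1.

```shell
# Stand-in for the default nginx index page (assumed content).
cat > /tmp/index.html <<'EOF'
<h1>Welcome to nginx!</h1>
EOF

# Replace the heading text in place, as the vi steps do interactively.
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```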

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


A reminder that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "<UUID of lab-vm2>" --disk "<UUID of disk>"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v mntdockervolumeusrsharenginxhtml -d -p 8080 192168120205000nginx

To return to the Photon CLI type exit

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage at the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, first execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
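The cleanup steps above can also be scripted. In this sketch the photon CLI is stubbed with a shell function so the command sequence is visible outside the lab; remove the stub and substitute your real UUIDs to run it for real.

```shell
# Stub the photon CLI so the sequence can be shown outside the lab.
photon() { echo "photon $*"; }

VM2_ID=22222222-0000-0000-0000-000000000002   # example UUID; yours will differ
DISK_ID=aaaaaaaa-0000-0000-0000-00000000000a  # example UUID
VM1_ID=11111111-0000-0000-0000-000000000001   # example UUID

photon vm stop "$VM2_ID"
photon vm detach-disk "$VM2_ID" --disk "$DISK_ID"
photon vm delete "$VM2_ID"

photon vm stop "$VM1_ID"
photon vm delete "$VM1_ID"
```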


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.
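As an aside, the same metric tree can be read programmatically through Graphite's standard render API, which is handy for scripted checks. The server address and metric path below are illustrative assumptions based on the hierarchy shown in the UI.

```shell
# Build a Graphite render-API query for the last hour of CPU usage, as JSON.
# Server address and metric path are illustrative assumptions.
GRAPHITE="http://graphite.corp.local"
TARGET="photon.esxi-host-1.cpu.usage"
URL="$GRAPHITE/render?target=$TARGET&from=-1h&format=json"
echo "$URL"
# curl -s "$URL"   # would return JSON datapoints once stats are flowing
```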

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
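The restart above can also be done in one loop from the CLI VM. In this sketch ssh is stubbed so the sequence is visible; drop the stub to actually connect.

```shell
ssh() { echo "ssh $*"; }   # stub for illustration; remove to actually connect

# Restart the photon controller agent on both ESXi hosts.
for HOST in 192.168.110.201 192.168.110.202; do
  ssh "root@$HOST" /etc/init.d/photon-controller-agent restart
done
```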


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
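The lab's actual files may differ in detail, but a minimal ReplicationController for this app (Kubernetes v1 API of that era) looks roughly like this. The label names are assumptions; the replica count, image, and port come from the lab text.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                  # the 3 replicated copies the lab describes
  selector:
    app: nginx-demo            # label name is an assumption
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # the lab's local registry image
        ports:
        - containerPort: 80
```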


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
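Once created, the objects can be inspected and scaled with kubectl. A sketch of those commands follows; kubectl is stubbed with a shell function so the invocations can be shown outside the lab VM, and the resource and pod names are illustrative.

```shell
kubectl() { echo "kubectl $*"; }   # stub; remove to run against the cluster

kubectl get pods                          # list the running replicas
kubectl get rc nginx-demo                 # replication controller status
kubectl scale rc nginx-demo --replicas=4  # grow the replica count
kubectl delete pod nginx-demo-x1y2z       # example pod name; the RC replaces it
```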


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <Container ID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-v or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-v, then hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
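For reference, the UI form fields above correspond to a plain docker run on the host. A rough equivalent is sketched below; docker is stubbed so the command can be shown outside the lab, and the container name is an assumption.

```shell
docker() { echo "docker $*"; }   # stub for illustration; remove to run for real

# Detached container named nginx-demo, image from the local registry,
# host port 2000 mapped to container port 80 (the values entered in the UI).
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```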


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version 20161024-114606


                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 14: Lab Overview - HOL-1730-USE-2

Photon Controller Management UI

1. Select the Photon Controller Management bookmark from the toolbar, or enter http://192.168.120.10 in the browser.


The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs; resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also consists of only a single node. Again, in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management.

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the Zoom to 75%. Click on the 3-bar icon in the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open - You are now in the PhotonControllerCLI VM


4. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)

5. Change to the root user. Execute: su (the password is vmware)
6. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers running in a MGMT VM. Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype; we will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter a Resource Ticket name (no spaces in the name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing the theme from the previous lesson, cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
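As a sketch of the Storage Profile use case, a disk flavor can carry a datastore tag in its cost list. The tag name storage.SSD below is a hypothetical example, not a tag that exists in this lab; check photon flavor create -h for the exact cost syntax.

```shell
# Hypothetical: an ephemeral disk flavor that adds a datastore-tag constraint.
# Only datastores tagged storage.SSD would satisfy placement for disks
# created with this flavor.
photon flavor create -n fast-eph-disk -k ephemeral-disk \
  -c "ephemeral-disk 1.0 COUNT, storage.SSD 1.0 COUNT"
```

When a disk is created with this flavor, the tag becomes one of the scheduling constraints described above.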


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single Persistent Disk flavor for you; it is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code.

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of the base images used for VM and Disk creation, and you will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)

2. Change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets, and allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware; we have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
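Because most commands take a UUID rather than a name, it is convenient to capture one into a shell variable. This is a sketch that assumes the UUID is the first column of the list output; verify the column layout against your CLI version.

```shell
# Capture the UUID of the lab-ticket Resource Ticket for use in later commands.
TICKET_ID=$(photon resource-ticket list | awk '/lab-ticket/ {print $1}')
echo "$TICKET_ID"
```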


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant: notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation, and you will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.
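If you were uploading your own image (do not do this in the lab, because of the bandwidth constraints noted above), the replication type would be chosen at upload time. The -i flag here is our assumption of the syntax; confirm with photon image create -h.

```shell
# Hypothetical upload: mark the image EAGER so clones are fast,
# at the cost of a copy on every CLOUD-tagged datastore.
photon image create photon-custom.ova -n my-photon -i EAGER
```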

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case we have simply added a COUNT, which could be used as a mechanism for capturing cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It is essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.
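Since -p is a whitelist, a network that spans several portgroups simply lists them all. The portgroup names below are hypothetical:

```shell
# Hypothetical: let the scheduler place VMs on either of two portgroups.
photon network create -n prod-network -p "PG-App, PG-Web" \
  -d "Production VM placement networks"
```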


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for Chargeback, or been tagged with a storage profile (the tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement). boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
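The lookups and the create can be scripted together so you never copy UUIDs by hand. A sketch, assuming the UUID is the first column of the photon network list and photon image list output:

```shell
# Look up the UUIDs, then create the VM in one pass.
NETWORK_ID=$(photon network list | awk '/lab-network/ {print $1}')
IMAGE_ID=$(photon image list | awk '/PhotonOS/ {print $1}')
photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" \
  -w "$NETWORK_ID" -i "$IMAGE_ID"
```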

Create a Second VM

This VM will be used later in the lab, but it is very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command, then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual virtual machines: they can be attached to a VM, and when that VM is destroyed, attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will instead be presented to the containers through Docker volumes mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the vm IP for lab-vm1 Execute

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there

Connect to lab-vm1

1 From the CLI execute

ssh root@IP of lab-vm1 (password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created
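
The script's contents aren't shown in the lab manual, but a format-and-mount script of this kind is typically equivalent to the following sketch. The filesystem type and exact steps are assumptions; DRYRUN=echo prints the commands rather than running them, since they require root and the lab VM's attached disk.

```shell
# Hypothetical sketch of what mount-disk-lab-vm1.sh likely does.
# DRYRUN=echo prints each command; these need root on the lab VM to run.
DRYRUN=echo
DEVICE=/dev/sdb
MOUNTPOINT=/mnt/dockervolume

$DRYRUN mkfs -t ext4 "$DEVICE"        # format the attached persistent disk
$DRYRUN mkdir -p "$MOUNTPOINT"        # create the mount point
$DRYRUN mount "$DEVICE" "$MOUNTPOINT" # mount the filesystem
```

Note that reformatting is exactly why you must not rerun this script on a disk that already holds your content, as the lab warns later.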

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it

2 To see the filesystem inside the container and verify your Docker volume (/volume) Execute

df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press the right arrow until you are at the character N in Nginx

4 Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : key

6 At the prompt enter wq to save changes and exit vi

7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command
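
If you would rather not copy and paste UUIDs by hand, you can capture them into shell variables. The sketch below is illustrative only: the sample text stands in for real photon vm list output, and the assumption that the UUID is the first whitespace-separated column with the VM name on the same line may not match the CLI's actual column layout.

```shell
# Hypothetical helper: pull a UUID out of list output by matching the VM name.
# The sample below stands in for real `photon vm list` output; the actual
# column layout is an assumption for illustration.
sample='c1a2b3d4-e5f6-7788-99aa-bbccddeeff00  lab-vm1  STOPPED
11223344-5566-7788-99aa-bbccddeeff11  lab-vm2  STOPPED'

vm1_uuid=$(printf '%s\n' "$sample" | awk '$2 == "lab-vm1" {print $1}')
echo "$vm1_uuid"   # the UUID you would pass to photon vm detach-disk
```

Inside the lab you would pipe the real command's output through the same awk filter.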

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID lab-vm2

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there

3 From the CLI execute

ssh root@IP of lab-vm2 (password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time

1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent

1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics

Add Metrics To Panel

1 Select Select Metrics and select photon

2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this

This is a pretty straightforward way to monitor performance of Photon Platform resources

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs

2 Note the Task ID from the Create command. We are going to use that in a LogInsight query

Connect To LogInsight

1 From Your browser select the LogInsight Bookmark from the toolbar and loginas User admin password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request

Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this

1 From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster

1 To see the command syntax Execute

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done

1 Let's take a look at the VMs that make up our cluster Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application

To look at these configuration files

1 Execute

cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
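
If you just want a feel for the shape of these files without the lab VM, here is a minimal ReplicationController manifest in the same spirit, written via a shell heredoc. This is an illustrative sketch: field values such as the image name mirror this lab, but the lab's actual nginx-rc.yaml may differ.

```shell
# Write a minimal ReplicationController manifest sketch (assumed shape;
# the lab's real nginx-rc.yaml may differ) and show its replica count.
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF
grep 'replicas' /tmp/nginx-rc-sketch.yaml
```

The replicas field is what the Replication Controller enforces: kill a Pod and a new one is started to keep the count at 3.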

Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested

We can connect to the application directly through the Node IP and the port number we saw earlier

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our nginx webserver homepage. It's just a simple dump of the application configuration info

1 From your browser, connect to http://192.168.100.176:port-number. Note that your port number may be different from the lab manual port number; the IP will be the same

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link

2 Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container

1 Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the RancherServer container. That is the one we want to remove

2 Execute docker kill ContainerID. This will remove the existing RancherServer container

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent

1 Execute ssh root@192.168.100.201 (the password is vmware)

2 Execute rm -rf /var/lib/rancher/state

3 Execute docker rm -vf rancher-agent

4 Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server

1 Note that the Custom icon is selected

2 Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host

1 Execute: either right click the mouse or press Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button

2 Click on Infrastructure and Hosts

3 This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4 We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page
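
Under the covers, the container the Rancher UI creates here is roughly equivalent to a docker run with a host-to-container port mapping. This is a hedged sketch, not what Rancher literally executes; DRYRUN=echo prints the command rather than running it, since the image lives only in the lab's registry.

```shell
# Sketch: the CLI equivalent of the Rancher UI container settings above.
# DRYRUN=echo prints the command instead of running it.
DRYRUN=echo
$DRYRUN docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx
# -p 2000:80 maps host port 2000 to the container's port 80 (Nginx's default)
```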

Container Information

1 Once your container is running, check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped

1 From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not All Are Implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The Index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

The Control Plane Resources

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple lab deployment we have 2 ESXi hosts and 1 datastore, and we have designated that all of the resources can be used as both Management and Cloud. In a production cloud you would tend to separate them. Our management plane also consists of only a single node; in a production cloud you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

1. Click on Management

Note 1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step.

Note 2: If the browser does not show the management panel on the left, change the zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom control.

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

From the Windows Desktop

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open - you are now in the PhotonControllerCLI VM

4. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10 (password: vmware)

5. Change to the root user: execute su (password: vmware)
6. Reboot the VM: execute reboot. This should take about 2 minutes to complete


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers that run in a MGMT VM. Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.
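If you are curious, you can see these service containers from inside the management VM. A sketch (the exact container names vary by release and are not listed in this lab):

```shell
# On the management VM (reachable as shown in the recovery step above),
# the control-plane services appear as Docker containers. To list them:
cmd="docker ps --format '{{.Names}}'"
echo "$cmd"
# Expect one container per service (API front end, Cloud_Store, etc.).
```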

1. Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype; we will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter a Resource Ticket name (no spaces in the name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
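A sketch of the upload workflow just described (do not run it in this bandwidth-constrained lab; the exact replication flag name is an assumption, so check photon image create -h on your CLI version):

```shell
# EAGER: copy to all CLOUD datastores at upload time (fast clones, more storage).
# ON_DEMAND: copy to the target datastore at first placement (slower create, less storage).
# Approximate commands:
#   photon image create my-vm.ova -n MyImage -i EAGER
#   photon image create my-vm.ova -n MyImage -i ON_DEMAND
replication="ON_DEMAND"
echo "photon image create my-vm.ova -n MyImage -i ${replication}"
```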

1. Click on the gear in the upper right of the screen and then Images

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image is the one that was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel

Flavors

1. Click on the gear again and then click Flavors

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles: datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag then becomes part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
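To make the storage-profile idea concrete, here is a hypothetical flavor whose extra cost key stands in for a datastore tag. The key name storage.SHARED is invented for illustration and is not defined in this lab:

```shell
# A disk created with this flavor would carry the storage.SHARED constraint,
# steering placement toward datastores tagged SHARED (hypothetical tag):
costs="ephemeral-disk 1.0 COUNT, storage.SHARED 1.0 COUNT"
echo "photon -n flavor create -n tagged-eph-disk -k ephemeral-disk -c \"${costs}\""
```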


Persistent Disk Flavors

1. Click on Persistent Disks

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)

Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources; create images, flavors, VMs, and networks; and work with persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM: execute ssh esxcloud@192.168.120.10 (password: vmware)

2. Change to the root user: execute su (password: vmware)
3. Reboot the VM: execute reboot. This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware; we have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
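Since most later commands take UUIDs, it can be handy to capture them in a script. A minimal sketch (the column layout of the list output is an assumption; adjust the awk field for your CLI version):

```shell
# Simulated first column of a "photon image list" row; in the lab you would
# pipe the real command:  photon image list | awk '/PhotonOS/ {print $1; exit}'
list_row="aaaabbbb-1111-2222-3333-ccccddddeeee  PhotonOS  READY"
uuid=$(printf '%s\n' "$list_row" | awk '/PhotonOS/ {print $1; exit}')
echo "photon image show ${uuid}"
```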


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the limit that was set and the actual usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default Photon Controller will discover the available networks on your Cloud Hostsand choose one of them for VM placement To limit the scope of this discovery you cancreate a network object and reference it when creating a vm or cluster This networkobject is also the basis for creating logical networks with NSX That functionality will beavailable shortly after VMworld 2016 In our lab environment there is only onePortgroup available so you wouldnt actually need to specify a network in your VMcreate command but we are going to use it to show the functionality We have alreadycreated this network for you

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the network we have created, execute:

photon network list

Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
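Pulling those pieces together, the sketch below shows one way to script the lookup-then-create flow. The UUID values here are stand-ins invented for illustration; in the lab you would fill them in from photon network list and photon image list.

```shell
# Sketch with stand-in UUIDs (in the lab, capture these from
# `photon network list` and `photon image list` instead).
NETWORK_UUID="11111111-aaaa"
IMAGE_UUID="22222222-bbbb"

# Assemble the full create command; echoing it first lets you
# review it before actually running it.
CMD="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NETWORK_UUID -i $IMAGE_UUID"
echo "$CMD"
```

Echo-before-run is a simple safety habit when composing CLI commands from variables.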

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
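If you script this lab, the UUID can be captured instead of copied by hand. The sketch below runs the extraction against a hypothetical sample of photon vm list output (the UUIDs shown are invented); in the lab you would pipe the real command into the same awk filter.

```shell
# Hypothetical sample of `photon vm list` output (UUID, name, state).
sample="11111111-aaaa  lab-vm1  STOPPED
22222222-bbbb  lab-vm2  STOPPED"

# Filter: print the first column of the row whose second column
# (the VM name) matches.
printf '%s\n' "$sample" | awk '$2 == "lab-vm1" {print $1}'
```

Against the real CLI this becomes `photon vm list | awk '$2 == "lab-vm1" {print $1}'`, assuming the default tabular output keeps the UUID in the first column.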

Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>

Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB sets the capacity of the disk to 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously

1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>

Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice in the disk information that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the Down Arrow until you get to line 14, with Welcome To Nginx.

3. Press the Right Arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
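If you prefer not to use vi, the same kind of edit can be made non-interactively with sed. This sketch operates on a stand-in temp file holding a one-line sample of the page; in the lab the real path would be /mnt/dockervolume/index.html.

```shell
# Stand-in for the real file (/mnt/dockervolume/index.html in the lab).
tmp=$(mktemp)
echo '<h1>Welcome to nginx!</h1>' > "$tmp"

# Replace the word in place, as the vi steps above do.
sed -i 's/nginx!/the Hands On Lab At VMWORLD 2016!/' "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Note that `sed -i` with no backup suffix is GNU sed syntax, which matches the Linux VMs used in this lab.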

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following steps only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
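Steps 2-5 could also be scripted. Since each ssh session will prompt for the password, the sketch below just generates the per-host command lines for the lab's two ESXi hosts rather than running them.

```shell
# Generate the restart command for each ESXi host in the lab.
cmds=$(for host in 192.168.110.201 192.168.110.202; do
  printf 'ssh root@%s /etc/init.d/photon-controller-agent restart\n' "$host"
done)
echo "$cmds"
```

Copy-pasting the two generated lines is equivalent to performing steps 2-4 on each host in turn.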

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon

2. Select "select metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform; we will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
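As a rough illustration only (this is not the content of the lab's files), a minimal replication controller definition for three nginx replicas has this shape; the nginx-demo name and labels here are invented for the sketch:

```yaml
# Hypothetical sketch, not the lab's actual nginx-rc.yaml.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3          # desired number of Pod copies
  selector:
    app: nginx-demo    # Pods managed by this controller
  template:            # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The controller continuously compares the number of running Pods matching the selector against replicas, and creates or deletes Pods to close the gap; that is the mechanism behind the restart-on-failure behavior you will see later.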

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint We will use it in a couple ofsteps

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
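Steps 1 and 2 can be combined into a small script. The sketch below runs against captured sample `docker ps` output rather than a live Docker daemon (the container IDs and column layout are invented), and it only prints the kill command instead of executing it:

```shell
# Sample `docker ps` lines (IDs and layout are illustrative only)
ps_sample='CONTAINER ID   IMAGE
f00dfeed1234   rancher/server
deadbeef5678   192.168.120.20:5000/nginx'

# First column of the line that mentions the Rancher server image
cid=$(printf '%s\n' "$ps_sample" | grep 'rancher/server' | awk '{print $1}')

# Dry run: in the lab you would execute this command directly
echo "docker kill $cid"
```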


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
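The cleanup steps above can be gathered into a single helper. This sketch only echoes the commands (a dry run), since in the lab they are run over ssh on the Rancher Host VM; the function name is a hypothetical choice:

```shell
# Hypothetical dry-run helper: prints the Rancher host cleanup commands
# from the steps above instead of executing them.
rancher_host_cleanup() {
  echo "rm -rf /var/lib/rancher/state"
  echo "docker rm -vf rancher-agent"
  echo "docker rm -vf rancher-agent-state"
}

rancher_host_cleanup
```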


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
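For comparison, the same container could be started from the Docker CLI on the host. The command below is only echoed (a dry run); the container name nginx-demo is a hypothetical choice, while the image path and the 2000:80 port map come from the steps above:

```shell
image="192.168.120.20:5000/nginx"   # local-registry image used in this lab

# Dry run of a docker CLI equivalent of the Rancher UI form:
# -d runs detached, -p maps host port 2000 to container port 80
run_cmd="docker run -d --name nginx-demo -p 2000:80 $image"
echo "$run_cmd"
```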


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter http://192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



4. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.

5. You must change to the root user. Execute su. The password is vmware.

6. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.


Control Plane Services

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all metadata is automatically synced between the Cloud_Store service running in each VM to provide availability.

1. Click on Cloud.


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen and then Images.

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
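As a sketch, a cost-tagged ephemeral disk flavor might be created like this. The command is only echoed here (dry run), and the flavor name and cost value are hypothetical; check photon flavor create -h in the lab for the exact argument syntax:

```shell
# Hypothetical cost-tagged disk flavor (dry run; name and value invented)
flavor_cmd="photon flavor create --name cloud-disk --kind ephemeral-disk --cost 'ephemeral-disk 1.0 COUNT'"
echo "$flavor_cmd"
```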


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors InThe Previous Step

1. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.
2. You must change to the root user. Execute su. The password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
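Those UUIDs can also be scripted against. This sketch parses sample photon resource-ticket list output rather than a live system; the column layout and the UUID are invented for illustration:

```shell
# Sample `photon resource-ticket list` style output (UUID/layout invented)
list_sample='ID                                    NAME        LIMIT
0fca6ed8-1111-4c5a-b2f5-2a3e5d6f7a8b  lab-ticket  vm.memory 200 GB'

# Grab the first column of the row for our ticket
uuid=$(printf '%s\n' "$list_sample" | awk '/lab-ticket/ {print $1}')
echo "$uuid"
```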


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
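The whole tenancy setup from this lesson, gathered in order as a dry-run recap (the commands are echoed only; the limits strings assume the vm.memory / vm COUNT syntax used above):

```shell
# Dry-run recap of the tenant -> ticket -> project commands from this lesson
setup_cmds="photon tenant create lab-tenant
photon tenant set lab-tenant
photon resource-ticket create --name lab-ticket --limits 'vm.memory 200 GB, vm 1000 COUNT'
photon project create --resource-ticket lab-ticket --name lab-project --limits 'vm.memory 100 GB, vm 500 COUNT'
photon project set lab-project"

printf '%s\n' "$setup_cmds"
```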


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether an image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk Flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM Flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-
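The grep filter works because each Flavor name appears on its own row of the list output. A runnable sketch using illustrative sample output (the IDs and column layout below are assumptions, not from a live system):

```shell
# Illustrative `photon flavor list` output (not from a live system)
flavors='ID    Name            Kind
f1    my-vm           vm
f2    my-pers-disk    persistent-disk
f3    cluster-master  vm'

# Only the rows whose name starts with my- survive the filter
printf '%s\n' "$flavors" | grep my-
```

Against the real CLI, `photon flavor list | grep my-` behaves the same way: two of the three Flavors you created in this step match (plus my-eph-disk once step 3 has run).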

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm Flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk Flavor you created earlier. We didn't do much with that Flavor definition; however, it could have defined a cost for chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want to use the PhotonOS image. To get the UUID of the image, execute photon image list.
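Once the two UUIDs are in shell variables, the whole create command can be assembled mechanically. A dry-run sketch that only builds and prints the command (the UUIDs below are placeholders, not real object IDs; in the lab they would come from photon network list and photon image list):

```shell
# Placeholder UUIDs -- substitute the real values from
# `photon network list` and `photon image list`
NETWORK_UUID="11111111-2222-3333-4444-555555555555"
IMAGE_UUID="66666666-7777-8888-9999-000000000000"

# Assemble the create command; echo it instead of running it
cmd="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NETWORK_UUID -i $IMAGE_UUID"
echo "$cmd"
```

Removing the echo (or piping the string to sh) would execute the real command in the lab environment.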

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM should remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
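Because photon vm list prints one row per VM, the start command can be built from the VM's name rather than copying the UUID by hand. A runnable sketch over illustrative sample output (the IDs and column layout are assumptions, not from a live system):

```shell
# Illustrative `photon vm list` output (not from a live system)
vms='ID        Name     State
aaaa-0001 lab-vm1  STOPPED
bbbb-0002 lab-vm2  STOPPED'

# Build the start command for lab-vm1 from its row
printf '%s\n' "$vms" | awk '$2 == "lab-vm1" {print "photon vm start " $1}'
```

In the lab the equivalent one-liner would be something like `photon vm start $(photon vm list | awk '$2 == "lab-vm1" {print $1}')`, assuming the name is in the second column of the real output.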


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk Flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
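The breakdown above can be made explicit by assembling the command from its named parts. A dry-run sketch that only prints the resulting invocation (the registry address and mount paths are the ones stated in this lab; nothing is executed against Docker here):

```shell
REGISTRY="192.168.120.20:5000"   # local Docker registry used in this lab
IMAGE="nginx"                    # image name as tagged in that registry
HOST_DIR="/mnt/dockervolume"     # mount point of the persistent disk on the host
CONTAINER_DIR="/volume"          # Docker volume path inside the container

# Print the full command rather than running it
echo "docker run -v ${HOST_DIR}:${CONTAINER_DIR} -d -p 80:80 ${REGISTRY}/${IMAGE}"
```

Changing CONTAINER_DIR is all it takes to produce the second variant of this command used later in the module.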


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfcontainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to the line with Welcome To Nginx (line 14).

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"
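Detach and attach are the only two operations needed to move a persistent disk between VMs. A dry-run sketch that prints the command pair with placeholder UUIDs (the values below are illustrative, not real object IDs):

```shell
# Placeholder UUIDs -- in the lab these come from `photon vm list`
# and `photon disk list`
VM1_UUID="vm1-placeholder-uuid"
VM2_UUID="vm2-placeholder-uuid"
DISK_UUID="disk-placeholder-uuid"

# Dry run: print the two commands that move the disk from lab-vm1 to lab-vm2
echo "photon vm detach-disk $VM1_UUID --disk $DISK_UUID"
echo "photon vm attach-disk $VM2_UUID --disk $DISK_UUID"
```

Because the disk survives the detach, the data written on lab-vm1 is exactly what lab-vm2 sees after the attach.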

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
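The stop/delete pair repeats for each VM, so the cleanup lends itself to a loop. A dry-run sketch that prints the commands with placeholder UUIDs (illustrative values only; note that lab-vm2 additionally needs the detach-disk from step 3 between stop and delete):

```shell
# Placeholder UUIDs for the two lab VMs -- substitute the values
# noted from `photon vm list`
for vm in uuid-of-lab-vm1 uuid-of-lab-vm2; do
  # lab-vm2 also needs `photon vm detach-disk ... --disk ...` here
  echo "photon vm stop $vm"
  echo "photon vm delete $vm"
done
```

This prints four commands, two per VM, in the order they would be executed.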


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
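Steps 2-5 above are the same command run against both hosts, so they can be expressed as a loop. A dry-run sketch that prints the per-host invocations instead of executing them (the host addresses are the ones given in this lab; a real run would need the ssh password or keys):

```shell
# ESXi host addresses from this lab
for host in 192.168.110.201 192.168.110.202; do
  # Dry run: print the restart command instead of running it over ssh
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```

Dropping the echo would actually run the restart on each host in turn, prompting for the password each time.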


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm Flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error, because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
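The steps above can be sketched roughly as follows. The repository URL, build target, and the KUBERNETES_PROVIDER value are assumptions based on Kubernetes tooling of this era, not values taken from the lab; the `run` helper only prints each command, so the sketch is safe to execute anywhere.

```shell
# Sketch of the kube-up flow described above. Repo URL, build target, and
# KUBERNETES_PROVIDER value are assumptions, not lab-verified values.
run() { echo "+ $*"; }  # print-only stub; change the body to "$@" to really execute

run git clone https://github.com/vmware/kubernetes.git
run cd kubernetes
run make quick-release                        # build Kubernetes
export KUBERNETES_PROVIDER=photon-controller  # select the Photon deployment scripts
run ./cluster/kube-up.sh                      # bring the cluster up
```

Because the stub only echoes, you can read the intended sequence without needing git, a Go toolchain, or a Photon endpoint on your machine.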

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents one of the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver, with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
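The lab's files are specific to this environment, but generic sketches of the three object types look roughly like the following. Names, labels, and ports here are illustrative only; the actual definitions are whatever the cat commands above display.

```yaml
# Hypothetical minimal sketches of the three objects (illustrative names only)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort          # exposes an external port on each Node
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3             # desired number of Pod copies
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Note how the Service and the Replication Controller both find their Pods through the same label selector; that is the glue between the three files.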


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; Click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
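The same information is visible from the CLI. A possible set of commands is sketched below; the replication controller name nginx-demo is an assumption, so check `kubectl get rc` for the real name before scaling. These commands require the lab's cluster and are not runnable elsewhere.

```
kubectl get pods -o wide                    # each Pod's IP and the Node it landed on
kubectl get rc                              # desired vs. current replica counts
kubectl scale rc nginx-demo --replicas=2    # example: match replicas to node count
```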

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, Connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source Container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and Click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885, stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
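For reference, the history entry likely runs something of this shape. The flags and image path are assumptions based on standard Rancher Server usage plus the registry tag noted above, not the lab's exact command, and it requires the lab's Docker host to run:

```
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```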


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or Right-Click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either Right-Click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
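What the UI did in steps 1-5 corresponds roughly to a single docker run invocation. The container name below is made up; the image and ports come from the steps above. The sketch only prints the command rather than executing it:

```shell
# Hypothetical docker-run equivalent of the Rancher UI settings above.
IMAGE="192.168.120.20:5000/nginx"
HOST_PORT=2000        # host port chosen in step 4
CONTAINER_PORT=80     # nginx's default listen port
# Print the equivalent command rather than executing it (docker is not assumed here)
echo "docker run -d --name nginx-demo -p ${HOST_PORT}:${CONTAINER_PORT} ${IMAGE}"
```

The -p host:container flag is the CLI counterpart of the Port Map fields in the UI.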


Container Information

1. Once your container is running, Check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Control Plane Services

The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services, and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability.

1 Click on Cloud


Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and allocated all 1000GB to individual projects. (You will see the details in a minute.)


Tenants

1 Click on Tenants


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster. (You will use this in Module 3.) You can see that a limit has been placed on the Memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1 Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype; we will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1 Click on Kube-Project


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: these VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1 Click on Kube-Tenant and Scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter the Resource Ticket Name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, Click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
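As a sketch, uploading an image with each policy might look like the following. The flag names are from memory of the photon CLI and the file names are made up; confirm the real syntax with photon image create -h before using it:

```
photon image create photon-os.ova -n photon-os -i EAGER        # copy to cloud datastores at upload
photon image create ubuntu.vmdk -n ubuntu-demo -i ON_DEMAND    # copy at first placement request
```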

1 Click on the gear in the upper right of the screen and then Images

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1 Click the X to close the panel

Flavors

1. Click on the gear again, and then Click Flavors.

When you are done, close the images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment; they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2 Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
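A hedged sketch of what flavor creation with a cost string might look like is shown below. The flavor names, kinds, and especially the tag-in-cost syntax are assumptions from memory of the photon CLI; confirm with photon flavor create -h:

```
photon flavor create -n vm-small -k vm -c "vm.cpu 1 COUNT, vm.memory 2 GB"
photon flavor create -n eph-local -k ephemeral-disk \
  -c "ephemeral-disk 1 COUNT, ephemeral-disk.LOCAL 1 COUNT"   # LOCAL = hypothetical datastore tag
```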


Persistent Disk Flavors

1 Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure As A Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most. [Add any custom optional information for your lab manual]

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab click on the END button


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies Troubleshooting and Monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (password is vmware)

2. You must change to the root user. Execute: su (password is vmware)

3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.

4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
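Since these UUIDs feed into later commands, it can help to capture them in a shell variable instead of copying them by hand. The column layout of `photon resource-ticket list` is an assumption here; the canned sample output below stands in for the real command purely to illustrate the awk extraction.

```shell
# Sketch: capture an entity UUID for reuse. The sample output format is
# hypothetical; against a live endpoint you would pipe the real command.
sample_output='ID                                    NAME        LIMIT
0579dcd7-0379-4d32-9c8a-e6d2e54b3a1f  lab-ticket  vm.memory 200 GB'
TICKET_UUID=$(printf '%s\n' "$sample_output" | awk '/lab-ticket/ {print $1}')
echo "$TICKET_UUID"
# In the lab:  TICKET_UUID=$(photon resource-ticket list | awk '/lab-ticket/ {print $1}')
```

The same pattern works for any `photon <entity-type> list` output whose first column is the ID.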


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.
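The tenant, resource ticket and project steps above can be collected into one script. This is a dry-run sketch: each photon command is echoed rather than executed, so the sequence can be reviewed before pointing it at a real Control Plane endpoint.

```shell
# Dry-run sketch of the tenant -> ticket -> project workflow from this lesson.
# Replace the echo inside run() with the real photon binary to execute it.
run() { echo "photon $*"; }

run tenant create lab-tenant
run tenant set lab-tenant
run resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"
run project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
run project set lab-project
```

Note the ordering: the ticket must exist before the project that draws from it, and the `set` commands establish context for the commands that follow.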


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
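Since the command takes two UUIDs, it can be less error-prone to assemble it from variables. This is a sketch: the UUID values are placeholders to be filled from `photon network list` and `photon image list`, and the command is only printed, not executed.

```shell
# Sketch: build the vm create command from captured UUIDs instead of pasting
# them inline. Placeholder values; fill from photon network/image list output.
NETWORK_UUID="<uuid-from-photon-network-list>"
IMAGE_UUID="<uuid-from-photon-image-list>"
cmd="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NETWORK_UUID -i $IMAGE_UUID"
echo "$cmd"   # review the assembled command before running it by hand
```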

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "<uuid of lab-vm1>" --disk "<uuid of disk>"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
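We have not inspected mount-disk-lab-vm1.sh, but a script that formats and mounts a fresh disk typically performs steps like the following. The device name, filesystem type and options here are assumptions; the commands are echoed as a dry run so nothing is formatted by accident.

```shell
# Hedged sketch of what a mount helper like this typically does (assumed steps).
DEVICE=/dev/sdb
MOUNTPOINT=/mnt/dockervolume
echo "mkfs.ext4 $DEVICE"          # create a filesystem (destroys existing data)
echo "mkdir -p $MOUNTPOINT"       # create the mount point
echo "mount $DEVICE $MOUNTPOINT"  # attach it to the directory tree
```

The mkfs step is why, later in the lab, running the lab-vm1 script on lab-vm2 would wipe the changes stored on the disk.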

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
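The pieces of the docker run invocation above can be named with variables, which makes the host-path:container-path and IP:port/image conventions easier to see. This is a dry-run sketch printing the lab's values.

```shell
# The docker run command, parameterized. -v is host-path:container-path;
# the image name follows the local-registry convention IP:port/image.
REGISTRY=192.168.120.20:5000
HOST_DIR=/mnt/dockervolume
CONTAINER_DIR=/volume
echo "docker run -v ${HOST_DIR}:${CONTAINER_DIR} -d -p 80:80 ${REGISTRY}/nginx"
```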


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the Down Arrow until you get to line 14, with Welcome To Nginx.

3. Press the Right Arrow until you are at the character N in Nginx.

4. Type cw to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key, and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "<uuid of lab-vm2>" --disk "<uuid of disk>"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
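The whole cleanup sequence can be collected into one script. This is a dry-run sketch: the UUID variables are placeholders to be filled from `photon vm list` and `photon disk list`, and each command is echoed rather than executed.

```shell
# Dry-run sketch of the cleanup steps above. Fill in real UUIDs before use;
# note lab-vm1 needs no detach step, since only lab-vm2 has the persistent disk.
VM2_UUID="<uuid-of-lab-vm2>"
VM1_UUID="<uuid-of-lab-vm1>"
DISK_UUID="<uuid-of-disk-2>"
for cmd in \
  "photon vm stop $VM2_UUID" \
  "photon vm detach-disk $VM2_UUID --disk $DISK_UUID" \
  "photon vm delete $VM2_UUID" \
  "photon vm stop $VM1_UUID" \
  "photon vm delete $VM1_UUID"; do
  echo "$cmd"
done
```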


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202
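The restart loop for the two hosts can be sketched as a script. The host IPs are the lab's; the ssh commands are echoed rather than run so the sequence can be reviewed without lab access.

```shell
# Dry-run sketch of steps 2-5: restart the photon controller agent on each host.
# Replace echo with eval (or run the printed commands by hand) to execute.
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```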

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
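Under stated assumptions, that flow looks roughly like the sketch below. The repo URL is the public Kubernetes GitHub repository; the build target and the KUBERNETES_PROVIDER variable name are assumptions based on typical kube-up deployments, so check the lab's own scripts before relying on them.

```shell
# Sketch of the clone/build/kube-up flow described above.
# The KUBERNETES_PROVIDER value and the build target are assumptions.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release                            # build Kubernetes from source
export KUBERNETES_PROVIDER=photon-controller  # select the Photon deployment scripts
cluster/kube-up.sh                            # bring the cluster up
```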

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
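For a rough idea of what the Replication Controller file contains - a hypothetical sketch, not the lab's exact nginx-rc.yaml - note how a replica count is paired with a Pod template:

```yaml
# Hypothetical sketch of a Replication Controller definition; names,
# labels, and image may differ from the lab's actual nginx-rc.yaml.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo           # Pods managed by this controller
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```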


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
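Once the three objects are created, you can confirm what Kubernetes is running. The resource name nginx-demo below is an assumption based on the application name shown in the UI:

```shell
kubectl get pods      # the replicated Nginx Pods
kubectl get rc        # the Replication Controller and its replica count
kubectl get svc       # the Service and its exposed port
# Scaling additional instances, as described in the introduction:
kubectl scale rc nginx-demo --replicas=4
```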


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
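Killing an instance, as promised in the module introduction, is simply a matter of deleting one of the Pods and watching the Replication Controller replace it. The Pod name below is a placeholder for one of the names kubectl prints:

```shell
kubectl get pods                # note one of the Pod names
kubectl delete pod <pod-name>   # kill that instance of the webserver
kubectl get pods                # the Replication Controller starts a replacement
```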


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will stop the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container image is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
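For reference, history entry 885 most likely takes a shape similar to the standard Rancher Server run command, pointed at the local registry. The exact image tag and options here are assumptions, so use the stored history entry rather than typing this:

```shell
# Plausible shape of the stored command (image tag and options assumed):
# run Rancher Server detached, exposing its UI on port 8080.
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```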


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher Host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher Hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher Host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
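The port mapping you just configured through the UI is equivalent to what you would express on the Docker command line:

```shell
# A sketch of what Rancher runs for this container:
# host port 2000 -> container port 80, image pulled from the local registry.
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```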


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Cloud Resources

This screen shows the resources that have been allocated for use by applications running on this cloud.

1. Two hosts have been allocated as available to place application workloads.
2. One Tenant has been created. (We will drill further into this in a minute.)
3. We have set no resource limit on vCPU or storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects. (You will see the details in a minute.)


Tenants

1. Click on Tenants.


Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes cluster. (You will use this in Module 3.) You can see that a limit has been placed on memory resource for this tenant, and 100% of that resource has been allocated to Projects within the Tenant.

1. Click on Kube-Tenant.

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The user interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1. Click on Kube-Project.


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes cluster, which contains a Master and 2 Worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: these VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.
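The same Tenant, Resource-Ticket, and Project operations are exposed through the CLI you will use in Module 2. A hedged sketch, with names and limit strings that are illustrative rather than the lab's actual values:

```shell
# Illustrative only: create a tenant, grant it a resource ticket,
# then carve part of that ticket into a project.
photon tenant create demo-tenant
photon resource-ticket create --tenant demo-tenant --name demo-ticket \
  --limits "vm.memory 100 GB, vm 100 COUNT"
photon project create --tenant demo-tenant --resource-ticket demo-ticket \
  --name demo-project --limits "vm.memory 50 GB, vm 50 COUNT"
```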


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.
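Uploading a base image and choosing its replication behavior is done through the photon CLI. A sketch under the assumption that the image-create flags take this form (the file and image names are hypothetical):

```shell
# Upload an OVA as an EAGER image (copied to cloud datastores at upload time);
# ON_DEMAND would instead defer the copy until a placement request needs it.
photon image create photon-os.ova -n photon-os -i EAGER
```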

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The photon-management image is the image that was used to create the control plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
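A disk flavor that carries such a datastore tag in its cost string might be created like this. The flavor name, tag, and cost syntax are illustrative, not the lab's actual values:

```shell
# Illustrative: an ephemeral disk flavor whose cost string includes a
# storage tag that constrains placement to datastores tagged SHARED.
photon flavor create -n shared-disk -k ephemeral-disk \
  -c "ephemeral-disk 1.0 COUNT, storage.SHARED 1.0 COUNT"
```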


Persistent Disk Flavors

1. Click on Persistent Disks.

We have a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)

How to End Lab

To end your lab, click on the END button.

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)

Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.

Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1 Click on the Putty Icon
2 Select the PhotonControllerCLI connection
3 Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.

Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.

Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1 ssh into the PhotonController Management VM: execute ssh esxcloud@192.168.120.10 (password is vmware)

2 You must change to the root user: execute su (password is vmware)
3 Reboot the VM: execute reboot. This should take about 2 minutes to complete
4 Now return to the previous step that caused the HTTP 500 error and try it again

Photon CLI Overview

The Photon CLI has a straightforward syntax: it is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1 Execute

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.

Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3 Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. The entity type can be one of many, such as vm, image, resource-ticket, cluster, or flavor.
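In practice you will often want to capture one of these UUIDs in a shell variable. The sketch below runs awk against a canned sample of `photon vm list` output rather than the live CLI (the real column layout may differ slightly from this assumed format), pulling the ID for a VM by name:

```shell
# Sample output standing in for `photon vm list`; the column
# layout here is an assumption for illustration.
sample_output="ID                                    Name     State
f5e2cb31-aaaa-bbbb-cccc-111122223333  lab-vm1  STOPPED
0d9fbc22-dddd-eeee-ffff-444455556666  lab-vm2  STOPPED"

# Pull the UUID for lab-vm1: match the Name column, print the ID column.
VM_UUID=$(printf '%s\n' "$sample_output" | awk '$2 == "lab-vm1" {print $1}')
echo "$VM_UUID"
```

The same pattern works for any `photon <entity-type> list` output, since awk splits each row on whitespace.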

Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -name PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2 To see more detail on a particular image execute the following command

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)

View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1 To view existing Flavors Execute the following command

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4 To easily see the Flavors you just created execute

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1 If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2 To easily see the Network we have created execute

photon network list

Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
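The two UUID lookups and the create call can be combined in a small script. The sketch below is illustrative only, not the lab's method: it parses canned sample output standing in for `photon network list` and `photon image list` (real column layouts may differ), and prints the resulting command instead of running it.

```shell
#!/bin/sh
# Canned sample output standing in for `photon network list`
# and `photon image list`; the layouts are assumptions.
networks="ID                                    Name         State
3d1c9a10-aaaa-bbbb-cccc-000000000001  lab-network  READY"
images="ID                                    Name      State
7f2e4b20-aaaa-bbbb-cccc-000000000002  PhotonOS  READY"

# Capture each UUID by matching the Name column.
NET_UUID=$(printf '%s\n' "$networks" | awk '$2 == "lab-network" {print $1}')
IMG_UUID=$(printf '%s\n' "$images"   | awk '$2 == "PhotonOS" {print $1}')

# Print the assembled create command rather than executing it (dry run).
echo "photon vm create --name lab-vm1 --flavor my-vm" \
     "--disks 'disk-1 my-eph-disk boot=true' -w $NET_UUID -i $IMG_UUID"
```

In the lab you would drop the echo and let the photon command run directly.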

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.

Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>

Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously

1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>

Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3 To find the VM IP for lab-vm1 Execute

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1> (password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background); it will keep running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume), Execute

df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys (change word) and type Hands On Lab At VMWORLD 2016

5 Press the esc key and then the : key

6 At the prompt enter wq to save changes and exit vi

7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
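If you prefer a non-interactive edit, the same change can be made with sed instead of vi. The sketch below demonstrates against a sample file in /tmp; on lab-vm1 you would target /mnt/dockervolume/index.html instead, and the pattern assumes the stock "Welcome to nginx!" heading:

```shell
# Create a sample file standing in for the copied index.html.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html

# Replace the default heading with the lab text, in place.
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html

cat /tmp/index.html
```

This avoids the interactive vi session entirely, which is handy when scripting the lab steps.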

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2> (password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as its default document root, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached; it will keep running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

Note the UUIDs of the two VMs.

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1
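Once you have the UUIDs, the cleanup sequence can be scripted. This sketch echoes each command instead of executing it (a dry run), with placeholder values standing in for the UUIDs you would read from photon vm list and photon disk list:

```shell
#!/bin/sh
# Placeholder UUIDs; substitute the real values from
# `photon vm list` and `photon disk list`.
LAB_VM1="<uuid-of-lab-vm1>"
LAB_VM2="<uuid-of-lab-vm2>"
DISK2="<uuid-of-disk-2>"

# Echo each cleanup command instead of executing it (dry run).
for cmd in \
  "photon vm stop $LAB_VM2" \
  "photon vm detach-disk $LAB_VM2 --disk $DISK2" \
  "photon vm delete $LAB_VM2" \
  "photon vm stop $LAB_VM1" \
  "photon vm delete $LAB_VM1"
do
  echo "$cmd"
done
```

Remove the echo indirection to run the commands for real once you have verified the order and UUIDs.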

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Memory metrics that we viewed previously in Graphite.

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon

2 Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1) Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2) Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1) From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you login you will see the Dashboard screen.

1) Click on Interactive Analytics

2) Paste the Task ID into the Filter Field

3) Change the Time Range to Last Hour of Data

4) Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5) In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1) Scroll down in the log and look for RESERVE_RESOURCE

2) Find the RequestID and paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.
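If you prefer the command line, the same extraction can be done with a one-line sed. The log line below is invented sample text, not real LogInsight output; it just shows a generic pattern for pulling a RequestID out of a log entry so it can be pasted into the Filter field.

```shell
# Illustrative only: "line" is a made-up sample log entry.
line="2016-10-24 ERROR [RESERVE_RESOURCE] RequestID=req-42f7 host=esxi-1"
# Capture everything after "RequestID=" up to the next space.
req_id=$(echo "$line" | sed -n 's/.*RequestID=\([^ ]*\).*/\1/p')
echo "$req_id"
```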


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1) To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
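The kube-up flow just described can be sketched as follows. The repo URL, build target, and KUBERNETES_PROVIDER value come from upstream Kubernetes' photon-controller support at the time and should be treated as assumptions; the lab cluster has already been built for you, so the long-running commands are commented out.

```shell
# Sketch of the clone/build/kube-up flow (assumptions noted above).
# git clone https://github.com/kubernetes/kubernetes.git
# cd kubernetes && make quick-release
export KUBERNETES_PROVIDER=photon-controller  # tell kube-up to use the photon scripts
# ./cluster/kube-up.sh
echo "KUBERNETES_PROVIDER=${KUBERNETES_PROVIDER}"
```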

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1) Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2) To set our kube project, Execute:

photon project set kube-project

3) To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but it will get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
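The Pod / Replication Controller relationship can be made concrete with a minimal manifest sketch. This is illustrative only; it is not the lab's actual configuration, and the names, labels, and image shown here are assumptions.

```yaml
# Illustrative sketch only - not the lab's actual nginx-rc.yaml.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3          # desired number of Pod copies to maintain
  selector:
    app: nginx-demo    # which Pods this controller manages
  template:            # Pod template the controller stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```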

1) From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1) Execute:

cat ~/demo-nginx/nginx-pod.yaml

2) Execute:

cat ~/demo-nginx/nginx-service.yaml

3) Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1) To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2) To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3) To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
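The three create calls above can be collapsed into one loop over the manifest files. echo is included so this sketch only prints the commands; remove it to actually run them on the CLI VM.

```shell
# One loop over the three manifests (paths as given in the lab).
for f in nginx-pod.yaml nginx-service.yaml nginx-rc.yaml; do
  echo kubectl create -f ~/demo-nginx/"$f"
done
```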


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1) Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2) Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1) Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1) From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1) Open Putty from the desktop and click on the PhotonControllerCLI link
2) Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1) Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2) Execute docker kill <ContainerID>. This will kill the existing Rancher Server container.

3) Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1) Execute ssh root@192.168.100.201. The password is vmware
2) Execute rm -rf /var/lib/rancher/state
3) Execute docker rm -vf rancher-agent
4) Execute docker rm -vf rancher-agent-state
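The cleanup steps above can be collected into one variable so they could be sent to the Rancher host VM in a single ssh invocation. This is a sketch only; the ssh line is commented out because it needs the lab host.

```shell
# Cleanup commands for the Rancher host, bundled for a single ssh call.
cleanup='rm -rf /var/lib/rancher/state
docker rm -vf rancher-agent
docker rm -vf rancher-agent-state'
echo "$cleanup"
# ssh root@192.168.100.201 "$cleanup"   # password: vmware
```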


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1) From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2) If you get this page, just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1) Note that the Custom icon is selected
2) Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1) Execute: either right click of the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container:

1) Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1) Click the Close button
2) Click on Infrastructure and Hosts
3) This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1) Click on Containers

2) Click on Add Container

Configure Container Info

We need to define the container we want to deploy:

1) Enter a Name for your container

2) Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3) This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4) We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5) Click on the Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
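The UI port mapping in step 4 corresponds to Docker's -p host:container flag. The snippet below echoes the roughly equivalent docker run; it is a sketch only, since Rancher's agent issues the real command with its own labels and network settings.

```shell
# Rough docker-run equivalent of the UI settings above (sketch only).
image="192.168.120.20:5000/nginx"   # image from the lab's local registry
mapping="-p 2000:80"                # host port 2000 -> container port 80
echo "docker run -d ${mapping} ${image}"
```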


Container Information

1) Once your container is running, check out the performance charts

2) Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1) From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


Page 19: Lab Overview - HOL-1730-USE-2

Tenants

1 Click on Tenants

HOL-1730-USE-2

Page 19HOL-1730-USE-2

Our Kubernetes Tenant

We have created a Single Tenant that has been used to create a Kubernetes Cluster (Youwill use this in Module 3) You can see that a limit has been placed on Memory resourcefor this tenant and 100 of that resource has been allocated to Projects within theTenant

1 Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant The UserInterface is still a prototype We will use the CLI in module 2 to drill into how theseresources are really allocated

Notice that the Project within the Kube-Tenant is using only 1 of the total Memoryallocated to it You may have to scroll to the bottom of the screen to see this

1 Click on Kube-Project

HOL-1730-USE-2

Page 20HOL-1730-USE-2

Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources andthe VMs that have been placed into these allocations We have deployed a KubernetesCluster which contains a Master and 2 worker node VMs You will immediately noticethat this model is about allocating large pools and managing consumption rather thanproviding a mechanism for management of individual VMs (Note These VMs will beused in Module 3 If you delete them you will have to restart the lab environment inorder to take that module

HOL-1730-USE-2

Page 21HOL-1730-USE-2

Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant withone or more Resource-Tickets Each Resource Ticket can be carved up into individualprojects Lets add a Resource-Ticket to Kube-Tenant

1 Click on Kube-Tenant and Scroll the screen to the bottom

HOL-1730-USE-2

Page 22HOL-1730-USE-2

Create Resource-Ticket

1 Click on Resource Ticket2 Click on the + sign3 Enter Resource Ticket Name (No Spaces in the Name)4 Enter numeric values for each field5 Click OK6 Optionally Click on Projects and follow the Tenant Create steps to Create a New

project to allocate the Resource Ticket to

You have now made additional resource available to Kube Tenant and can allocate it to anew Project Check the Tenant Details page to see the updated totals You can create anew project if you want but we will not be using it in the other modules To do thatclick on Projects

HOL-1730-USE-2

Page 23HOL-1730-USE-2

Cloud Administration - Images andFlavorsContinuing on the theme from the previous lesson Cloud automation requiresabstractions for consumption of allocated resources as well as centralized managementof images used for VM and Disk creation In this lesson you will see how Images andFlavors are used as part of the operational model to create Cloud workloads

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then click Images

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1 Click the X to close the panel

Flavors

1. Click on the gear again, and then click Flavors

When you are done close the images panel so that you can see the gear icon again

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2 Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles: Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1 Click on Persistent Disks

We have created a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1.

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with Nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of the base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)
2. You must change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.

Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open-source identity management platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that is available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
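The allocation model here is simple arithmetic: every limit on a project must fit within the corresponding limit on its resource ticket. Photon Platform performs this validation server-side when the project is created; the sketch below just restates the check with the lab's numbers (200 GB/1000 VMs on the ticket, 100 GB/500 VMs on the project) so the relationship is explicit:

```shell
# Sketch of the validation performed at project creation time:
# each project limit must fit within its resource ticket's limit.
ticket_mem_gb=200; ticket_vm_count=1000   # lab-ticket limits
proj_mem_gb=100;  proj_vm_count=500       # lab-project limits

if [ "$proj_mem_gb" -le "$ticket_mem_gb" ] && [ "$proj_vm_count" -le "$ticket_vm_count" ]; then
    echo "project fits within ticket"
else
    echo "project exceeds ticket"
fi
```

If you asked for a project larger than the remaining ticket capacity, the photon project create command would be rejected rather than over-committing the ticket.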


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.
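As an illustration, the replication type is chosen at upload time. This is a sketch only - do not run it in this bandwidth-limited lab, and the -i flag name is our assumption from the CLI help, so verify it with photon image create -h:

```
# Sketch (do not run here): upload an image and pick its replication type.
# ON_DEMAND defers copying until first placement; EAGER copies at upload.
photon image create <path-to-ova-or-vmdk> -n <image-name> -i ON_DEMAND
```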

2 To see more detail on a particular image execute the following command

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1 To view existing Flavors Execute the following command

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4 To easily see the Flavors you just created execute

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.
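If your hosts had more than one eligible portgroup, the whitelist could name several. A hypothetical sketch (the portgroup names are made up, and the quoted comma-separated list form is our assumption - check photon network create -h):

```
# Sketch: a network object whitelisting two portgroups for placement.
photon network create -n prod-network -p "Portgroup-A, Portgroup-B" -d "Scheduler may place VMs on either portgroup"
```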


2 To easily see the Network we have created execute

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
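Those two UUID lookups can also be scripted. The sketch below parses captured sample output rather than calling a live endpoint; the UUIDs are made up, and the UUID-in-the-first-column layout of the list commands is an assumption for illustration:

```shell
# Parse the UUID column out of captured `photon network list` /
# `photon image list` output, then assemble the create command.
# A live script would substitute real CLI calls for these sample strings.
network_list='8a3de1f0-0000-0000-0000-000000000001  lab-network  READY'
image_list='c7b91d42-0000-0000-0000-000000000002  PhotonOS  READY  EAGER'

net_uuid=$(printf '%s\n' "$network_list" | awk '/lab-network/ {print $1}')
img_uuid=$(printf '%s\n' "$image_list" | awk '/PhotonOS/ {print $1}')

echo "photon vm create --name lab-vm1 --flavor my-vm --disks 'disk-1 my-eph-disk boot=true' -w $net_uuid -i $img_uuid"
```

The same pattern works for any object type whose UUID you need on a command line, which is handy once you are creating more than a couple of VMs.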

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.
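The whole lifecycle just described can be summarized as a short sequence of CLI calls (placeholders in angle brackets; the steps that follow walk through each command individually):

```
# Persistent disk lifecycle: the disk outlives any one VM.
photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2
photon vm attach-disk <UUID of vm1> --disk <UUID of disk>
photon vm detach-disk <UUID of vm1> --disk <UUID of disk>
photon vm attach-disk <UUID of vm2> --disk <UUID of disk>   # data survives the move
```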

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "<UUID of lab-vm1>" --disk "<UUID of disk>"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
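For reference, a script like mount-disk-lab-vm1.sh typically amounts to a format-and-mount sequence. This is a hypothetical reconstruction - the lab's actual script may differ, and you should not run these commands by hand; only the device name and mount point come from the step above:

```
# Hypothetical sketch of what the provided script does (do not run manually).
mkfs.ext4 /dev/sdb                 # format the newly attached persistent disk
mkdir -p /mnt/dockervolume         # create the mount point
mount /dev/sdb /mnt/dockervolume   # mount it where docker -v will bind it
```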

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "<first 3 chars of containerID>" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2. Press the Down Arrow until you get to line 14, with "Welcome To Nginx"

3 Press right arrow until you are at the character N in Nginx

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5. Press the Esc key and then the : key

6. At the prompt, enter wq to save changes and exit vi


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

A reminder that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "<UUID of lab-vm2>" --disk "<UUID of disk>"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

Note the UUIDs of the two VMs.

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
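Steps 1-5 above amount to restarting the same service on both hosts. A sketch follows (restart_agents is our name; ssh will prompt for the VMware1 password on each host):

```shell
# Restart the photon-controller-agent on both ESXi cloud hosts.
restart_agents() {
  for host in 192.168.110.201 192.168.110.202; do
    ssh root@"$host" /etc/init.d/photon-controller-agent restart
  done
}
# Usage: restart_agents   (enter the password at each prompt)
```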

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.

2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.

Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the master and worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
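The clone-build-run flow described above looks roughly like the following. The repo URL and the KUBERNETES_PROVIDER value are assumptions based on the Kubernetes tree of that era; check the scripts in your clone for the exact names.

```shell
# Rough sketch of the kube-up flow for Photon Platform (assumed names).
kube_up_photon() {
  git clone https://github.com/kubernetes/kubernetes.git
  cd kubernetes || return 1
  # Edit cluster/photon-controller/config-default.sh first, then:
  export KUBERNETES_PROVIDER=photon-controller
  ./cluster/kube-up.sh
}
```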

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one master and 2 worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one master VM and 2 worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
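For orientation, a minimal Replication Controller manifest for this kind of app might look like the sketch below. This is not the lab's actual file (cat the files above to see those); the name, label, and replica count are illustrative, with the image pulled from the lab's local registry.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo            # illustrative name
spec:
  replicas: 3                 # desired copies of the Pod
  selector:
    app: nginx-demo
  template:                   # Pod template the RC stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # lab's local registry
        ports:
        - containerPort: 80
```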

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
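After the three creates, a few read-only kubectl commands confirm what is running (the rc name nginx-demo is taken from the lab UI later in this section; yours may differ):

```shell
# Quick post-deployment checks.
check_app() {
  kubectl get pods   # expect 3 nginx replicas
  kubectl get rc     # replication controller: desired vs current counts
  kubectl get svc    # the frontend service and its port
}
# To scale the webserver afterwards (rc name is an assumption):
# kubectl scale rc nginx-demo --replicas=4
```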

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.
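To watch the Replication Controller self-heal, delete one of the Pods and list them again (a sketch; take a real Pod name from kubectl get pods):

```shell
# Kill one replica and watch the RC schedule a replacement.
self_heal_demo() {
  pod="$1"                  # a Pod name from 'kubectl get pods'
  kubectl delete pod "$pod"
  kubectl get pods          # a new Pod appears to restore 3 replicas
}
# Usage: self_heal_demo <pod-name>
```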

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is <IP>:<port>/<image-name>. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
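The port map you just configured is conceptually the same as the -p flag you used with docker run in Module 2. What Rancher does on the host is roughly equivalent to this sketch (minus Rancher's own labels and networking):

```shell
# Equivalent plain docker run for the Rancher port map above:
# host port 2000 -> container port 80, image from the lab's local registry.
run_nginx_mapped() {
  docker run -d -p 2000:80 192.168.120.20:5000/nginx
}
```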

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

Page 20: Lab Overview - HOL-1730-USE-2

Our Kubernetes Tenant

We have created a single Tenant that has been used to create a Kubernetes Cluster (you will use this in Module 3). You can see that a limit has been placed on the Memory resource for this tenant, and 100 of that resource has been allocated to Projects within the Tenant.

1 Click on Kube-Tenant

Kube-Tenant Detail

You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in Module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1 of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this.

1 Click on Kube-Project


Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter a Resource Ticket Name (No Spaces in the Name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1 Click on the gear in the upper right of the screen and then Images

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1 Click the X to close the panel

Flavors

1 Click on the gear again and then Click Flavors

When you are done close the images panel so that you can see the gear icon again

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2 Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
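As a concrete (and entirely hypothetical) illustration of the Chargeback/Showback idea above: if a disk flavor carries a numeric cost, a showback report is just that cost multiplied by usage and a billing rate. The flavor cost, disk count, and rate below are invented for illustration, not values from the lab.

```shell
# Hypothetical showback calculation. The cost-per-disk mirrors the style of
# COUNT costs carried by disk flavors; the disk count and rate are invented.
FLAVOR_COST=10        # cost units carried by each disk created from the flavor
DISKS_DEPLOYED=4      # how many disks a project has created
RATE_CENTS=5          # cents billed per cost unit (assumed rate)
CHARGE_CENTS=$((FLAVOR_COST * DISKS_DEPLOYED * RATE_CENTS))
echo "$CHARGE_CENTS"  # total showback charge in cents
```

A real showback system would pull the per-flavor costs and usage counts from the platform's API rather than hard-coding them.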


Persistent Disk Flavors

1 Click on Persistent Disks

We have created a single Persistent Disk Flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to, with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure As A Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab click on the END button


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM: Execute ssh esxcloud@192.168.120.10. The password is vmware.
2. You must change to the root user: Execute su. The password is vmware.
3. Reboot the VM: Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group Prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
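Because nearly every subsequent command takes a UUID, it can help to capture them in shell variables rather than copy-pasting. The snippet below shows the pattern against a canned line of output, since the exact column layout of the list commands may vary by CLI version; in the lab you would pipe the real command instead of the sample text.

```shell
# Pattern for capturing a UUID from list-style output. 'sample' stands in for
# real 'photon resource-ticket list' output; the UUID-in-first-column layout
# is an assumption -- check your CLI's actual output before relying on it.
sample='907b3b05-4f12-4c41-b7a2-1a0f9d6e2c33  lab-ticket  vm.memory 200 GB'
TICKET_UUID=$(printf '%s\n' "$sample" | awk '/lab-ticket/ {print $1}')
echo "$TICKET_UUID"
```

In the lab, the same awk filter would run against the live command, e.g. photon resource-ticket list | awk '/lab-ticket/ {print $1}'.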


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
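The trade-off can be made concrete with a back-of-envelope calculation; the image size and datastore count below are invented for illustration:

```shell
# EAGER copies the image to every CLOUD-tagged datastore, so total storage
# grows linearly with the datastore count; ON_DEMAND keeps one copy until a
# placement pulls it down. All numbers here are illustrative only.
IMAGE_GB=2
CLOUD_DATASTORES=5
EAGER_GB=$((IMAGE_GB * CLOUD_DATASTORES))   # space consumed by an EAGER image
ON_DEMAND_GB=$IMAGE_GB                      # space consumed before any placement
echo "$EAGER_GB $ON_DEMAND_GB"
```

The flip side, as the text notes, is clone latency: EAGER pays the copy cost once at upload; ON_DEMAND pays it at first placement on each datastore.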

2 To see more detail on a particular image execute the following command

photon image show <UUID of image>

Note: The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1 To view existing Flavors Execute the following command

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create 1 of each type of Flavor to be used in this module

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4 To easily see the Flavors you just created execute

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2 To easily see the Network we have created execute

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag, and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers. Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, Execute

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
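If you prefer not to re-type the command while waiting, a small polling loop can watch for the IP. The key piece is the grep pattern matching a dotted-quad address; it is demonstrated below against a canned line, since photon is only available inside the lab, and a hypothetical loop over the real command is shown in the comment.

```shell
# Detecting whether an IP has appeared in 'photon vm networks' output.
# 'sample' is a made-up stand-in for one line of that output; inside the lab
# you could loop over the real command, e.g.:
#   until photon vm networks "$VM_UUID" | grep -qE '([0-9]{1,3}\.){3}[0-9]{1,3}'; do sleep 10; done
sample='VM Network  00:50:56:aa:bb:cc  192.168.120.45'
if printf '%s\n' "$sample" | grep -qE '([0-9]{1,3}\.){3}[0-9]{1,3}'; then
  RESULT="IP present"
else
  RESULT="no IP yet"
fi
echo "$RESULT"
```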


Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container (/volume) that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute

docker exec -it <first3CharsOfContainerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, Execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute


df

3. We want to copy the Nginx home page to our Persistent disk. Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4. Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016

5. Press the esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7. At the Linux Prompt, type exit to close the ssh session. You are now back in the Photon CLI.
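If you would rather not edit interactively, the same change can be made non-interactively with sed. The snippet demonstrates the substitution on a temporary copy of a minimal page; in the lab, the target file would be /mnt/dockervolume/index.html, and you would run it before exiting the ssh session.

```shell
# Non-interactive alternative to the vi steps above: substitute the heading
# text with sed. /tmp/index.html is a stand-in for /mnt/dockervolume/index.html,
# and the one-line page below stands in for the real Nginx home page.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html
sed -i 's/Welcome to nginx/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```

sed -i edits the file in place (GNU sed, as on the lab's Linux VMs).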

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"


Reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
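Rather than copying UUIDs by hand, you can script the lookup. The sketch below runs awk over sample output whose column layout is an assumption modeled on photon vm list; adapt the pattern to the real output in your lab:

```shell
# Illustrative only: extract a VM's UUID from "photon vm list"-style output.
# The sample listing below is made up; the real output layout may differ.
listing="ID                                    Name     State
1a2b3c4d-0000-0000-0000-000000000001  lab-vm1  STOPPED
5e6f7a8b-0000-0000-0000-000000000002  lab-vm2  STOPPED"
# Print column 1 (the ID) of the row whose Name column matches.
vm_uuid=$(echo "$listing" | awk '$2 == "lab-vm1" {print $1}')
echo "$vm_uuid"
```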

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, execute:

photon vm networks "UUID of lab-vm2"


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@"IP of lab-vm2" (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.
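For reference, the essential steps such a mount script performs can be sketched as below. This is a hedged guess at what mount-disk-lab-vm2.sh does, not its actual contents; the commands are only echoed (nothing is executed), and the key point is that the lab-vm2 variant mounts without reformatting:

```shell
# Illustrative only: what mount-disk-lab-vm2.sh likely boils down to.
# The plan is echoed, not run, so this sketch changes nothing.
device=/dev/sdb
mountpoint=/mnt/dockervolume
plan="mkdir -p $mountpoint && mount $device $mountpoint"
echo "$plan"
```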

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:


photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1.
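The cleanup sequence above can be scripted once you have the UUIDs. This sketch only builds and prints the commands (the UUIDs are placeholders for the values from photon vm list), so you can review them before running anything:

```shell
# Illustrative only: generate the cleanup commands for both lab VMs.
# The UUIDs below are placeholders, not real values.
vm1_uuid="UUID-of-lab-vm1"
vm2_uuid="UUID-of-lab-vm2"
disk_uuid="UUID-of-disk"
# lab-vm2 needs the detach step; lab-vm1 has no disk attached at this point.
cmds="photon vm stop $vm2_uuid
photon vm detach-disk $vm2_uuid --disk $disk_uuid
photon vm delete $vm2_uuid
photon vm stop $vm1_uuid
photon vm delete $vm1_uuid"
echo "$cmds"
```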


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
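The restart steps above can be collapsed into a loop. The sketch below only prints the per-host commands instead of running ssh, so it is safe to inspect; the IPs are the two lab ESXi hosts from the text:

```shell
# Illustrative only: build the agent-restart command for each ESXi host
# rather than executing it.
hosts="192.168.110.201 192.168.110.202"
cmds=""
for h in $hosts; do
  cmds="$cmds
ssh root@$h /etc/init.d/photon-controller-agent restart"
done
echo "$cmds"
```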

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example:

photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KuberMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
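For orientation, a replication controller definition of the kind nginx-rc.yaml contains might look like the sketch below. The names, labels, replica count and registry address are assumptions pieced together from the rest of this lab, not a copy of the actual file:

```yaml
# Hypothetical sketch, not the lab's actual nginx-rc.yaml.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # the lab maintains 3 replicated copies
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # the lab's local registry
        ports:
        - containerPort: 80
```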


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill "ContainerID". This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the Pull the latest image box.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 21: Lab Overview - HOL-1730-USE-2

Kube-Project Detail

At the project detail level we can see the actual consumption of allocated resources, and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster which contains a Master and 2 worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption, rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.)


Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter Resource Ticket Name (No Spaces in the Name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resource available to Kube-Tenant and can allocate it to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel

Flavors

1. Click on the gear again, and then click Flavors

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to, with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies Troubleshooting and Monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.
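Transient errors like these can also be absorbed in scripts with a small retry wrapper. This is a generic shell sketch, not part of the photon CLI; the function name and attempt budget are our own.

```shell
# retry <attempts> <command...>: rerun a flaky command until it succeeds
# or the attempt budget runs out. Useful around photon calls that may hit
# the startup race described above.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i of $attempts failed; retrying..." >&2
    i=$((i + 1))
  done
  return 1
}

# Example (against a live control plane):
#   retry 3 photon target show
```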


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)
2. You must change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot (this should take about 2 minutes to complete)
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group Prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
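When scripting against the CLI, those UUIDs usually have to be captured from list output. A minimal sketch (the exact table layout of `photon ... list` output is an assumption here; the helper just grabs the first UUID-shaped token it sees):

```shell
# first_uuid: read "photon <entity-type> list"-style output on stdin and
# print the first token that looks like a UUID.
first_uuid() {
  grep -oE '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}' | head -n 1
}

# Usage against a live control plane (hypothetical):
#   TICKET_UUID=$(photon resource-ticket list | first_uuid)
```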

Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Now we will move on to create objects within the Project.
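The whole tenant → resource-ticket → project flow above, collected into one script. The `photon` function below is a stub that just echoes each call, so the sketch can be traced without a live control plane; remove it when running against a real endpoint.

```shell
set -e

# Stub standing in for the real photon CLI so the sketch is self-contained.
photon() { echo "photon $*"; }

# Tenant -> Resource Ticket -> Project, exactly as in the steps above.
photon tenant create lab-tenant
photon tenant set lab-tenant
photon resource-ticket create --name lab-ticket \
  --limits "vm.memory 200 GB, vm 1000 COUNT"
photon project create --resource-ticket lab-ticket --name lab-project \
  --limits "vm.memory 100 GB, vm 500 COUNT"
photon project set lab-project
```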


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
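The replication type is chosen at upload time. A hedged sketch follows: we believe the replication-type flag is `-i` (check `photon image create -h` for your CLI version), the `photon.ova` path is a placeholder, and `photon` is stubbed to echo so the sketch can be traced without a control plane.

```shell
# Stub so the sketch runs without a live control plane; remove for real use.
photon() { echo "photon $*"; }

# Upload the same image twice, once per replication type. EAGER trades
# storage for fast clones; ON_DEMAND trades clone latency for storage.
photon image create photon.ova -n photon-eager -i EAGER
photon image create photon.ova -n photon-ondemand -i ON_DEMAND
```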

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor, to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
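Since -w and -i expect UUIDs, a script would normally resolve them first and then compose the create command. A sketch follows, with `photon` stubbed so the plumbing can be traced offline: the stub returns canned UUIDs where the real CLI would print its tables, and echoes everything else.

```shell
set -e

# Stub: pretend "photon ... list" prints one UUID; echo all other calls.
# Remove this function when running against a real control plane.
photon() {
  case "$1 $2" in
    "network list") echo "aaaaaaaa-0000-0000-0000-000000000001" ;;
    "image list")   echo "bbbbbbbb-0000-0000-0000-000000000002" ;;
    *) echo "photon $*" ;;
  esac
}

# Resolve the UUIDs once, then feed them to vm create.
NETWORK_UUID=$(photon network list)
IMAGE_UUID=$(photon image list)

photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" \
  -w "$NETWORK_UUID" -i "$IMAGE_UUID"
```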

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created, but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "<UUID of lab-vm1>" --disk "<UUID of disk>"


Show VM Details

Now we will see the attached Disk, using the VM Show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.
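The detach and re-attach half of the lifecycle comes later in this module, but the whole create-attach-detach-reattach cycle can be sketched now. `photon` is stubbed to echo, and the UUID variables are placeholders you would fill from photon vm list / photon disk list.

```shell
set -e

photon() { echo "photon $*"; }   # stub; remove against a real control plane

DISK_UUID="<uuid of disk-2>"     # placeholder
VM1_UUID="<uuid of lab-vm1>"     # placeholder
VM2_UUID="<uuid of lab-vm2>"     # placeholder

# The disk outlives either VM: attach, use, detach, re-attach elsewhere.
# (detach-disk mirrors attach-disk; see the Detach section later on.)
photon vm attach-disk "$VM1_UUID" --disk "$DISK_UUID"
photon vm detach-disk "$VM1_UUID" --disk "$DISK_UUID"
photon vm attach-disk "$VM2_UUID" --disk "$DISK_UUID"
```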


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, Execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
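The script's steps are roughly the following. This is a dry-run sketch, not the lab's actual script: the `run` helper echoes each command instead of executing it (formatting /dev/sdb needs the lab VM), and the filesystem type is an assumption.

```shell
# Dry-run sketch of what mount-disk-lab-vm1.sh likely does: format the
# attached persistent disk and mount it. run() echoes the command so the
# sketch is safe anywhere; on the real VM you would execute instead.
run() { echo "$@"; }

DEVICE=/dev/sdb               # the attached persistent disk
MOUNTPOINT=/mnt/dockervolume  # where the Docker volume content will live

run mkfs -t ext4 "$DEVICE"    # filesystem type is an assumption
run mkdir -p "$MOUNTPOINT"
run mount "$DEVICE" "$MOUNTPOINT"
```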

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
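Broken into its parts, the command can be assembled from variables, which makes the flag mapping explicit. A sketch (echoed rather than executed, since it needs a Docker host; the registry address is the lab's local registry):

```shell
# Assemble the docker run command from its parts so each flag is visible.
# Echoed for inspection; paste the printed command into the Docker host to run it.
REGISTRY=192.168.120.20:5000      # local Docker registry (lab-specific)
HOST_DIR=/mnt/dockervolume        # host path backed by the persistent disk

CMD="docker run -v ${HOST_DIR}:/volume -d -p 80:80 ${REGISTRY}/nginx"
echo "$CMD"
```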


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default HTTP port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, Execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, Execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx!".

3. Press the right arrow until you are at the character N in nginx.

4. Press the cw keys to change the word, and type "Hands On Lab At VMWORLD 2016".

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
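If you prefer a non-interactive edit, the same change can be made with sed. The sketch below works on a scratch copy of the file so it runs anywhere; on lab-vm1 you would point it at /mnt/dockervolume/index.html instead.

```shell
# Make the same home-page change non-interactively with sed.
# Uses a scratch file here; on lab-vm1, target /mnt/dockervolume/index.html.
page=$(mktemp)
echo '<h1>Welcome to nginx!</h1>' > "$page"   # stand-in for the default page

sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' "$page"
cat "$page"
```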


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, Execute:

photon vm list

2. To get the UUID of the persistent disk, Execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, Execute:

photon vm list

2. To attach the disk to lab-vm2, Execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, Execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, Execute:

photon vm networks "UUID of lab-vm2"


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, Execute:

ssh root@"IP of lab-vm2" (password is VMware1!)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:

photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version, we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, Execute:

ssh root@192.168.110.201 (password is VMware1!)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
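Steps 2-5 above can be condensed into a loop. A dry-run sketch (the ssh commands are echoed rather than executed, since they need the lab's ESXi hosts):

```shell
# Restart the photon-controller-agent on each ESXi host.
# Echoed for illustration; drop the leading echo to run for real from the CLI VM.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh "root@${host}" /etc/init.d/photon-controller-agent restart
done
```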

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case, we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.


2. Select "Select Metrics" again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your Network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and log in as user admin, password VMware1!.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case, the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances, the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment, where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
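As a rough sketch of what a Replication Controller file like nginx-rc.yaml contains (illustrative only — the lab's actual file may differ, and the names, labels, and image reference here are assumptions):

```yaml
# Illustrative ReplicationController manifest (not the lab's exact file):
# keeps 3 nginx Pod replicas running across the Worker nodes.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo           # Pods this controller manages
  template:                   # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # lab's local registry
        ports:
        - containerPort: 80
```

The selector ties the controller to its Pods; if a Pod with the matching label dies, the controller starts a replacement to get back to 3 replicas.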


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill "ContainerID". This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885, stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
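The registry tagging mentioned above works like this: an image is tagged with the registry's address and then pushed, which is why the name carries the IP and port. A dry-run sketch (commands are echoed, since it needs a Docker daemon and the lab registry; the stock image name nginx is used for illustration):

```shell
# How an image gets the name <registry-ip:port>/<image>: tag it with the
# registry's address, then push. Echoed so the sketch runs without Docker.
REGISTRY=192.168.120.20:5000   # the lab's local Docker registry

echo docker tag nginx "${REGISTRY}/nginx"
echo docker push "${REGISTRY}/nginx"
```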


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher Host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher Hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher Host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: Either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The Index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

Kube Tenant Resource-Ticket

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource-Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant.

1. Click on Kube-Tenant and scroll the screen to the bottom.


Create Resource-Ticket

1. Click on Resource Ticket
2. Click on the + sign
3. Enter Resource Ticket Name (No Spaces in the Name)
4. Enter numeric values for each field
5. Click OK
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson: Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen, and then Images.

Kube-Image

You notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again, and then click Flavors.

When you are done, close the images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies Troubleshooting and Monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors InThe Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10  Password is vmware
2. You must change to the root user. Execute: su  Password is vmware
3. Reboot the VM. Execute: reboot  This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group Prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create filename -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
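Outside this lab, the replication type would be chosen when the image is uploaded. A minimal sketch, assuming the Photon CLI's -i flag selects the replication type (the file name here is illustrative):

```shell
# Sketch only - image uploads are not possible in this lab environment.
# EAGER pre-copies the image to every CLOUD datastore; ON_DEMAND copies it
# to the target datastore only when a placement request is executed.
photon image create photon-os.ova -n PhotonOS -i ON_DEMAND
```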

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that Storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.
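To make the storage-profile idea concrete, a tagged flavor might look like the sketch below. This is hypothetical: the lab has not tagged any datastores, and the exact cost-key syntax for datastore tags is an assumption.

```shell
# Hypothetical sketch - assumes a datastore tag named FAST exists, and that
# tags surface as flavor cost keys of the form storage.<TAG>.
photon -n flavor create -n my-fast-disk -k persistent-disk -c "storage.FAST 1 COUNT"
```

A disk created with such a flavor would only be placed on datastores carrying the matching tag.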

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing; disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag, and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional, and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
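If you would rather not copy UUIDs by hand, the two lookups can be captured into shell variables first. This is a sketch that assumes the UUID appears in the first column of the list output:

```shell
# Sketch - assumes `photon ... list` prints the UUID in the first column.
NET_UUID=$(photon network list | awk '/lab-network/ {print $1}')
IMG_UUID=$(photon image list | awk '/PhotonOS/ {print $1}')
photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" -w "$NET_UUID" -i "$IMG_UUID"
```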

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Show VM Details

Now we will see the attached disk using the vm show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks differ from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through a Docker volume mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address of lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the IP of lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
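If you are scripting this, the wait can be turned into a retry loop. In this runnable sketch the photon call is replaced by a mock that only returns an IP on the third try; in the lab you would call photon vm networks <UUID of lab-vm1> and sleep 10-15 seconds between attempts:

```shell
attempts=0
get_ip() {
  # Mock of: photon vm networks <UUID of lab-vm1>
  # Returns nothing for the first two calls, then an IP.
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then
    ip="192.168.100.176"
  else
    ip=""
  fi
}

ip=""
while [ -z "$ip" ]; do
  get_ip          # a real loop would also: sleep 15
done
echo "lab-vm1 IP: $ip (found after $attempts tries)"
```
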


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>   (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script that executes these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from within the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command: docker run creates a container. The -v flag creates a Docker volume in the container at /volume that is mounted on /mnt/dockervolume from the host. The -d flag keeps the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
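The container ID can also be captured from docker ps rather than typed by hand. A sketch against mocked docker ps output (docker prints the ID in the first column, and docker exec accepts any unambiguous prefix of it):

```shell
# Mocked "docker ps" output; on the real host you would pipe docker ps.
ps_output='CONTAINER ID   IMAGE                       COMMAND
f3a91c20b7d4   192.168.120.20:5000/nginx   "nginx -g ..."'

# Skip the header row and take the first column of the first container.
cid=$(printf '%s\n' "$ps_output" | awk 'NR > 1 {print $1; exit}')
short=${cid:0:3}          # the first three characters are enough here
echo "$cid -> $short"
# Real run: docker exec -it "$short" bash
```
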

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, make whatever modifications you want. These are the steps for a very simple modification:

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx!"

3. Press the right arrow until you are at the character n in nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
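If you would rather not use vi, the same one-word change can be made non-interactively with sed. This sketch runs against a temporary copy so it is safe anywhere; in the lab the target would be /mnt/dockervolume/index.html:

```shell
# Work on a temp file so the sketch can be run outside the lab VM.
tmp=$(mktemp)
printf '<h1>Welcome to nginx!</h1>\n' > "$tmp"

# Replace the phrase in place, mirroring the cw edit made in vi.
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' "$tmp"
cat "$tmp"
```
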

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

Start and Connect to lab-vm2

1. To start lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>   (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script that executes these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be served as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command: docker run creates a container. The -v flag creates a Docker volume in the container at /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. The -d flag keeps the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk-2>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
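The cleanup sequence can be rehearsed as a loop. Here photon is shadowed by an echo mock so the sketch runs without the real CLI, and the UUID values are placeholders rather than real lab values:

```shell
photon() { echo "photon $*"; }   # mock -- remove to run against the real CLI

DISK_UUID="<uuid-of-disk-2>"     # placeholder
for vm in "<uuid-of-lab-vm2>" "<uuid-of-lab-vm1>"; do
  photon vm stop "$vm"
  if [ "$vm" = "<uuid-of-lab-vm2>" ]; then
    # Only lab-vm2 still has the persistent disk attached at this point.
    photon vm detach-disk "$vm" --disk "$DISK_UUID"
  fi
  photon vm delete "$vm"
done
```
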


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but this will be enhanced over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201   (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
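Steps 2-4 above are the same on both hosts, so they collapse naturally into a loop. ssh is mocked here so the sketch runs as-is; drop the mock to run it in the lab:

```shell
ssh() { echo "ssh $*"; }   # mock -- remove in the lab

restarted=0
for host in 192.168.110.201 192.168.110.202; do
  # Real run: ssh root@$host (password VMware1), then restart the agent.
  ssh "root@$host" /etc/init.d/photon-controller-agent restart
  restarted=$((restarted + 1))
done
echo "restarted agents on $restarted hosts"
```
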

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Click on Select Metrics and select photon.

2. Click on Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to get more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so the create will fail. The error message tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.
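If you script the failed create, the Task ID can be captured for the LogInsight query. The output line below is invented for illustration; match the pattern against what your CLI actually prints:

```shell
# Invented failure line; real "photon vm create" output will differ.
create_output="Task 'a1b2c3d4e5f6' failed: NotEnoughMemoryResource"

# Grab the quoted token after "Task" and strip the quotes.
task_id=$(printf '%s\n' "$create_output" \
  | grep -o "Task '[^']*'" | tr -d "'" | awk '{print $2}')
echo "$task_id"
```
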


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through Request IDs.

5. In Photon Platform, every request through the API gets a Request ID. There could be many Request IDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the Request ID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the Request ID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find Request ID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the Request ID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The Request ID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that Request ID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the Request ID will provide new data to help root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver, see that Kubernetes detects the failure and restarts a new container for you, and also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab; if you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns <dns-server> --gateway <gateway> --netmask <netmask> --master-ip <kube-master-ip> --container-network <kubernetes-container-network> --etcd1 <static-ip> -w <UUID of demo network> -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in an environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the Nodes.
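To make these pieces concrete, a minimal Pod definition might look like the sketch below. This is illustrative only: the names, labels and image path are assumptions, and the lab's actual files in ~/demo-nginx may differ.

```yaml
# Illustrative Pod sketch -- not the lab's actual nginx-pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo        # Services and Replication Controllers select Pods by label
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # the lab's local registry
    ports:
    - containerPort: 80
```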

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number of the external endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see that the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID of the Rancher server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker, and this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with the Rancher server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI. Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click the mouse or press Ctrl-V, and hit Return.

View The Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx listens on port 80 by default; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
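The image reference entered above is just a registry host, a registry port, and an image name (the separators are restored here from the lab's registry address). A quick shell sketch shows the decomposition:

```shell
# Decompose a private-registry image reference of the form
# <registry-host>:<registry-port>/<image-name>.
# The value below matches the registry used in this lab.
IMAGE_REF="192.168.120.20:5000/nginx"

HOST=${IMAGE_REF%%:*}   # strip everything from the first ':' onward
REST=${IMAGE_REF#*:}    # strip through the first ':'
PORT=${REST%%/*}        # strip everything from the first '/' onward
NAME=${IMAGE_REF#*/}    # strip through the first '/'

echo "registry host: $HOST"
echo "registry port: $PORT"
echo "image name:    $NAME"
```

Docker uses the same convention for any non-default registry: anything before the first slash that contains a ':' or '.' is treated as the registry address rather than part of the image name.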


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


Create Resource-Ticket

1. Click on Resource Ticket.
2. Click on the + sign.
3. Enter a Resource Ticket name (no spaces in the name).
4. Enter numeric values for each field.
5. Click OK.
6. Optionally, click on Projects and follow the Tenant Create steps to create a new project to allocate the Resource Ticket to.

You have now made additional resources available to the Kube Tenant and can allocate them to a new project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects.


Cloud Administration - Images and Flavors

Continuing the theme from the previous lesson: cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of the images used for VM and disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator; datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear icon in the upper right of the screen, and then Images.

Kube-Image

You will notice that we have a few images in our system. The photon-management image was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear icon again, and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or disk creation command.

1. In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.


Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying, in order to facilitate chargeback or showback. The second use case is storage profiles: datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.), and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single Persistent Disk Flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2, you will take a deep dive into the Infrastructure-as-a-Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with Nginx to display a web page, using port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of the base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and troubleshoot applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: if you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. SSH into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10. The password is vmware.
2. You must change to the root user. Execute: su. The password is vmware.
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: if you experience problems with keyboard input not showing up in the Putty session, this is probably because the taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open-source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.
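The --limits argument packs several limits into one quoted string: a comma-separated list of "<key> <value> <unit>" triples (the dotted key names, such as vm.memory, are restored here following the Photon CLI convention). A small shell fragment, illustrative only, shows how the string splits into individual limits:

```shell
# Illustrative only: the --limits value used above, split into its
# individual "<key> <value> <unit>" limit triples.
LIMITS="vm.memory 200 GB, vm 1000 COUNT"
echo "$LIMITS" | tr ',' '\n' | sed 's/^ *//'
```

Each resulting line is one limit: a key the scheduler tracks, a numeric value, and a unit (GB or COUNT).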

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the limit that was set and the actual usage of the allocated resources.
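Because a project's limits are carved out of its Resource Ticket, the headroom left on the ticket for additional projects is simple subtraction. A sketch using the numbers from this lab:

```shell
# Ticket limits (lab-ticket) and project limits (lab-project) from this lab.
TICKET_MEM_GB=200;  TICKET_VMS=1000
PROJECT_MEM_GB=100; PROJECT_VMS=500

# Headroom remaining on the ticket for additional projects.
echo "memory left on ticket: $((TICKET_MEM_GB - PROJECT_MEM_GB)) GB"
echo "VM slots left on ticket: $((TICKET_VMS - PROJECT_VMS))"
```

If you created further projects against lab-ticket, their combined limits could not exceed this remaining headroom.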

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to creating objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator; datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a replication type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.
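The trade-off can be made concrete with purely illustrative numbers (they do not come from this lab): a 2 GB image in an environment with five CLOUD datastores, where the scheduler has so far placed VMs on only two of them.

```shell
# Illustrative numbers only - not taken from this lab environment.
IMAGE_GB=2
CLOUD_DATASTORES=5   # datastores tagged CLOUD (EAGER copies go to all of them)
PLACED_ON=2          # datastores the scheduler has actually placed VMs onto

echo "EAGER storage used:     $((IMAGE_GB * CLOUD_DATASTORES)) GB"
echo "ON_DEMAND storage used: $((IMAGE_GB * PLACED_ON)) GB"
```

EAGER pays the full storage cost up front in exchange for fast clones everywhere; ON_DEMAND pays a one-time download delay per datastore in exchange for storing fewer copies.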

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: you can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile (the tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement). boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
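Rather than copying UUIDs by hand, the create command can be scripted with command substitution. This is a sketch, not a lab step: the stubbed list output below is a stand-in for the real photon network list table, whose exact column layout may differ.

```shell
# Stand-in for 'photon network list'; the UUID value and column layout
# here are hypothetical, used only to demonstrate the scripting pattern.
list_networks() {
  printf '%s\n' 'aaaabbbb-1111-2222-3333-ccccddddeeee  lab-network  READY'
}

# Grab the first column of the row matching our network name.
NET_UUID=$(list_networks | awk '/lab-network/ {print $1}')
echo "network UUID: $NET_UUID"

# With real CLI output, the captured UUIDs would feed straight into:
#   photon vm create --name lab-vm1 --flavor my-vm \
#     --disks "disk-1 my-eph-disk boot=true" -w "$NET_UUID" -i "$IMG_UUID"
```

The same pattern works for the image UUID via photon image list.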

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: the easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.

Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>

Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB sets the capacity of the disk to 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.
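The DETACHED/ATTACHED state can also be checked from a script before attempting an attach. This sketch simulates photon disk show output with a here-doc (the field labels are assumptions based on the text above) and parses the state field.

```shell
# Simulated 'photon disk show' output; field labels are assumptions.
disk_show() {
  cat <<'EOF'
Disk ID:  11111111-2222-3333-4444-555555555555
Name:     disk-2
State:    DETACHED
Capacity: 2 GB
EOF
}

# Pull the value of the State field.
STATE=$(disk_show | awk '/^State:/ {print $2}')
if [ "$STATE" = "DETACHED" ]; then
  echo "disk is detached - safe to attach"
else
  echo "disk is $STATE - detach it first"
fi
```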

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously

1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>

Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3 To find the VM IP for lab-vm1 Execute

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1> (the password is VMware1)

Setup filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container in detached mode, in the background. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
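To make the pieces of the docker run command explicit, here is the same invocation built from variables. The values match the lab environment described above; the final line is left commented out since it only works on lab-vm1.

```shell
# Components of the docker run command from the step above.
REGISTRY=192.168.120.20:5000   # local Docker registry from the lab
IMAGE=nginx
HOST_DIR=/mnt/dockervolume     # persistent-disk mount point on the host
CONTAINER_DIR=/volume          # mount point inside the container

CMD="docker run -v ${HOST_DIR}:${CONTAINER_DIR} -d -p 80:80 ${REGISTRY}/${IMAGE}"
echo "$CMD"
# eval "$CMD"   # uncomment on lab-vm1 to actually start the container
```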

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume) Execute

df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14 with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : (colon) key

6 At the prompt enter wq to save changes and exit vi

7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
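For reference, the same modification can be scripted with sed instead of vi. The block below works on a stand-in copy of the file; on lab-vm1 you would point it at /mnt/dockervolume/index.html instead. The replacement text matches the manual edit above.

```shell
# Create a stand-in index.html to edit (on lab-vm1 the real file is
# /mnt/dockervolume/index.html).
TMP=$(mktemp -d)
cat > "$TMP/index.html" <<'EOF'
<h1>Welcome to nginx!</h1>
EOF

# Non-interactive equivalent of the vi 'cw' edit above.
sed -i 's/Welcome to nginx!/Welcome to the Hands On Lab At VMWORLD 2016!/' "$TMP/index.html"
grep -o '<h1>.*</h1>' "$TMP/index.html"
```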

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2> (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.
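The reason the two mount scripts must not be mixed up can be sketched as follows. This is a dry run (it only echoes the commands) of the assumed logic: the vm1 script creates the filesystem, while the vm2 script only mounts the one that already exists. The device and mount point come from the lab text; the filesystem type is an assumption.

```shell
setup_disk() {  # usage: setup_disk yes|no  (yes = create the filesystem)
  DEV=/dev/sdb
  MNT=/mnt/dockervolume
  if [ "$1" = "yes" ]; then
    echo "mkfs.ext4 $DEV"   # destroys existing data - mount-disk-lab-vm1.sh only
  fi
  echo "mkdir -p $MNT"
  echo "mount $DEV $MNT"
}

setup_disk yes   # roughly what mount-disk-lab-vm1.sh does
setup_disk no    # roughly what mount-disk-lab-vm2.sh does
```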

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as its default document root, so our changed home page on the persistent disk will be served as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container in detached mode, in the background. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.

HOL-1730-USE-2

Page 64HOL-1730-USE-2

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3 you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

Note the UUIDs of the two VMs.

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1 Log in to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
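Steps 2-5 can be collapsed into a small loop over both hosts. It is printed as a dry run here; drop the echo (keeping the ssh command) to actually restart the agents from the PhotonControllerCLI VM.

```shell
# Dry run: print the restart command for each ESXi host.
CMDS=$(for HOST in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$HOST /etc/init.d/photon-controller-agent restart"
done)
echo "$CMDS"
```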

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon

2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the Create command. We are going to use that in a LogInsight query.

Connect To Loginsight

1 From your browser, select the LogInsight Bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn-transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1 From the Windows Desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid of demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute

cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
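If you want a feel for the shape of these files before opening them, here is a minimal Pod definition written to a scratch location. The names, labels, and image path are assumptions for illustration; compare against the actual nginx-pod.yaml from step 1.

```shell
# Write a minimal, hypothetical Pod spec to a scratch file.
cat > /tmp/nginx-pod-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # assumed local-registry path
    ports:
    - containerPort: 80
EOF
cat /tmp/nginx-pod-sketch.yaml
```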

Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2 Execute docker kill <Container ID>. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1 Execute ssh root@192.168.100.201 (the password is vmware)
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Paste using either a right-click of the mouse or Ctrl-V, then hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is in the form IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
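The UI fields in the steps above correspond roughly to a single docker run invocation. A sketch of composing it; the port and image values come from the lab steps, while the overall command shape is an assumption about what Rancher runs on your behalf:

```shell
# Map host port 2000 to container port 80 on the lab-registry nginx image.
HOST_PORT=2000
CONTAINER_PORT=80
IMAGE="192.168.120.20:5000/nginx"
CMD="docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} ${IMAGE}"
echo "$CMD"
```

The -p host:container mapping is what lets you reach Nginx on port 2000 of the host VM even though the container itself listens on port 80.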


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud-native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Cloud Administration - Images and Flavors

Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources, as well as centralized management of images used for VM and Disk creation. In this lesson you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.

Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform.

1. Click on the gear in the upper right of the screen and then Images.

Kube-Image

You will notice that we have a few images in our system. The photon-management image is the image that was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again and then click Flavors.

When you are done, close the Images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.

HOL-1730-USE-2

Page 26HOL-1730-USE-2

Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
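The tag-matching behavior described above can be pictured as a simple filter over datastores: the scheduler only considers datastores whose tag matches the flavor's tag. A sketch with hypothetical datastore names and tags:

```shell
# Keep only datastores whose tag matches the flavor's tag.
# Datastore names and tags here are hypothetical illustrations.
FLAVOR_TAG="SHARED"
CANDIDATES=""
for DS in "ds-local-1:LOCAL" "ds-shared-1:SHARED" "ds-shared-2:SHARED"; do
  NAME="${DS%%:*}"   # text before the colon
  TAG="${DS#*:}"     # text after the colon
  if [ "$TAG" = "$FLAVOR_TAG" ]; then
    CANDIDATES="$CANDIDATES $NAME"
  fi
done
echo "placement candidates:$CANDIDATES"
```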


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. Password is vmware.
2. You must change to the root user. Execute su. Password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.
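The subset relationship described above - a project's limits must fit inside its tenant's resource ticket - can be checked with simple arithmetic. A sketch using the numbers from this lab:

```shell
# Tenant resource-ticket limits vs. project limits used in this lab.
TICKET_GB=200;  TICKET_VMS=1000
PROJECT_GB=100; PROJECT_VMS=500
if [ "$PROJECT_GB" -le "$TICKET_GB" ] && [ "$PROJECT_VMS" -le "$TICKET_VMS" ]; then
  RESULT="project fits inside ticket"
else
  RESULT="project exceeds ticket"
fi
echo "$RESULT"
```

Photon Platform enforces this containment for you: a project create that asks for more than the ticket holds will be rejected.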

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. We will now move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -n PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
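The tradeoff between the two replication types can be restated compactly. A sketch; the wording is ours, not CLI output:

```shell
# Summarize the two image replication types described above.
describe() {
  case "$1" in
    EAGER)     echo "copied to every CLOUD datastore at upload: fast clones, more storage" ;;
    ON_DEMAND) echo "copied at placement time: slower first create, less storage" ;;
  esac
}
describe EAGER
describe ON_DEMAND
```

In practice, EAGER suits frequently cloned base images, while ON_DEMAND suits rarely used images where storage efficiency matters more than first-clone latency.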

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create 1 of each type of Flavor to be used in this module

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
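Typing long UUIDs by hand is error-prone; one common trick is to capture them into shell variables first. The snippet below runs against a canned sample of `photon image list` output so it can stand alone; the UUID and columns are invented for illustration, and in the lab you would pipe the real command instead of the echo.

```shell
# Canned sample standing in for real `photon image list` output
# (the UUID and columns are illustrative, not from the lab).
sample='b8a2f4d0-1111-2222-3333-444455556666  photon-os  EAGER  READY'

# Take the first whitespace-separated field of the matching row: the UUID.
IMAGE_UUID=$(echo "$sample" | awk '/photon-os/ {print $1}')
echo "$IMAGE_UUID"
```

In the lab, the same pattern would be `IMAGE_UUID=$(photon image list | awk '/photon/ {print $1}')`, after which the create command can reference `-i $IMAGE_UUID` instead of a pasted UUID.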

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3 To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1>   (password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
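The mount script itself is not listed in the manual. A plausible sketch of the three steps it performs is below; mkfs/mkdir/mount are assumptions based on the description, and the commands are printed rather than executed here, since they need root and the lab VM's /dev/sdb device.

```shell
# Device and mount point named in the lab text.
dev=/dev/sdb
mnt=/mnt/dockervolume

# Print the presumed steps instead of running them (they require the lab VM):
# format the disk, create the mount point, then mount it.
printf '%s\n' "mkfs.ext4 $dev" "mkdir -p $mnt" "mount $dev $mnt"
```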

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016

5 Press the esc key and then the : key

6 At the prompt, enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
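If you would rather not use vi, the same edit can be made non-interactively with sed. It is shown here against a sample heading line so it runs anywhere; in the lab, the target would be the file itself, e.g. `sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /mnt/dockervolume/index.html`.

```shell
# Sample line standing in for the heading in the nginx index.html
# (illustrative; the real file lives at /mnt/dockervolume/index.html).
line='<h1>Welcome to nginx!</h1>'

# Replace the heading text, as the vi cw edit above does.
echo "$line" | sed 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/'
```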

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2>   (password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage
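The selections above correspond to dotted metric paths in Graphite, roughly of the following shape (the host segment is a placeholder; use the host names you see in your own Metrics tree):

```
photon.<esxi-host>.cpu.usage
photon.<esxi-host>.mem.usage
```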

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201   (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5) Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.



Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Lets take a look at the VMs that make up our cluster Execute

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
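If you cannot open the lab files, the sketch below shows roughly what a replication controller file of this kind contains. It is illustrative only, not the lab's actual nginx-rc.yaml; the names, labels, and image tag are assumptions (the registry address follows the pattern used earlier in this lab):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                  # maintain 3 copies of the pod
  selector:
    app: nginx-demo
  template:                    # pod template used to create each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
```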


Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin with password 4HjyqnFZK4tntbUZ. (Sorry about the randomly generated password.) You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancherserver to see the running container Find theContainer ID for the RancherServer container That is the one we want toremove

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.
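Steps 1 and 2 can also be combined into a single pipeline; a sketch, assuming the image name in the docker ps output contains rancher/server (adjust the pattern to match what your output actually shows):

```shell
# In the lab you would run:
#   docker ps | awk '/rancher\/server/ {print $1}' | xargs docker kill
# Demonstrating the awk extraction on illustrative 'docker ps' output:
ps_output='CONTAINER ID  IMAGE                COMMAND
abc123def456  rancher/server:v1.1  "/usr/bin/entry"'
cid=$(printf '%s\n' "$ps_output" | awk '/rancher\/server/ {print $1}')
echo "$cid"
```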

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
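For reference, the container settings above correspond roughly to a plain docker run on the host; a sketch (Rancher adds its own labels and agent networking, so the exact invocation it issues will differ):

```shell
HOST_IP=192.168.100.201   # the Rancher Host VM used in this lab
HOST_PORT=2000            # host port mapped to the container's port 80
# On the host itself, the UI settings are roughly equivalent to:
#   docker run -d -p ${HOST_PORT}:80 192.168.120.20:5000/nginx
# The page is then reachable at:
echo "http://${HOST_IP}:${HOST_PORT}"
```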

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


earlier steps, and the kube image that was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module.

1. Click the X to close the panel.

Flavors

1. Click on the gear again and then click Flavors.

When you are done, close the images panel so that you can see the gear icon again.

Kube-Flavor

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM. Persistent disks can be created independent from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then, if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes. You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.

Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a cost with the storage you are deploying in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
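The tag-based placement described above is expressed as an extra entry in a flavor's cost list; a hypothetical sketch, where storage.SHARED is an assumed datastore tag (not one defined in this lab) and the command form follows the flavor-create commands you will run in Module 2:

```shell
# Hypothetical: a disk flavor that both counts usage (for Chargeback) and
# carries an assumed datastore tag ('storage.SHARED') as a placement constraint:
#   photon flavor create -n tagged-eph-disk -k ephemeral-disk \
#       -c "ephemeral-disk 1 COUNT, storage.SHARED 1 COUNT"
```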

Persistent Disk Flavors

1. Click on Persistent Disks.

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.

Conclusion

Cloud scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale: abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is, and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most. [Add any custom/optional information for your lab manual]

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)

How to End Lab

To end your lab, click on the END button.

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)

Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.

Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.

Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.

Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. Password is vmware.
2. You must change to the root user. Execute su. Password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.

Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication. In that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.

Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the open-source Identity Management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.

Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will move on to create objects within the Project.

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create filename -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.

View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the Network we have created, execute:

photon network list

Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious, it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition, however it could have defined a Cost for Chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use, in this case the PhotonOS image. To get the UUID of the image execute photon image list.
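Because both UUIDs come from list commands, the create step can also be scripted. A minimal sketch, assuming the list commands print an ID column first and a name column second; the extract_id helper and the sample rows below are hypothetical, not real lab output:

```shell
# Hypothetical helper: pull the ID (first column) of the row whose second
# column matches a name, from `photon ... list`-style tabular output.
extract_id() {                       # $1 = table text, $2 = name to match
  awk -v name="$2" '$2 == name {print $1; exit}' <<< "$1"
}

# Illustrative samples shaped like the list output (assumed format)
networks="a1b2c3d4  lab-network  READY"
images="e5f6a7b8  photon-os    READY"

NET_ID=$(extract_id "$networks" lab-network)
IMG_ID=$(extract_id "$images" photon-os)
echo "$NET_ID $IMG_ID"

# In the lab you would then run:
# photon vm create --name lab-vm1 --flavor my-vm \
#   --disks "disk-1 my-eph-disk boot=true" -w "$NET_ID" -i "$IMG_ID"
```

In the lab itself you would pipe the real `photon network list` and `photon image list` output into the helper instead of the sample strings.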

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static. It must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers. Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the vm IP for lab-vm1 Execute

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@IP-of-lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the ip address and port of the registry.
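The IP:port/image convention can be taken apart with plain shell parameter expansion. A small runnable sketch using the registry reference from the lab text (purely illustrative; nothing here talks to Docker):

```shell
# Split <registry-ip:port>/<image> into its parts.
image_ref="192.168.120.20:5000/nginx"
registry="${image_ref%%/*}"   # everything before the first "/" -> registry host:port
image="${image_ref#*/}"       # everything after the first "/"  -> image name
port="${registry##*:}"        # everything after the last ":"   -> registry port
echo "$registry $image $port"
```

An image reference without a registry prefix (plain `nginx`) would instead be pulled from the default public registry, which is why the lab tags images with the local registry's address.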


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.
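Finding those first three characters can also be scripted. A hedged sketch that parses a mocked-up `docker ps` line; the sample output below only mimics the shape of the real thing (in the lab, pipe the actual `docker ps` instead):

```shell
# Hypothetical sample shaped like `docker ps` output (not real lab data)
sample_ps="CONTAINER ID   IMAGE                       COMMAND
3f2a1b9c7d6e   192.168.120.20:5000/nginx   nginx"

# Grab the first 3 characters of the ID on the row mentioning nginx.
short_id=$(awk '/nginx/ {print substr($1, 1, 3); exit}' <<< "$sample_ps")
echo "$short_id"

# In the lab: docker exec -it "$short_id" bash
```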

2 To see the filesystem inside the container and verify your Docker volume (/volume) Execute


df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14 with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5 Press the esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@IP-of-lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the pages it serves, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi Hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step View Graphite Data Through Grafana.

You will ssh into our two esxi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202
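Steps 2-4 can be collapsed into one loop over both hosts. A hedged sketch; the ssh invocation is commented out so the snippet runs anywhere (in the lab you would uncomment it and supply the VMware1 password as in step 2):

```shell
# Restart the photon-controller-agent on each given host.
restart_agents() {
  for host in "$@"; do
    # In the lab, uncomment the next line:
    # ssh root@"$host" /etc/init.d/photon-controller-agent restart
    echo "restarted photon-controller-agent on $host"
  done
}

restart_agents 192.168.110.201 192.168.110.202
```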

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the esxi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the Create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you login you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a requestID. There could be many ReqIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn-transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
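The lab's actual files are what the cat commands above display. For orientation, a minimal ReplicationController for this kind of app might look like the sketch below; all names, labels, and the image reference here are assumptions for illustration, not the lab's real nginx-rc.yaml:

```yaml
# Hypothetical nginx-rc.yaml sketch: keeps 3 nginx replicas running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                  # desired copy count the RC maintains
  selector:
    app: nginx-demo            # pods carrying this label are managed
  template:                    # pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If a pod with the `app: nginx-demo` label dies, the controller notices the replica count dropped below 3 and starts a replacement from the template — this is the restart behavior you will exercise later.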


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link

2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill [Container ID]. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
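Steps 1 and 2 can be combined into a single pipeline. The sketch below runs against a canned `docker ps` listing — the container ID and image tag are made up for illustration — rather than a live Docker daemon:

```shell
# Hypothetical "docker ps" output; in the lab you would use a real:
#   docker ps | grep rancher/server
sample='CONTAINER ID   IMAGE                                COMMAND
f3a1b2c4d5e6   192.168.120.20:5000/rancher/server   "/usr/bin/entry"
0a1b2c3d4e5f   nginx                                "nginx -g daemon"'

# Grab the first column (the Container ID) of the matching row ...
OLD_ID=$(printf '%s\n' "$sample" | grep 'rancher/server' | awk '{print $1}')
echo "$OLD_ID"
# ... then, on the live system, you would run: docker kill "$OLD_ID"
```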

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
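The four steps above could also be sent to the host in one ssh invocation. The sketch below only assembles and prints the remote command string (it does not actually ssh anywhere); the host IP and file paths are taken from the steps above:

```shell
# Build the cleanup command exactly as listed in steps 2-4.
REMOTE_CMD='rm -rf /var/lib/rancher/state && docker rm -vf rancher-agent && docker rm -vf rancher-agent-state'

# On the live system you would run it as:
#   ssh root@192.168.100.201 "$REMOTE_CMD"
echo "ssh root@192.168.100.201 \"$REMOTE_CMD\""
```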

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
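Rancher's port map fields are the UI equivalent of Docker's -p host:container option. A sketch of the docker run this corresponds to (printed rather than executed here, since the image lives in the lab's local registry):

```shell
HOST_PORT=2000       # the port you will browse to on the Rancher Host VM
CONTAINER_PORT=80    # nginx's default listen port inside the container
IMAGE='192.168.120.20:5000/nginx'

# The command Rancher effectively issues on the host for this mapping.
RUN_CMD="docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} $IMAGE"
echo "$RUN_CMD"
```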

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications, in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

• Table of Contents
• Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
  • Lab Guidance
    • Location of the Main Console
    • Activation Prompt or Watermark
    • Alternate Methods of Keyboard Data Entry
    • Click and Drag Lab Manual Content Into Console Active Window
    • Accessing the Online International Keyboard
    • Click once in active console window
    • Click on the key
    • Look at the lower right portion of the screen
• Module 1 - What is Photon Platform (15 minutes)
  • Introduction
  • What is Photon Platform - How Is It Different From vSphere?
    • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap, Not all are implemented in the Pre-GA Release)
  • Cloud Administration - Multi-Tenancy and Resource Management
    • Connect To Photon Platform Management UI
    • Photon Controller Management UI
    • The Control Plane Resources
    • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
    • Control Plane Services
    • Cloud Resources
    • Tenants
    • Our Kubernetes Tenant
    • Kube-Tenant Detail
    • Kube-Project Detail
    • Kube Tenant Resource-Ticket
    • Create Resource-Ticket
  • Cloud Administration - Images and Flavors
    • Images
    • Kube-Image
    • Flavors
    • Kube-Flavor
    • Ephemeral Disk Flavors
    • Persistent Disk Flavors
  • Conclusion
    • You've finished Module 1
    • How to End Lab
• Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
  • Introduction
  • Multi-Tenancy and Resource Management in Photon Platform
    • Login To CLI VM
    • Verify Photon CLI Target
    • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
    • Photon CLI Overview
    • Photon CLI Context Help
    • Create Tenant
    • Create Resource Ticket
    • Create Project
  • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
    • View Images
    • View Flavors
    • Create New Flavors
    • Create Networks
    • Create VM
    • Create a Second VM
    • Start VM
    • Show VM details
    • Stop VM
    • Persistent Disks
    • Attach Persistent Disk To VM
    • Show VM Details
  • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
    • Deploy Nginx Web Server
    • Connect to lab-vm1
    • Setup filesystem
    • Create The Nginx Container With Docker Volume
    • Verify Webserver Is Running
    • Modify Nginx Home Page
    • Edit The index.html
    • Detach The Persistent Disk
    • Attach The Persistent Disk To New VM
    • Start and Connect to lab-vm2
    • Setup Filesystem
    • Create The New Nginx Container
    • Verify That Our New Webserver Reflects Our Changes
    • Clean Up VMs
  • Monitor and Troubleshoot Photon Platform
    • Enabling Statistics and Log Collection
    • Monitoring Photon Platform With Graphite Server
    • Expand To View Available Metrics
    • No Performance Data in Graphite
    • View Graphite Data Through Grafana
    • Graphite Data Source For Grafana
    • Create Grafana Dashboard
    • Add A Panel
    • Open Metrics Panel
    • Add Metrics To Panel
    • Troubleshooting Photon Platform With LogInsight
    • Connect To LogInsight
    • Query For The Create Task
    • Browse The Logs For Interesting Task Error Then Find RequestID
    • Search The RequestID For RESERVE_RESOURCE
  • Conclusion
• Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
  • Introduction
  • Container Orchestration With Kubernetes on Photon Platform
    • Kubernetes Deployment On Photon Platform
    • Photon Cluster Create Command
    • Kube-Up On Photon Platform
    • Our Lab Kubernetes Cluster Details
    • Basic Introduction To Kubernetes Application Components
    • Deploying An Application On Kubernetes Cluster
    • Kubectl To Deploy The App
    • Kubernetes UI Shows Our Running Application
    • Application Details
    • Your Running Pods
    • Connect To Your Application Web Page
  • Container Orchestration With Docker Machine Using Rancher on Photon Platform
    • Login To Photon Controller CLI VM
    • Deploy Rancher Server
    • Clean Up Rancher Host
    • Connect To Rancher UI
    • Add Rancher Host
    • Paste In The Docker Run Command To Start Rancher Agent
    • View the Agent Container
    • Verify New Host Has Been Added
    • Deploy Nginx Webserver
    • Configure Container Info
    • Container Information
    • Open Your Webserver
    • Rancher Catalogs
  • Conclusion
• Conclusion
that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

2. Click on Ephemeral Disks.

Ephemeral Disk Flavors

Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.

Persistent Disk Flavors

1. Click on Persistent Disks.

We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2.

Conclusion

Cloud scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)

How to End Lab

To end your lab, click on the END button.

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)

Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, vms and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with (nginx) to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab, however NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.

Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop

1. Click on the Putty Icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys, however if you are prompted for a password, use vmware.

Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.

Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. Password is vmware.

2. You must change to the root user. Execute su. Password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.

Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1 Execute

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication. In that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.

Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.
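The --limits argument is just a comma-separated list of "key value UNIT" triples. Splitting the string used above makes the two limits visible:

```shell
limits='vm.memory 200 GB, vm 1000 COUNT'

# One triple per line: the key (vm.memory, vm), the value, and the unit (GB, COUNT).
parsed=$(printf '%s\n' "$limits" | tr ',' '\n' | sed 's/^ *//')
printf '%s\n' "$parsed"
```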

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon [entity-type] list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.

Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set, and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to create objects within the Project.
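The project's limits must fit inside the ticket's. A quick sanity check of the numbers used above — 200 GB / 1000 VMs on the ticket, 100 GB / 500 VMs on the project — shows what remains available for further projects:

```shell
ticket_gb=200;   project_gb=100    # vm.memory limits, in GB
ticket_vms=1000; project_vms=500   # vm COUNT limits

# Subtract the project's allocation from the ticket's pool.
mem_left=$((ticket_gb - project_gb))
vms_left=$((ticket_vms - project_vms))
echo "remaining on lab-ticket: ${mem_left} GB memory, ${vms_left} VMs"
```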

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab, however NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints, however the command to do it is photon image create [filename] -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show [UUID of image]. The UUID of the image is in the photon image list command results.

View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the Network we have created, execute:

photon network list

Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w [UUID of your Network] -i [UUID of your PhotonOS image]

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing. disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition, however it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use. In this case we want to use the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks disk-1 my-eph-disk boot=true -w UUID of your network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.
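If you ever script this workflow, it is useful to read that state programmatically. This is a hedged sketch: the "State: DETACHED" line format is an assumption about the photon disk show output, so adjust the pattern to what your CLI version actually prints.

```shell
# Sketch: grab the state field from "photon disk show" output so a script can
# wait for DETACHED before re-attaching the disk elsewhere.
# The "State: DETACHED" line format is an assumption; verify against your CLI.
disk_state() {
  awk -F': *' '/State/ { print $2; exit }'
}

# Typical usage (requires the photon CLI):
#   until [ "$(photon disk show "$DISK_UUID" | disk_state)" = "DETACHED" ]; do
#     sleep 2
#   done
```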

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice in the disk information that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
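If you prefer to check the mount from a script rather than by eye, a small helper can scan df output for the mountpoint. This is a sketch, relying only on the mountpoint being the last column of each df line:

```shell
# Sketch: succeed only if the given path appears as the mountpoint (the last
# column) in df-style output.
is_mounted() {
  awk -v m="$1" '$NF == m { found = 1 } END { exit !found }'
}

# Typical usage on the lab VM:
#   df | is_mounted /mnt/dockervolume && echo "persistent disk mounted"
```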

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
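Because the same docker run shape is reused later in the lab with only the in-container mount path changing, it can help to parameterize it. A sketch (the registry address and host path below are this lab's values):

```shell
# Sketch: build the lab's docker run command from variables so one helper
# serves both Docker hosts (only the in-container mount path changes).
REGISTRY=192.168.120.20:5000    # the lab's local Docker registry
HOST_DIR=/mnt/dockervolume      # where the persistent disk is mounted

nginx_run_cmd() {
  # $1 = mount path inside the container
  printf 'docker run -v %s:%s -d -p 80:80 %s/nginx' "$HOST_DIR" "$1" "$REGISTRY"
}

# On lab-vm1:  eval "$(nginx_run_cmd /volume)"
# On lab-vm2:  eval "$(nginx_run_cmd /usr/share/nginx/html)"
```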


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, make whatever modifications you want. These are the steps for a very simple modification:

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
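The interactive vi edit above can also be done non-interactively, which is handy if you want to script the change. A sketch using sed; it assumes the stock nginx heading text "Welcome to nginx!":

```shell
# Sketch: rewrite the nginx welcome heading without opening an editor.
# Assumes the default heading text "Welcome to nginx!".
retitle() {
  sed 's/Welcome to nginx!/Welcome to the Hands On Lab At VMWORLD 2016!/'
}

# On the Docker host:
#   retitle < /mnt/dockervolume/index.html > /tmp/index.html \
#     && mv /tmp/index.html /mnt/dockervolume/index.html
```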

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx serves its default pages from /usr/share/nginx/html, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, first execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk-2

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
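The cleanup sequence above can be wrapped in one function if you script it. A sketch (assumes the photon CLI; pass an empty disk UUID for the VM with no persistent disk attached):

```shell
# Sketch: stop, optionally detach, and delete one lab VM.
cleanup_vm() {
  vm=$1
  disk=$2   # pass "" if no persistent disk is attached
  photon vm stop "$vm"
  if [ -n "$disk" ]; then
    photon vm detach-disk "$vm" --disk "$disk"
  fi
  photon vm delete "$vm"
}

# Typical usage:
#   cleanup_vm "UUID of lab-vm2" "UUID of disk-2"
#   cleanup_vm "UUID of lab-vm1" ""
```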


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following steps only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
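Steps 2-4 above can also be run against both hosts in one loop. A sketch; it assumes root ssh access to the lab ESXi hosts:

```shell
# Sketch: restart the photon-controller-agent on each lab ESXi host in turn.
restart_agents() {
  for host in "$@"; do
    ssh "root@$host" /etc/init.d/photon-controller-agent restart
  done
}

# In the lab:
#   restart_agents 192.168.110.201 192.168.110.202
```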

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks disk-1 cluster-vm-disk boot=true -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then FindRequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number wesaw earlier


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different from the lab manual's port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To The PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware).

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with the Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, then hit Return.

View the Agent Container

To view your running container:

1. Execute: docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter: 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the web server. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
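For reference, the port map you configured in the UI corresponds to Docker's `-p host:container` publish option. A minimal sketch, printed rather than run since it needs the lab's Docker host and local registry (the container name `web-1` is hypothetical):

```shell
# What the Rancher UI's port map boils down to: publish container port 80
# (Nginx's default) on host port 2000, using the lab's local-registry image.
run_cmd="docker run -d --name web-1 -p 2000:80 192.168.120.20:5000/nginx"
echo "$run_cmd"
```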


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx web page.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Ephemeral Disk Flavors

Notice that we have four ephemeral disk flavors in our environment. We haven't done much with them here, but there are two primary use cases for disk flavors. The first is to associate a cost with the storage you are deploying in order to facilitate chargeback or showback. The second use case is storage profiles. Datastores can be tagged based on whatever criteria may be needed (Availability, Performance, Cost, Local, Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in Module 2.
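As a sketch of the storage-profile use case, a disk flavor's cost list could carry a datastore tag alongside a count. The `storage.GOLD` key below is hypothetical (not defined in this lab), and the command is printed rather than run, since it needs a real Photon Platform target:

```shell
# Hypothetical flavor that both counts disks (for chargeback/showback)
# and names a datastore tag (storage.GOLD) as a scheduling constraint.
flavor_cmd='photon flavor create -n gold-eph-disk -k ephemeral-disk -c "ephemeral-disk 1 COUNT, storage.GOLD 1 COUNT"'
echo "$flavor_cmd"
```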


Persistent Disk Flavors

1. Click on Persistent Disks.

We have created a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs; there are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will take a deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR code.

Proceed to any module below that interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks, and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows desktop:

1. Click on the Putty icon.
2. Select the PhotonControllerCLI connection.
3. Click Open.

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. SSH into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10. The password is vmware.

2. You must change to the root user. Execute: su. The password is vmware.
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open-source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
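That `photon <entity-type> list` pattern combines nicely with shell capture. In this sketch, `photon` is stubbed with one line of sample output so the snippet is self-contained; the assumption that the UUID is the first whitespace-separated column may differ from the real CLI's layout:

```shell
# Stub standing in for the real CLI (remove this function in the lab).
photon() { echo "73a6b2c8-1f2e-4d5a-9c3b-0e1f2a3b4c5d  lab-vm1  STOPPED"; }

# Capture the UUID from the first row, then build a follow-up command.
VM_UUID=$(photon vm list | awk 'NR==1 {print $1}')
echo "photon vm show $VM_UUID"
```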


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>. The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached or detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes master and worker node VMs. Notice that the master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores so that storage profiles are part of the disk placement. In this case we have simply added a COUNT, which could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for the VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
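The UUID lookups can be scripted instead of copied by hand. In this sketch, `photon` is stubbed with sample output so the snippet runs standalone; the grep pattern and column positions are assumptions about the real list output:

```shell
# Stub of the CLI for illustration only; in the lab the real `photon`
# binary is used and this function should be removed.
photon() {
  case "$1" in
    network) echo "net-uuid-0001  lab-network  READY" ;;
    image)   echo "img-uuid-0002  PhotonOS  EAGER  READY" ;;
  esac
}

# Capture the UUIDs, then assemble the create command.
NET_UUID=$(photon network list | awk '{print $1}')
IMG_UUID=$(photon image list | awk '/PhotonOS/ {print $1}')
echo "photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NET_UUID -i $IMG_UUID"
```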

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get back to the previous photon vm create command, then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output You can also get itby executing photon vm list


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently but still need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
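For reference, the script roughly amounts to the following steps. This is an assumption based on its effect — the lab's script is authoritative — so it is shown as an echoed dry run rather than commands to execute:

```shell
# Dry run of the likely steps inside mount-disk-lab-vm1.sh (assumed,
# not the lab's actual script).
DEV=/dev/sdb          # the attached persistent disk
MNT=/mnt/dockervolume # where the Docker volume content will live
echo "mkfs.ext4 $DEV"   # format the disk (destroys any existing data!)
echo "mkdir -p $MNT"
echo "mount $DEV $MNT"
```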

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, named /volume and mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
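The same command with the flags spelled out, built as a string here so each piece is visible (in the lab you run the assembled command directly):

```shell
REGISTRY=192.168.120.20:5000   # the lab's local Docker registry
HOSTDIR=/mnt/dockervolume      # host directory backed by the persistent disk
# -v host:container volume map, -d detached, -p host:container port map
CMD="docker run -v $HOSTDIR:/volume -d -p 80:80 $REGISTRY/nginx"
echo "$CMD"
```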


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx!".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
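If you prefer a non-interactive edit over vi, the same change can be made with sed. A sketch — it assumes the stock Nginx title text "Welcome to nginx!" and is demonstrated on a scratch copy; in the lab the file is /mnt/dockervolume/index.html:

```shell
# Make a scratch copy standing in for the real index.html.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html

# Replace the title text in place (GNU sed -i syntax).
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```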

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with photon disk list.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh would reformat the disk and you would not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
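Steps 2-5 above can be compressed into one loop. A sketch, shown with a leading echo so it reads as a dry run (remove the echo to actually run it; you will be prompted for each host's password):

```shell
# Restart the photon controller agent on both lab ESXi hosts.
# The echo makes this a dry run; drop it to execute for real.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```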


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click on Select Metrics and select photon.


2. Click on Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command output. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to help root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the master and worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid of demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one master and 2 worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one master VM and 2 worker VMs. Kubernetes will create pods that are deployed as Docker containers within the worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver, with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
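As a sketch of what a Pod definition like nginx-pod.yaml generally looks like — the lab's actual file will differ in names and labels, so treat this as illustrative, not the lab's content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # hypothetical name; check the lab's file
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # the lab's local registry
    ports:
    - containerPort: 80
```

The Service and Replication Controller files reference Pods through these labels, which is how Kubernetes ties the three pieces together.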


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the external endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:port-number. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware).
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with the Rancher server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute: docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the Pull the latest image box.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
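The UI steps above are roughly equivalent to the docker CLI invocation below (a hedged sketch: the container name is my choice; the registry address and the 2000-to-80 port mapping are the values from the lab text).

```shell
# Rough CLI equivalent of the Rancher UI container creation above.
# -p 2000:80 publishes Nginx's default port 80 on host port 2000;
# the image comes from the lab's local registry.
NGINX_RUN='docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx'
echo "$NGINX_RUN"
```

Rancher issues the same kind of request through the host's Docker socket via the agent you just installed.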


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Persistent Disk Flavors

1. Click on Persistent Disks

We have a single persistent disk flavor for you. It is used in our Kubernetes cluster. You will create another Flavor when you create persistent disks in Module 2.


Conclusion

Cloud-scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to multi-tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure as a Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1.

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QRC Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with Nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants allocate resources (CPU Memorystorage) through the use of Resource Tickets and carve those resources into individualprojects This lesson will also provide you with a basic overview of working with the CLI

2) Set up cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (password is vmware)
2. You must change to the root user. Execute: su (password is vmware)
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: it is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.
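The pattern can be summarized as follows (a sketch of mine; the bracketed placeholders are illustrative, and the examples are commands used later in this module):

```shell
# General anatomy of a photon CLI invocation.
CLI_SHAPE='photon [options] <object> <action> [arguments]'
echo "$CLI_SHAPE"
# A few concrete examples that appear later in this module:
for example in 'photon vm list' 'photon tenant set lab-tenant' 'photon flavor list'; do
    echo "  $example"
done
```
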

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. (You can do this, or refer to the Tenant with CLI command-line switches.) There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
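As a quick arithmetic sanity check of the numbers used above (plain shell; the values are the ones from this lesson), the project's limits must fit inside the tenant's resource ticket:

```shell
# Ticket (tenant-level) limits from this lesson:
TICKET_GB=200 TICKET_VMS=1000
# Project-level limits carved out of the ticket:
PROJECT_GB=100 PROJECT_VMS=500
if [ "$PROJECT_GB" -le "$TICKET_GB" ] && [ "$PROJECT_VMS" -le "$TICKET_VMS" ]; then
    echo "project limits fit inside the resource ticket"
else
    echo "project limits exceed the ticket" >&2
    exit 1
fi
```

A project create request whose limits exceed the remaining ticket capacity would be rejected by the Control Plane.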


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag, and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.
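If you would rather not copy UUIDs by hand, the two lookups can be embedded with command substitution. This is a sketch only - the awk row/column assumptions about the photon CLI's tabular output (first column is the UUID, first data row is your only network) are mine and may not match your CLI version.

```shell
# Shape of a copy/paste-free create command (displayed, not run here;
# it requires the lab's photon CLI and its tabular list output).
VM_CREATE='photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" \
  -w "$(photon network list | awk "NR==2 {print \$1}")" \
  -i "$(photon image list | awk "/PhotonOS/ {print \$1}")"'
echo "$VM_CREATE"
```
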

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach the newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice in the disk information that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will instead be presented to the containers through a Docker volume that is mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>   (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
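For reference, the provided script likely amounts to a format-and-mount sequence along these lines. This is a hypothetical sketch: the device (/dev/sdb) and mount point (/mnt/dockervolume) come from the lab text, but the actual script contents are assumed. The DRY_RUN guard (default on) only prints the commands, since formatting a disk is destructive:

```shell
#!/bin/sh
# Hypothetical sketch of mount-disk-lab-vm1.sh. The device and mount point
# come from the lab text; everything else is assumed. With DRY_RUN=1 (the
# default here) each command is printed instead of executed.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run mkfs -t ext4 -F /dev/sdb          # format the persistent disk (erases any data)
run mkdir -p /mnt/dockervolume        # create the mount point
run mount /dev/sdb /mnt/dockervolume  # mount it; Docker will bind this path later
```

On the lab VM, the real script performs these steps as root; here the dry run just shows the intended sequence.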

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d means to run the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
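The IP:port/image naming can be pulled apart with ordinary shell parameter expansion, which makes the structure explicit:

```shell
# Split the image reference used above into its registry and image parts.
# "IP:port/image" means: pull "image" from the registry at IP:port instead
# of from Docker Hub.
ref="192.168.120.20:5000/nginx"
registry="${ref%%/*}"   # strip from the first "/" onward -> 192.168.120.20:5000
image="${ref#*/}"       # strip through the first "/"     -> nginx
echo "registry=$registry image=$image"
```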


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, which contains "Welcome to nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type "Hands On Lab At VMWORLD 2016".

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
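If you prefer not to use vi, an equivalent non-interactive edit can be done with sed. This is an alternative, not part of the lab script, and is demonstrated here on a sample file rather than the lab's real /mnt/dockervolume/index.html:

```shell
# Non-interactive version of the vi edit above, demonstrated on a sample
# copy of the Nginx welcome line (the lab's real target would be
# /mnt/dockervolume/index.html).
sample=/tmp/index.html
printf '<h1>Welcome to nginx!</h1>\n' > "$sample"
# Replace the heading text, as the cw edit in vi did.
sed -i 's/Welcome to nginx/Welcome to Hands On Lab At VMWORLD 2016/' "$sample"
cat "$sample"
```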

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>   (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /usr/share/nginx/html, that is mounted on /mnt/dockervolume from the host. The -d means to run the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts, and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201   (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Click "Select Metrics" and select photon.


2. Click "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many requestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns <dns-server> --gateway <gateway> --netmask <netmask> --master-ip <Kube-master-IP> --container-network <Kubernetes-container-network> --etcd1 <static-IP> -w <UUID of demo network> -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver, with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
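As a rough idea of their shape, minimal versions of such manifests might look like the following sketch. The names, labels, and image reference here are illustrative; the lab's actual files may differ:

```yaml
# nginx-pod.yaml (sketch): a single pod running the nginx container
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx
    ports:
    - containerPort: 80
---
# nginx-rc.yaml (sketch): keep 3 replicas of the nginx pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo-rc
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
---
# nginx-service.yaml (sketch): load-balance across the pods and expose a port
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort          # yields the "external endpoint" port seen in the UI
  selector:
    app: nginx-demo
  ports:
  - port: 80
```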


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the external endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see that the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses, with the port number shown earlier, to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the port number in the lab manual; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID for the rancher/server container; that is the one we want to remove.

2. Execute docker kill <container ID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Conclusion

Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together, and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere. You have seen that the operational model for administrators is very different from what you might be used to with UI-driven management through vCenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.

In Module 2 you will deep dive into the Infrastructure As A Service components of Photon Platform.

You've finished Module 1

Congratulations on completing Module 1

If you are looking for additional information on Photon Platform:

• Use your smart device to scan the QR Code

Proceed to any module below which interests you most.

• Module 2 - Cloud Admin Operations With Photon Platform (IaaS Deep Dive) (60 minutes) (Advanced)

• Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced)


How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty Icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the PhotonController Management VM. Execute: ssh esxcloud@192.168.120.10 - Password is vmware

2. You must change to the root user. Execute: su - Password is vmware
3. Reboot the VM. Execute: reboot - This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create filename -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show UUID of image (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want to use the PhotonOS image. To get the UUID of the image, execute photon image list.
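Since both UUIDs come from list commands, you can capture them in shell variables rather than pasting them by hand. The helper below is a hypothetical convenience sketch, assuming the UUID is the first whitespace-separated column of each listing.

```shell
# Sketch: look up the network and image UUIDs by name, then create the VM.
# Hypothetical helper; assumes the UUID is the first column of each listing.
create_lab_vm() {
  name=$1
  net_id=$(photon network list | awk '/lab-network/ {print $1}')
  img_id=$(photon image list | awk '/PhotonOS/ {print $1}')
  photon vm create --name "$name" --flavor my-vm \
    --disks "disk-1 my-eph-disk boot=true" -w "$net_id" -i "$img_id"
}
# Usage: create_lab_vm lab-vm1
```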

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB sets the size of the disk to 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers. Photon Platform persistent disks extend that capability across Docker hosts.
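The disk hand-off at the heart of this exercise can be sketched in two CLI calls. The helper below is hypothetical; detach-disk mirrors the attach-disk command you used earlier in this module.

```shell
# Sketch of the disk hand-off described above (hypothetical helper;
# detach-disk mirrors the attach-disk command used earlier in this module).
move_disk() {
  disk_id=$1; from_vm=$2; to_vm=$3
  photon vm detach-disk "$from_vm" --disk "$disk_id"  # free the disk
  photon vm attach-disk "$to_vm" --disk "$disk_id"    # rebind to the new host
  # On the new host: mount the disk, then start a container with -v as before.
}
# Usage: move_disk "uuid of disk-2" "uuid of lab-vm1" "uuid of lab-vm2"
```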


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the vm UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the vm IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
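If you would rather not re-run the command by hand, a small retry loop works too. wait_for is a hypothetical convenience function, not part of the lab.

```shell
# Retry a command until its output matches a pattern (hypothetical helper).
# Example use: wait_for '192\.168\.' photon vm networks "UUID of lab-vm1"
wait_for() {
  pattern=$1; shift
  attempts=0
  while [ "$attempts" -lt 30 ]; do          # up to 30 tries, 10s apart
    out=$("$@" 2>/dev/null)
    if echo "$out" | grep -q "$pattern"; then
      echo "$out"                           # print the matching output
      return 0
    fi
    attempts=$((attempts + 1))
    sleep 10
  done
  return 1
}
```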


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 - password is VMware1


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
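The provided script hides the details; a minimal sketch of the same steps might look like this, assuming the disk shows up as /dev/sdb as noted above and that an ext4 filesystem is acceptable (the filesystem choice is an assumption). It would be run as root on lab-vm1.

```shell
# Minimal sketch of what a mount script like mount-disk-lab-vm1.sh must do:
# put a filesystem on the raw persistent disk and mount it for Docker use.
# (Assumptions: device is /dev/sdb, filesystem type ext4.)
format_and_mount() {
  dev=${1:-/dev/sdb}            # the attached persistent disk (assumption)
  mnt=${2:-/mnt/dockervolume}   # mount point used by the rest of this lab
  mkfs.ext4 -F "$dev"           # create a filesystem on the raw disk
  mkdir -p "$mnt"
  mount "$dev" "$mnt"           # expose it to the Docker host
}
# Usage (as root): format_and_mount /dev/sdb /mnt/dockervolume
```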

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

HOL-1730-USE-2

Page 56HOL-1730-USE-2

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container (/volume) that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
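The IP:port/image naming can be taken apart with ordinary shell parameter expansion, which is handy when scripting against a local registry. A small sketch (the variable names are ours):

```shell
# Split a registry-qualified image reference into registry and image name.
# Sketch: assumes the simple <host:port>/<name> form used in this lab.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # everything before the first slash
name="${image#*/}"        # everything after it
echo "registry=$registry image=$name"
```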

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, Execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume), Execute:


df

3 We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute:

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14 with Welcome To Nginx

3 Press the right arrow until you are at the character N in Nginx

4 Press cw to change the word, and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : key

6 At the prompt, enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
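If you would rather not drive vi interactively, the same one-word change can be scripted. A sketch: the retitle_page helper is hypothetical, and it assumes the stock page still contains the phrase "Welcome to nginx".

```shell
# Replace the page title non-interactively instead of editing in vi.
# retitle_page is a hypothetical helper; the "Welcome to nginx" phrase
# is assumed to match the stock Nginx index.html.
retitle_page() {
  page="$1"; new_title="$2"
  sed -i "s/Welcome to nginx/$new_title/" "$page"
}
# Usage: retitle_page /mnt/dockervolume/index.html "Hands On Lab At VMWORLD 2016"
```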

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of lab-vm1, Execute:

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with photon disk list.
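Because both UUIDs come from list commands, the detach can be scripted end to end. A sketch, assuming the UUIDs appear in the usual 8-4-4-4-12 hex form in the listing output:

```shell
# Grab the first UUID printed by a photon list command so the
# detach-disk call can be scripted rather than copy/pasted.
first_uuid() {
  "$@" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | head -n 1
}
# Usage:
#   vm_uuid=$(first_uuid photon vm list)
#   disk_uuid=$(first_uuid photon disk list)
#   photon vm detach-disk "$vm_uuid" --disk "$disk_uuid"
```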

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@IP of lab-vm2 (password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1 To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the content it serves, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2 - 4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
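The lab's actual yaml files are on the CLI VM; for orientation, a minimal Pod manifest in the same spirit might look like the sketch below. The object name and labels are our assumptions; only the registry-tagged image name comes from earlier in the lab.

```shell
# Write a minimal illustrative Pod manifest (not the lab's actual file).
cat > /tmp/nginx-pod-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # hypothetical name
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # the lab's local-registry image
    ports:
    - containerPort: 80
EOF
```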


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml
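After the three creates, standard read-only kubectl subcommands will confirm what the cluster now knows about. A sketch (the wrapper function is ours):

```shell
# Read-only checks after deployment: list the Pods, Services, and
# Replication Controllers that the three creates should have produced.
show_nginx_objects() {
  kubectl get pods
  kubectl get services
  kubectl get replicationcontrollers
}
# Usage: show_nginx_objects
```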


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 nodes. 3 Replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number wesaw earlier


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.
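A quick scripted version of the same check (curl is standard; the up/down wrapper and timeout value are ours):

```shell
# Print "up" if the node answers HTTP on the given port, else "down".
check_endpoint() {
  host="$1"; port="$2"
  if curl -fsS --max-time 5 "http://$host:$port/" > /dev/null; then
    echo up
  else
    echo down
  fi
}
# Usage: check_endpoint 192.168.100.176 "$port_number"
```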


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2 Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1 Execute ssh root@192.168.100.201 (the password is vmware)
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1 Note that the Custom icon is selected
2 Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1 Execute: either right click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button
2 Click on Infrastructure and Hosts
3 This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4 We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5 Click on the Create button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1 Once your container is running, check out the performance charts
2 Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1 From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



How to End Lab

To end your lab, click on the END button.


Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application with nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and troubleshoot applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware)
2. Change to the root user. Execute: su (the password is vmware)
3. Reboot the VM. Execute: reboot (this should take about 2 minutes to complete)
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
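Because nearly every command takes a UUID, it can help to script the lookup. The sketch below is illustrative only: the sample listing (and its column layout) is invented, so adjust the awk pattern to whatever your photon list output actually looks like.

```shell
# Invented sample of `photon network list` output -- the real layout
# may differ, so treat the parsing below as a sketch, not gospel.
sample='ID                                    Name         State
9b8a1a2b-3c4d-5e6f-7a8b-9c0d1e2f3a4b  lab-network  READY'

# Print the first column (the UUID) of the row naming our object.
uuid=$(printf '%s\n' "$sample" | awk '/lab-network/ {print $1}')
echo "$uuid"
```

The same pattern works for any entity type: swap in the appropriate list command and the name you match on.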


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.
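The whole lesson can be reviewed as one sequence. The sketch below collects the commands exactly as used above, but wraps them in a `run` stub that prints instead of executes, so nothing is modified; the quoting of the --limits arguments is an assumption about the CLI's parsing.

```shell
# Dry-run stub: print each command rather than executing it.
run() { printf '+ %s\n' "$*"; }

# Tenant -> Resource Ticket -> Project, as performed in this lesson.
run photon tenant create lab-tenant
run photon tenant set lab-tenant
run photon resource-ticket create --name lab-ticket \
    --limits "vm.memory 200 GB, vm 1000 COUNT"
run photon project create --resource-ticket lab-ticket \
    --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
run photon project set lab-project
```

Dropping the `run` prefix turns the review into the real setup.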


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes master and worker node VMs. Notice that the master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores so that storage profiles are part of the disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, change the 1 to a 2, and finally hit Return to execute.
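If you prefer scripting to arrow-key editing, the pair of creates can also come out of a loop. This sketch only echoes the command lines (the UUID placeholders are not real values):

```shell
# Echo (rather than execute) the create command for both lab VMs.
# <network-uuid> and <image-uuid> stand in for the real UUIDs.
for n in 1 2; do
  echo "photon vm create --name lab-vm$n --flavor my-vm" \
       "--disks 'disk-1 my-eph-disk boot=true'" \
       "-w <network-uuid> -i <image-uuid>"
done
```

Replacing echo with the bare command (and the placeholders with real UUIDs) would create both VMs in one go.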

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM and, when that VM is destroyed, attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
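The "keep trying" advice can also be scripted as a polling loop. This is a sketch: the has_ip helper just greps for a dotted-quad anywhere in the output, and the sample lines below are invented for the demo.

```shell
# Succeeds once its stdin contains something shaped like an IPv4 address.
has_ip() { grep -Eq '([0-9]{1,3}\.){3}[0-9]{1,3}'; }

# In the lab you would poll, for example:
#   until photon vm networks <UUID of lab-vm1> | has_ip; do sleep 10; done

# Demo against canned output (invented lines):
printf 'VM Network  00:0c:29:aa:bb:cc\n' | has_ip || echo "no IP yet"
printf 'VM Network  00:0c:29:aa:bb:cc  192.168.120.50\n' | has_ip && echo "IP ready"
```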


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification:

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, which contains "Welcome to nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type: the Hands On Lab At VMWORLD 2016

5. Press the Esc key, and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.
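If you would rather not use vi, the same one-word change can be sketched with sed. The sample line below stands in for the real index.html content:

```shell
# Stand-in for the welcome line of /mnt/dockervolume/index.html.
tmp=$(mktemp)
printf '<h1>Welcome to nginx!</h1>\n' > "$tmp"

# Swap the word after "Welcome to", as the cw edit above does by hand.
sed 's/nginx!/the Hands On Lab At VMWORLD 2016!/' "$tmp"
rm -f "$tmp"
```

On the real file you would add sed's in-place flag instead of printing to the terminal.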


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.
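Taken together, moving the disk from lab-vm1 to lab-vm2 is three commands. The sketch below prints them via a run stub instead of executing them, with placeholders where the UUIDs from the list commands would go:

```shell
# Dry-run stub: print each command rather than executing it.
run() { printf '+ %s\n' "$*"; }

# Move the persistent disk from lab-vm1 to lab-vm2, then power on.
run photon vm detach-disk '<UUID of lab-vm1>' --disk '<UUID of disk-2>'
run photon vm attach-disk '<UUID of lab-vm2>' --disk '<UUID of disk-2>'
run photon vm start '<UUID of lab-vm2>'
```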

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks "UUID of lab-vm2"

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2>    (password is VMware1)
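The wait for the IP noted above can also be scripted. A sketch of a polling helper, assuming the networks listing prints a dotted-quad IPv4 address once one is assigned (the helper is hypothetical, not part of the lab):

```shell
# Poll a command until its output contains an IPv4 address, then print it.
wait_for_ip() {
  while :; do
    ip=$("$@" | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n 1)
    [ -n "$ip" ] && { echo "$ip"; return 0; }
    sleep 5
  done
}

# Real use: wait_for_ip photon vm networks "UUID of lab-vm2"
# Demo with a mock listing that already has an address assigned:
mock_networks() { echo "NetworkName  00:0c:29:aa:bb:cc  192.168.100.204  Up"; }
wait_for_ip mock_networks   # prints 192.168.100.204
```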

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem, Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container, Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop "UUID of lab-vm2"

3 Execute

photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4 Execute

photon vm delete "UUID of lab-vm2"

5 Repeat steps 2 and 4 for lab-vm1

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201    (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1 Click on Dashboards

2 Click on Home

3 Click on New

Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon

2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To Loginsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a requestID. There could be many requestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1 From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Lets take a look at the VMs that make up our cluster Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute

cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
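The actual manifests live in ~/demo-nginx. If you just want a feel for their shape without opening them, a Replication Controller and Service for an app like this might look like the following sketch (the names, labels, and image path here are illustrative assumptions, not the lab's real files):

```yaml
# Illustrative only - the lab's real definitions are in ~/demo-nginx/
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # maintain 3 copies of the Pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # local registry image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort             # exposes an external port on each node
  selector:
    app: nginx-demo
  ports:
  - port: 80
```

The selector labels are what tie the Service and Replication Controller to the Pods they manage.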

Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link

2 Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2 Execute docker kill "ContainerID". This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
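Fishing the right container ID out of docker ps output can also be scripted. A small sketch, assuming the default docker ps column order (container ID first, image second); the helper is hypothetical, not part of the lab:

```shell
# Print container IDs whose image column matches a pattern.
find_cid() {
  printf '%s\n' "$1" | awk -v pat="$2" '$2 ~ pat { print $1 }'
}

# Demo with mocked output; in the lab: ps_out="$(docker ps)"
ps_out="abc123  rancher/server:latest      Up 2 hours
def456  192.168.120.20:5000/nginx  Up 5 minutes"

find_cid "$ps_out" rancher   # prints abc123
```

You could then run docker kill against the printed ID; verify the real column layout on your system first.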

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201. The password is vmware.

2 Execute rm -rf /var/lib/rancher/state

3 Execute docker rm -vf rancher-agent

4 Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1 Note that the Custom icon is selected

2 Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-v or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1 Execute: Either right click the mouse or press Ctrl-v, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button

2 Click on Infrastructure and Hosts

3 This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5 Click on the Create button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1 Once your container is running Check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1 From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 31: Lab Overview - HOL-1730-USE-2

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)


Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with Nginx to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

Photon Platform CLI is available for Mac, Linux and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows Desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We sometimes see race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (password is vmware)
2. You must change to the root user. Execute: su (password is vmware)
3. Reboot the VM. Execute: reboot (this should take about 2 minutes to complete)
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in this module. Context-sensitive help is available by appending -h or --help to any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.
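The verb-noun pattern can be sketched as a quick help tour. This is a shell sketch that only prints the help invocations for the object types used in this module; run any of the echoed lines yourself to explore:

```shell
# The pattern is always: photon <object> <action> [arguments].
# Print the context-help commands for the object types this module touches.
help_cmds=$(for obj in tenant project flavor image network vm disk; do
  echo "photon $obj -h"
done)
echo "$help_cmds"
```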

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:

photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.
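The containment rule above can be sketched with this lab's numbers. This is a minimal shell check; the variable names are ours for illustration, not part of the photon CLI:

```shell
# A project's limits must fit inside its tenant's Resource Ticket limits.
TICKET_MEM_GB=200;  TICKET_VM_COUNT=1000   # from the lab-ticket create
PROJECT_MEM_GB=100; PROJECT_VM_COUNT=500   # from the lab-project create

if [ "$PROJECT_MEM_GB" -le "$TICKET_MEM_GB" ] && \
   [ "$PROJECT_VM_COUNT" -le "$TICKET_VM_COUNT" ]; then
  fits=yes
else
  fits=no
fi
echo "$fits"
```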

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>  (The UUID of the image is in the photon image list command results.)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.
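As a sketch, a whitelist with more than one portgroup would be passed as a comma-separated -p value. The second portgroup name below is hypothetical (this lab has only "VM Network"), and the block only echoes the command rather than running photon:

```shell
# Compose (but do not run) a network create with a two-portgroup whitelist.
PORTGROUPS="VM Network, Prod Network"   # "Prod Network" is a made-up name
net_cmd="photon network create -n multi-net -p \"$PORTGROUPS\" -d \"Two portgroups\""
echo "$net_cmd"
```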


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
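If you want to avoid copying UUIDs by hand, they can be pulled out of the list commands with awk. The sketch below runs against a made-up sample of photon vm list output; the real column layout may differ slightly:

```shell
# Hypothetical `photon vm list` tabular output (sample only).
sample_vm_list='ID                                    Name     State
9b159e92-9495-49a4-af58-53ad4764f616  lab-vm1  STOPPED
1c2d3e4f-0000-1111-2222-333344445555  lab-vm2  STOPPED'

# Grab the UUID (column 1) of the row whose Name (column 2) matches.
VM_UUID=$(echo "$sample_vm_list" | awk '$2 == "lab-vm1" {print $1}')
echo "$VM_UUID"
```

The same pattern works for photon network list and photon image list when filling in the -w and -i arguments.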

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM and, when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice in the disk information that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.
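The disk-move flow described above can be condensed into an outline of the photon commands involved. This sketch only prints the outline; the UUID placeholders get filled in during the steps that follow:

```shell
# Outline of the persistent-disk move between Docker hosts (text only).
flow=$(cat <<'EOF'
photon vm stop <UUID of lab-vm1>
photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>
photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>
photon vm start <UUID of lab-vm2>
EOF
)
echo "$flow"
```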


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>  (password is VMware1!)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
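The anatomy of that docker run can be made explicit by assembling it from named parts. This is a sketch that only echoes the resulting string (nothing is run against Docker); the registry IP:port is this lab's local registry:

```shell
# Build the docker run invocation from its pieces.
HOST_DIR=/mnt/dockervolume         # where the persistent disk is mounted
CONTAINER_DIR=/volume              # mount point inside the container
PORT_MAP=80:80                     # host:container port mapping
IMAGE=192.168.120.20:5000/nginx    # IP:port/image -> image in local registry

run_cmd="docker run -v ${HOST_DIR}:${CONTAINER_DIR} -d -p ${PORT_MAP} ${IMAGE}"
echo "$run_cmd"
```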


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
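A sketch of deriving the short ID from docker ps output. The sample row below is made up; your container ID will differ:

```shell
# Hypothetical `docker ps` output trimmed to two columns (sample only).
sample_ps='CONTAINER ID   IMAGE
f3a9c1d24b07   192.168.120.20:5000/nginx'

# Take the first three characters of the ID in the data row.
short_id=$(echo "$sample_ps" | awk 'NR==2 {print substr($1,1,3)}')
echo "docker exec -it $short_id bash"
```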

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx"

3. Press the right arrow until you are at the character N in Nginx

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5. Press the esc key and then the : key

6. At the prompt, enter wq to save changes and exit vi

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>

Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>  (password is VMware1!)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the content it serves, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version, we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip ahead to View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, restart only that host's agent.

1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump ahead to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
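If both agents need restarting, the ssh steps above can be wrapped in a short loop. A sketch (the SSH variable is an illustrative override so the loop can be previewed; in the lab you would just let it default to the real ssh client and answer the password prompts with VMware1):

```shell
# Restart the photon-controller-agent on each ESXi host in turn.
# SSH defaults to the real client; override (SSH=echo) to preview the commands.
SSH="${SSH:-ssh}"

restart_photon_agents() {
  for host in "$@"; do
    $SSH root@"$host" /etc/init.d/photon-controller-agent restart
  done
}

# In the lab: restart_photon_agents 192.168.110.201 192.168.110.202
```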

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select 'select metrics' and select photon.

2. Select 'select metrics' again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so the create will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point, so we have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
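After the three create calls, you can confirm that Kubernetes accepted each object with standard kubectl queries. A sketch (the KUBECTL override is only for previewing the commands outside the lab):

```shell
# Query each kind of object we just created. KUBECTL defaults to the real
# client; override (KUBECTL=echo) to preview the commands.
KUBECTL="${KUBECTL:-kubectl}"

verify_nginx_objects() {
  $KUBECTL get pods        # the nginx pod(s) and their status
  $KUBECTL get rc          # the replication controller: desired vs. current replicas
  $KUBECTL get services    # the frontend service and its exposed port
}
```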

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
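The desired replica count is just a field on the Replication Controller, so it can also be changed from the CLI. A hypothetical example, assuming the controller is named nginx-demo as shown in the UI; check kubectl get rc for the actual name:

```shell
KUBECTL="${KUBECTL:-kubectl}"   # override with KUBECTL=echo to preview

# Ask the replication controller to maintain a new number of replicas;
# Kubernetes starts or stops pods to converge on the desired count.
scale_nginx() {
  $KUBECTL scale rc nginx-demo --replicas="$1"
}
```

For example, scale_nginx 2 drops the count to two pods; scaling back up works the same way.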

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the port number in the lab manual; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <Container ID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 (the password is vmware).
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
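The removal steps above can be grouped into one helper to run on the Rancher host. A sketch (the DOCKER override is only there so the function can be previewed without a real Docker daemon):

```shell
DOCKER="${DOCKER:-docker}"   # override with DOCKER=echo to preview

# Remove stale Rancher agent state so the new agent can register cleanly.
clean_rancher_host() {
  rm -rf /var/lib/rancher/state        # old registration state on disk
  $DOCKER rm -vf rancher-agent         # the agent container itself
  $DOCKER rm -vf rancher-agent-state   # its companion state container
}
```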

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
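The same check can be made from a shell with curl instead of a browser. A sketch using the host IP and mapped port from the steps above (the CURL override is only for previewing the request):

```shell
CURL="${CURL:-curl}"   # override with CURL=echo to preview the request

# Fetch the Nginx welcome page through the Rancher-mapped host port.
check_webserver() {
  $CURL -s "http://$1:$2/"
}

# In the lab: check_webserver 192.168.100.201 2000
```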

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 32: Lab Overview - HOL-1730-USE-2

Introduction

This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources and create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and the management UI in performing these tasks. Finally, you will build an application (nginx) to display a web page, with port mapping to show some basic networking capabilities. Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1) Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

3) Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts.

4) Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies troubleshooting and monitoring of applications across distributed infrastructure.


Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

The Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute: ssh esxcloud@192.168.120.10 (the password is vmware).
2. You must change to the root user. Execute: su (the password is vmware).
3. Reboot the VM. Execute: reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax: the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list, we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case, you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
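Because almost every later command takes a UUID, it can help to script the lookup. The sketch below is a hypothetical helper, not part of the lab: it assumes the UUID is the first whitespace-separated column of the list output and the name is the second, which you should verify against your CLI version. It runs here against a canned sample rather than a live photon endpoint, and the IDs are made up.

```shell
# get_uuid: given "photon <entity-type> list"-style tabular text and an
# entity name, print the UUID column for the matching row. The column
# layout is an assumption -- check your photon CLI's actual output first.
get_uuid() {
  printf '%s\n' "$1" | awk -v name="$2" '$2 == name { print $1 }'
}

# Canned sample shaped like a listing (IDs invented for illustration)
sample="0579a3dc-bd2e-4e1b-b96c-a1b2c3d4e5f6  lab-vm1  STOPPED
7f2b11aa-90cd-4c2f-8d3e-f6e5d4c3b2a1  lab-vm2  STOPPED"

get_uuid "$sample" "lab-vm1"
```

On the live system you would pipe the output of photon vm list (or any other entity-type list) into the same awk filter instead of using a canned string.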


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
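The quota arithmetic behind this tenant/project split can be sanity-checked with plain shell arithmetic. The numbers mirror the lab (200 GB / 1000 VMs on the ticket, 100 GB / 500 VMs on the project); the "headroom" notion is just illustration, not a Photon API concept.

```shell
# Resource ticket limits (the tenant-level pool)
ticket_mem_gb=200
ticket_vms=1000

# Project limits carved from the ticket
project_mem_gb=100
project_vms=500

# A project may not exceed its ticket; whatever is left over is headroom
# for additional projects under the same ticket.
if [ "$project_mem_gb" -le "$ticket_mem_gb" ] && [ "$project_vms" -le "$ticket_vms" ]; then
  headroom_mem_gb=$((ticket_mem_gb - project_mem_gb))
  headroom_vms=$((ticket_vms - project_vms))
  echo "project fits; headroom: ${headroom_mem_gb} GB, ${headroom_vms} VMs"
else
  echo "project over-allocates its ticket"
fi
```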


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator; Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
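The tradeoff above can be put in back-of-envelope terms. The image size and datastore count below are invented for illustration; only the EAGER-copies-everywhere vs. ON_DEMAND-single-copy behavior comes from the text.

```shell
# EAGER keeps a full copy on every CLOUD-tagged datastore;
# ON_DEMAND keeps a single copy where the scheduler placed the VM.
image_gb=16          # hypothetical base image size
cloud_datastores=4   # hypothetical number of CLOUD-tagged datastores

eager_gb=$((image_gb * cloud_datastores))
on_demand_gb=$image_gb

echo "EAGER uses ${eager_gb} GB across the cloud; ON_DEMAND uses ${on_demand_gb} GB"
```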

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the Disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.
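The idea behind this exercise can be previewed locally: two consumers of the same backing directory see each other's writes. In the sketch below a temp directory stands in for the persistent disk; in the lab, the sharing happens across real hosts via photon vm attach-disk and detach-disk.

```shell
# A temp directory plays the role of the persistent disk.
disk=$(mktemp -d)

# "Host 1" writes the web content through its mount.
echo "Hands On Lab At VMWORLD 2016" > "$disk/index.html"

# "Host 2" later mounts the same disk and reads the modified page.
content=$(cat "$disk/index.html")
echo "$content"

rm -rf "$disk"
```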


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>

The password is VMware1.


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to mount /mnt/dockervolume from the host as a Docker volume at /volume in the container. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry, and have tagged the image with the IP address and port of the registry.
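When scripting around docker run, the HOST:CONTAINER port mapping can be picked apart with shell parameter expansion. The 8080:80 value below is only an example to make the two halves distinguishable; the lab itself maps 80 to 80.

```shell
# Split a -p style HOST:CONTAINER mapping into its two halves.
mapping="8080:80"
host_port=${mapping%%:*}       # strip the longest suffix starting at ':'
container_port=${mapping##*:}  # strip the longest prefix ending at ':'

echo "host port $host_port -> container port $container_port"
```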


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key, and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>

The password is VMware1.


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to mount /mnt/dockervolume from the host as a Docker volume in the container. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps, and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts, and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201

The password is VMware1.

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab, and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1 From Your browser select the LogInsight Bookmark from the toolbar and loginas User admin password VMware1

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics
2. Paste the Task ID into the Filter field
3. Change the Time Range to Last Hour of Data
4. Click the Search icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE
2. Find the RequestID and paste it into the Filter field

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubemasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
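The clone/build/kube-up flow just described can be sketched as follows. The repo URL and provider name are assumptions based on the upstream Kubernetes project of this era, not values taken from the lab, and this sketch cannot be run inside the lab environment:

```shell
# Sketch only - assumes the photon-controller provider scripts shipped in
# upstream Kubernetes at the time of this lab.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

# Edit cluster/photon-controller/config-default.sh for your environment, then
# tell kube-up to use the Photon Controller deployment scripts:
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

The environment variable is the key piece: kube-up.sh dispatches to per-provider scripts, so pointing it at the photon-controller provider is what makes the generic build deploy onto Photon Platform.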

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
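For orientation, a minimal replication-controller manifest of the kind nginx-rc.yaml typically contains looks like the following. All names, labels, and values here are illustrative assumptions, not the contents of the lab's actual file:

```shell
# Write an example manifest to a temp file so we can inspect its shape;
# the nginx-demo name and app label are assumed for illustration only.
cat <<'EOF' > /tmp/nginx-rc-example.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
cat /tmp/nginx-rc-example.yaml
```

The spec.replicas field is what drives the "3 replicated copies" behavior you will see later: the controller continuously reconciles the number of running pods matching the selector back to that count.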


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
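Before moving to the UI, you can confirm the three objects are up from the same CLI VM; a quick sketch (requires the lab's kubectl context, so it is not runnable outside the lab):

```shell
kubectl get pods       # the replicated nginx pods, one line each
kubectl get rc         # the replication controller with desired/current counts
kubectl get services   # the frontend service and its exposed port
```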


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
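To see the failure handling mentioned in the module introduction, you can delete one of the pods and watch the Replication Controller replace it; a sketch (the pod name suffix is generated, so substitute the name you see in kubectl get pods):

```shell
kubectl get pods                      # note the full name of one nginx-demo pod
kubectl delete pod nginx-demo-abc12   # hypothetical pod name - use your own
kubectl get pods                      # a replacement pod appears, restoring 3 replicas
```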

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.
2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.
3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
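For reference, the replayed history command launches Rancher Server with a docker run along these lines. The image tag and flags are assumptions reconstructed from the registry note above and common Rancher practice; use the exact command from your history rather than this sketch:

```shell
# Hypothetical reconstruction of the Rancher Server start command:
# detached, auto-restarting, UI exposed on port 8080, image pulled from
# the lab's local registry.
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```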


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the Pull the latest image box.

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
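What the Rancher UI does here is roughly equivalent to a single docker run on the host; a sketch of the same container definition (container name and exact flags are assumptions based on the settings chosen above):

```shell
# Detached nginx container from the lab's local registry, with host port 2000
# mapped to container port 80 - mirroring the Portmap fields in the UI.
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```

Seeing the CLI equivalent makes the UI fields less magical: the Portmap entry is just -p host:container, and the image field is the same registry path you would pass to docker run.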


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version 20161024-114606



Multi-Tenancy and Resource Management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, memory, storage) through the use of Resource Tickets, and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI.

Login To CLI VM

Photon Platform CLI is available for Mac, Linux, and Windows. For this lab, the CLI is installed in a Linux VM.

From the Windows desktop:

1. Click on the Putty icon
2. Select the PhotonControllerCLI connection
3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.


Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1. Execute the following command:

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. SSH into the PhotonController Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.
2. You must change to the root user. Execute su. The password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt. Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM. So let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the Open Source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this TenantOther resources are unlimited because we have not specified a Limit

3 Also note the Entity UUID printed after the command completes You will useUUIDs to manipulate objects in the system and they can always be found by usingphoton

HOL-1730-USE-2

Page 38HOL-1730-USE-2

entity-type list commands Entity-type can be one of many types like vmimage resource-ticket cluster flavor etc


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will create objects within the Project.
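The tenant, resource-ticket, and project steps above can be captured in one small script. This is a sketch, not part of the lab: the run wrapper only prints each command (a dry run), so it is safe to execute without a Photon Controller endpoint; remove the echo to run the commands for real.

```shell
# Sketch: the tenant / resource-ticket / project setup from this section
# as one script. `run` only prints each command (dry run); drop the echo
# to execute against a real Photon Controller.
run() { echo "+ $*"; }

run photon tenant create lab-tenant
run photon tenant set lab-tenant
run photon resource-ticket create --name lab-ticket \
  --limits "vm.memory 200 GB, vm 1000 COUNT"
run photon project create --resource-ticket lab-ticket \
  --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
run photon project set lab-project
```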


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those Flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management-plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk Flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM Flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm Flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk Flavor you created earlier. We didn't do much with that Flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case, we want the PhotonOS image. To get the UUID of the image, execute photon image list.
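Since the create command takes UUIDs rather than names, it helps to pull the UUID out of the tabular list output with awk. A sketch: the sample text below stands in for real output (the ID and column layout are invented for illustration); pipe real photon network list or photon image list output through the same filter.

```shell
# Extract the ID column for a named object from photon's tabular list
# output. The variable below is a stand-in sample (invented ID/columns).
sample='ID                                    Name         State
0d0f0500-1f41-4b7e-9f44-1b9f51b5f2a1  lab-network  READY'

uuid=$(printf '%s\n' "$sample" | awk '$2 == "lab-network" { print $1 }')
echo "$uuid"
```

In practice that would look like NET_UUID=$(photon network list | awk '$2 == "lab-network" { print $1 }'), with "$NET_UUID" then passed to the -w option of photon vm create.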

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk Flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "<UUID of lab-vm1>" --disk "<UUID of disk>"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log in to vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>

The password is VMware1.


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "<first 3 chars of containerID>" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the Down Arrow until you get to line 14, with Welcome To Nginx.

3. Press the Right Arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
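If you prefer a non-interactive edit, the vi steps above can be replaced by a single sed command. This sketch runs against a scratch copy so it is safe to try anywhere; on lab-vm1 the target would be /mnt/dockervolume/index.html, and the heading text should match whatever your Nginx page actually contains.

```shell
# Non-interactive equivalent of the vi edit: swap the heading with sed.
# A scratch file stands in for /mnt/dockervolume/index.html here.
printf '<h1>Welcome To Nginx</h1>\n' > /tmp/index.html
sed -i 's/Welcome To Nginx/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```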

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "<UUID of lab-vm2>" --disk "<UUID of disk>"
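The detach you performed earlier plus the attach above amount to a disk move that can be scripted. A sketch with placeholder UUIDs (substitute values from photon vm list and photon disk list); the run wrapper only prints each command, so the sketch is safe to execute as-is.

```shell
# Move the persistent disk from lab-vm1 to lab-vm2 (sequence from this
# lab). Placeholder UUIDs; `run` echoes instead of executing (dry run).
run() { echo "+ $*"; }

VM1="<uuid-of-lab-vm1>"
VM2="<uuid-of-lab-vm2>"
DISK="<uuid-of-disk-2>"

run photon vm detach-disk "$VM1" --disk "$DISK"
run photon vm attach-disk "$VM2" --disk "$DISK"
```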

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>

The password is VMware1.


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control-plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201

The password is VMware1.

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
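The restart steps above can be collapsed into one loop over both lab hosts. Again a dry-run sketch: the run wrapper prints rather than executes; drop the echo to perform the real ssh calls.

```shell
# Restart the photon-controller-agent on both ESXi hosts from this lab.
# `run` echoes the command instead of executing it (dry run).
run() { echo "+ $*"; }

for host in 192.168.110.201 192.168.110.202; do
  run ssh "root@$host" /etc/init.d/photon-controller-agent restart
done
```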

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm Flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see that the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster. We have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
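For orientation while you read those files, a replication controller definition generally has the shape sketched below. This is an illustrative sketch only; the names, labels, image, and replica count here are assumptions, not the contents of the lab's actual nginx-rc.yaml:

```yaml
# Illustrative sketch of a replication controller definition (not the lab's file)
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # desired number of Pod copies
  selector:
    app: nginx-demo          # Pods this controller manages
  template:                  # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx         # the lab would pull from its local registry instead
        ports:
        - containerPort: 80
```

The pod and service files follow the same pattern: the service definition declares a selector matching the Pod labels, plus the port to expose.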


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, the username is admin and the password is 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. (Scaling additional instances is done through kubectl scale; for example, kubectl scale rc nginx-demo --replicas=5, assuming nginx-demo is the Replication Controller name.) Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number wesaw earlier


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2. Specify the Docker image that you will run. This image is in a local registry, so the name is in the form IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000 (in plain Docker terms, roughly the equivalent of a docker run -p 2000:80 port mapping). Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Verify Photon CLI Target

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API endpoint for the Control Plane you want to use.

1 Execute the following command

photon target show

It should point to the endpoint referenced in the image. If it does not, then execute:

photon target set http://192.168.120.10:9000

Note: If you are seeing strange HTTP 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.


Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. SSH into the PhotonController Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware.

2. You must change to the root user. Execute su. The password is vmware.
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete.
4. Now return to the previous step that caused the HTTP 500 error and try it again.


Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command.

1 Execute

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1 Execute


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units, or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual Projects within the Tenant.

Let's start by creating a new Tenant for our module.

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1 Execute the following command

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenantand can later be consumed through the placement of workloads in the infrastructure

1 Execute the following command

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2 To view your Resource Tickets Execute the following command

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.

Create Project

Tenants can have many Projects. In our case we are going to create a single Project within the lab-tenant Tenant. This Project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the Project can only use 100 GB and create 500 VMs.

1 To create the Project Execute the following command

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2 To view your Projects Execute the following command

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocatedresources

3 To Set the CLI to the Project Execute the following command

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those Flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1 To see the images already uploaded execute the following command

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM) but also define the characteristics ofthe storage that will be used for ephemeral (Boot) disks and persistent storage volumes

You will specify the vm and disk flavors as part of the VM or Disk creation command

1 To view existing Flavors Execute the following command

photon flavor list

In our environment we have created specific VM Flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1 Execute

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2 Execute

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3 Execute

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1 If you needed to create a network you would issue the following commandphoton network create -n lab-network -p ldquoVM Networkrdquo -d ldquoMy cloud Networkrdquo

The -p option is a list of the portgroups that you want to be used for VM placement Itsessentially a whitelist of networks available to the scheduler when evaluating where toplace a VM The -d option is just a description of your network


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
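If you would rather not copy the UUID by hand, you can capture it into a shell variable. This is a hypothetical convenience sketch, not part of the lab: the column layout of photon vm list shown in the sample string is an assumption, so check the real output on your CLI VM before relying on it.

```shell
# Hypothetical sketch: extract a VM's UUID from `photon vm list`-style
# output with awk. The sample line below stands in for the real command's
# output; its column layout (UUID first) is an assumption.
sample_output='f6d9d76c-2e42-4a01-9629-24f2a8a213dc  lab-vm1  STOPPED'
vm_id=$(printf '%s\n' "$sample_output" | awk '/lab-vm1/ {print $1}')
echo "$vm_id"
# The captured UUID could then be used directly, e.g.:
#   photon vm start "$vm_id"
```

On the real system you would pipe `photon vm list` into the same awk filter instead of the sample string.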


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you can see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently, but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB sets the capacity of the disk to 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type "Hands On Lab At VMWORLD 2016".

5. Press the esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.
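If you would rather not edit interactively, a sed one-liner can make a similar change. This is an optional sketch, not a lab step: the exact phrase in index.html is an assumption (grep the file first), and the sample file below stands in for /mnt/dockervolume/index.html.

```shell
# Hedged alternative to the vi steps: replace the welcome heading
# non-interactively with sed. The phrase "Welcome to nginx!" is an
# assumption about the image's default index.html -- verify it first.
echo '<h1>Welcome to nginx!</h1>' > /tmp/index.html   # stand-in for /mnt/dockervolume/index.html
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```

On the real VM you would run the sed command against /mnt/dockervolume/index.html instead of the stand-in file.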


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000.

Extra credit: From the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2 through 4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser, and you may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.


2. Select "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to get more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case, the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional, Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab; if you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command-line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
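As a rough illustration only (the lab's actual files may differ in names, labels and image), a Replication Controller definition that maintains three Nginx replicas could look something like this:

```yaml
# Hypothetical sketch of an nginx Replication Controller -- the real
# nginx-rc.yaml in ~/demo-nginx may use different names, labels and image.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3          # desired number of Pod copies
  selector:
    app: nginx-demo    # Pods matching this label are managed by the RC
  template:            # Pod template used to create replacement Pods
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The key idea is the control loop: if a Pod matching the selector dies, the Replication Controller creates a new one from the template until the replica count is met again.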


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This stops the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
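The three steps above can be collected into one small script. A sketch: the authoritative run command is history entry !885 and is not reproduced in the manual, so it stays as a comment; the pure-shell helper that pulls the Container ID out of `docker ps` output is shown working on captured text.

```shell
# Helper: first Container ID (column 1) of the row matching an image name.
container_id() {
  grep "$1" | awk 'NR == 1 {print $1}'
}

# Against the live Docker daemon you would run:
#   CID=$(docker ps | container_id rancher/server)
#   docker kill "$CID"
#   !885    # recreate the Rancher Server container
# Demo of the helper on captured `docker ps` text:
printf 'CONTAINER ID IMAGE STATUS\nf3a9b1c2d4e5 192.168.120.20:5000/rancher/server Up\n' \
  | container_id rancher/server
```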


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
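The same cleanup can be made repeatable as a script, using the host IP and paths from the steps above. A sketch: `cleanup_cmds` only prints the command list, so you can inspect it before piping it over ssh to the Rancher Host.

```shell
# Print the cleanup commands (steps 2-4 above) for inspection or remote run.
cleanup_cmds() {
  cat <<'EOF'
rm -rf /var/lib/rancher/state
docker rm -vf rancher-agent
docker rm -vf rancher-agent-state
EOF
}

cleanup_cmds
# Real run (you will be prompted for the password, vmware):
#   cleanup_cmds | ssh root@192.168.100.201 sh
```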


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Either right-click the mouse or press Ctrl-V, and hit Return

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5. Click on the Create button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
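For reference, the container the Add Container form describes is roughly equivalent to the following `docker run` - a sketch of the effective configuration, not what Rancher literally executes. The small helper just assembles the host:container port-mapping flag.

```shell
# Assemble the host:container port-mapping flag configured in step 4.
port_flag() {
  printf -- '-p %s:%s' "$1" "$2"
}

# Roughly what the form describes (illustrative only):
echo "docker run -d $(port_flag 2000 80) 192.168.120.20:5000/nginx"
# Once the container is up you could check it with:
#   curl http://192.168.100.201:2000/
```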


Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Execute This Step Only If You Had photon HTTP Errors In The Previous Step

1. ssh into the Photon Controller Management VM. Execute ssh esxcloud@192.168.120.10. The password is vmware

2. You must change to the root user. Execute su. The password is vmware
3. Reboot the VM. Execute reboot. This should take about 2 minutes to complete
4. Now return to the previous step that caused the HTTP 500 error and try it again


Photon CLI Overview

The Photon CLI has a straightforward syntax: it is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.), and then a list of arguments. We will be using this CLI extensively in this module. Context sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return at the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and that can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, Execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
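Since later commands take UUIDs rather than names, a small helper that fishes the UUID out of any `photon <entity-type> list` output can save copy/paste. A sketch; it assumes the UUID appears in its standard 36-character form on the row for the named entity (column layout varies by CLI version), and it is demonstrated on captured text.

```shell
# Extract the first standard-form UUID on the row matching an entity name.
uuid_for() {
  grep "$1" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | head -1
}

# Live usage:  TICKET=$(photon resource-ticket list | uuid_for lab-ticket)
# Demo on captured output:
printf 'ID NAME\n2d720e00-3f13-4f5a-8dc5-44ee2dca0b31 lab-ticket\n' | uuid_for lab-ticket
```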


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, Execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, Execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, Execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> -name PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, Execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create 1 of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
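The lookup-then-create flow described above can also be scripted end to end. In this sketch the two UUID functions are stubs returning made-up values; in the lab you would replace their bodies with the real `photon network list` / `photon image list` lookups mentioned in the note.

```shell
# Stubs standing in for the real lookups (replace the bodies in the lab):
#   photon network list  ->  UUID of lab-network
#   photon image list    ->  UUID of the PhotonOS image
network_uuid() { echo '11111111-2222-3333-4444-555555555555'; }
image_uuid()   { echo '66666666-7777-8888-9999-aaaaaaaaaaaa'; }

# Assemble the full create command from the looked-up UUIDs.
create_cmd() {
  printf 'photon vm create --name %s --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w %s -i %s\n' \
    "$1" "$(network_uuid)" "$(image_uuid)"
}

create_cmd lab-vm1
# In the lab you would then run the printed command (or `eval` it).
```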

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to be powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To Stop the VM, Execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To Create a persistent disk, Execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, Execute:

photon vm list

2. To find the Disk UUID, Execute:

photon disk list

3. To attach the disk to the VM, Execute:

photon vm attach-disk "<uuid of lab-vm1>" --disk "<uuid of disk>"


Show VM Details

Now we will see the attached Disk using the VM Show command again.

1. To Show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, Execute:

photon vm list

2. To start lab-vm1, Execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, Execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
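The "keep trying" advice can be automated with a small polling loop. A sketch; `has_ip` simply checks whether the command output contains a dotted-quad IPv4 address yet, and is demonstrated on sample text.

```shell
# Succeeds when stdin contains a dotted-quad IPv4 address.
has_ip() {
  grep -qE '([0-9]{1,3}\.){3}[0-9]{1,3}'
}

# Live loop (VMUUID comes from `photon vm list`):
#   until photon vm networks "$VMUUID" | has_ip; do sleep 10; done
# Demo of the predicate:
echo 'network "VM Network" 192.168.100.210 up' | has_ip && echo 'IP found'
```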


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, Execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Lets look at this command docker run creates a container The -v says to create aDocker volume in the container that is mounted on mntdockervolume from the hostThe -d means to keep the container running until it is explicitly stopped The -p maps

container port 80 to port 80 on the host So you will be able to access the Nginx WebServer on port 80 from your browser Lastly nginx is the Docker image to use forcontainer creation Notice that the image is specified as IPportimage This is becausewe are using a local Docker registry and have tagged the image with the ip address andport of the registry


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome to nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw (change word) and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
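If you prefer a non-interactive edit, the same change can be made with sed instead of vi. The snippet below is a sketch that rehearses the substitution on a throwaway copy; on the lab VM the real target would be /mnt/dockervolume/index.html, and the sample heading line is an assumption about the file's content.

```shell
# Rehearse the edit on a scratch file (the real file lives at /mnt/dockervolume/index.html)
f=$(mktemp)
echo "<h1>Welcome to nginx!</h1>" > "$f"               # stand-in for line 14 of index.html
sed -i 's/nginx!/Hands On Lab At VMWORLD 2016/' "$f"   # same replacement the vi steps make
cat "$f"                                               # shows the modified heading
rm -f "$f"
```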

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
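Rather than copying UUIDs by hand, you can capture them in a shell variable. The snippet below is a sketch that uses a canned line in place of real photon vm list output; the column layout (UUID in the first column) is an assumption, so adjust the awk field if your CLI version prints differently.

```shell
# Canned stand-in for one row of `photon vm list` output (assumed format: UUID first)
sample_row="7a2b9c1e-0000-0000-0000-000000000001  lab-vm1  STOPPED"
vm_uuid=$(echo "$sample_row" | awk '{print $1}')   # grab the first column
echo "$vm_uuid"
# The variable could then be used as: photon vm start "$vm_uuid"
```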

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>    (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached, so it keeps running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
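The clean-up steps above can be sketched as a small loop. In this sketch, photon is shadowed by a stand-in function that only echoes what it would run, so it is safe to execute outside the lab; the UUID values are placeholders, and the disk is detached only from lab-vm2, where it is attached.

```shell
# Stand-in for the real CLI so the sketch can run anywhere (it only echoes)
photon() { echo "photon $*"; }

disk_uuid="DISK-UUID-PLACEHOLDER"
for vm_uuid in "LAB-VM2-UUID" "LAB-VM1-UUID"; do
  photon vm stop "$vm_uuid"
  # The persistent disk is attached only to lab-vm2, so detach just for that VM
  [ "$vm_uuid" = "LAB-VM2-UUID" ] && photon vm detach-disk "$vm_uuid" --disk "$disk_uuid"
  photon vm delete "$vm_uuid"
done
```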


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version, we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the Photon Controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the Photon Controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201    (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case, we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case, the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances, the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns <dns-Server> --gateway <Gateway> --netmask <Netmask> --master-ip <KubeMasterIP> --container-network <KubernetesContainerNetwork> --etcd1 <StaticIP> -w <UUID of demo network> -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents one of the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
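For orientation, a minimal Pod definition of this shape might look like the sketch below. This is an illustrative example, not the lab's actual file; the image name, labels, and port here are assumptions, so use the cat commands above to see the real definitions.

```yaml
# Hypothetical sketch of a pod definition like nginx-pod.yaml (not the lab's file)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo          # a Service or Replication Controller selects Pods by label
spec:
  containers:
    - name: nginx
      image: 192.168.120.20:5000/nginx   # the lab's local-registry image
      ports:
        - containerPort: 80
```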


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Photon CLI Overview

The Photon CLI has a straightforward syntax. It is the keyword photon, followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context-sensitive help is available by appending -h or --help onto any command.

1. Execute:

photon -h

Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

Photon CLI Context Help

From that list we might want to take action on a VM, so let's see the command arguments for VMs.

1. Execute:


photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

Create Tenant

Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

1. Execute the following command:

photon tenant create lab-tenant

Hit Return on the Security Group prompt. Photon Platform can be deployed using external authentication; in that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier.


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command-line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant, and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.
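Since nearly every later step needs a UUID from one of these list commands, it can help to capture it in a shell variable. A minimal sketch, using an invented sample table (the real column layout may differ by CLI version, so adjust the awk fields to match your output):

```shell
# Sketch only: the table below is invented sample output, not captured
# from the lab. In the lab you would pipe the real "photon vm list"
# output instead of this variable.
sample='ID                                    Name     State
b1c2d3e4-0000-1111-2222-333344445555  lab-vm1  STOPPED'

# First field of the first data row is the UUID.
uuid=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')
echo "$uuid"
```

Capturing the UUID this way avoids copy/paste errors in the attach, start and delete steps later in the module.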


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set, and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.
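As a toy illustration of the hierarchy just described (not a lab step): a project can never be granted more than its resource ticket holds. A tiny shell check, using the numbers from this module:

```shell
# Toy sanity check, not part of the lab: project limits must fit within
# the resource ticket's limits. Values match the lab's ticket and project.
ticket_mem_gb=200;  ticket_vms=1000   # lab-ticket limits
project_mem_gb=100; project_vms=500   # lab-project limits

fits=no
if [ "$project_mem_gb" -le "$ticket_mem_gb" ] && [ "$project_vms" -le "$ticket_vms" ]; then
  fits=yes
fi
echo "project fits within ticket: $fits"
```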


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create filename --name PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs, and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show UUID of image

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it is the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional, and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
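Putting the pieces together, the whole create step can be scripted once the two UUIDs are captured. This is a dry-run sketch that only echoes the assembled command rather than executing it; the placeholder UUID values are invented for illustration.

```shell
# Dry run only: echo the command instead of executing it.
# Placeholder UUIDs; in the lab they come from "photon network list"
# and "photon image list".
network_uuid="11111111-aaaa-bbbb-cccc-222222222222"
image_uuid="33333333-dddd-eeee-ffff-444444444444"

cmd="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $network_uuid -i $image_uuid"
echo "$cmd"
```

Echoing first lets you verify the quoting of the --disks argument before running the real command.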

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created, but not powered on. We want to power on the first VM only. The second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need to have ephemeral VMs that may be created/destroyed frequently, but that need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk flavor to define placement constraints; and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1!)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry, and have tagged the image with the IP address and port of the registry.
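As an aside, the registry-qualified image reference can be pulled apart with plain shell parameter expansion. A small sketch of the IP:port/image convention the lab uses:

```shell
# Split a "registry:port/image" reference into its parts.
ref="192.168.120.20:5000/nginx"
registry="${ref%%/*}"   # everything before the first "/": the registry host:port
image="${ref#*/}"       # everything after the first "/": the image name
echo "registry=$registry image=$image"
```

This is why the image name in docker ps output looks different from the plain nginx image you may be used to from Docker Hub.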


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
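For reference, the same title change can be made non-interactively with sed. This sketch applies the edit to a sample line rather than the lab's real /mnt/dockervolume/index.html, so it does not touch the lab file:

```shell
# Non-interactive equivalent of the vi steps above, applied to a sample
# line instead of the real index.html.
line='<h1>Welcome To Nginx</h1>'
result=$(printf '%s\n' "$line" | sed 's/Nginx/Hands On Lab At VMWORLD 2016/')
echo "$result"
```

A scripted edit like this is handy if you want to repeat the lab without stepping through vi each time.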

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


A reminder that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1!)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the content it serves, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 (stop and delete) for lab-vm1.
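The cleanup steps above can also be captured as a small script. This dry-run sketch only prints the commands it would run; the placeholder UUIDs are illustrative and would come from photon vm list and photon disk list.

```shell
# Dry run: print the cleanup commands instead of executing them.
# Placeholder UUIDs for illustration only.
vm1="UUID-OF-LAB-VM1"; vm2="UUID-OF-LAB-VM2"; disk="UUID-OF-DISK"

plan="photon vm stop $vm2
photon vm detach-disk $vm2 --disk $disk
photon vm delete $vm2
photon vm stop $vm1
photon vm delete $vm1"
printf '%s\n' "$plan"
```

Note that only lab-vm2 needs the detach step, since the persistent disk is attached to it at this point in the lab.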


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts, and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
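If you need to restart the agent on both hosts, steps 2-4 above can be collapsed into one loop. This is a sketch only, not a lab step; it assumes SSH key access to the hosts (otherwise each ssh prompts for the VMware1 password), and it is wrapped in a function so nothing runs until you call it.

```shell
# Sketch: restart the photon-controller-agent on both lab ESXi hosts.
# Assumes SSH key access; otherwise each ssh prompts for the password
# (VMware1, per the lab). Host IPs are the two hosts from the lab steps.
restart_agents() {
  for host in 192.168.110.201 192.168.110.202; do
    ssh root@"$host" '/etc/init.d/photon-controller-agent restart'
  done
}

# restart_agents   # run from the PhotonControllerCLI VM when needed
```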


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.


2. Select "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
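The clone/build/kube-up flow described above looks roughly like the sketch below. Treat it as an outline under stated assumptions: the repo URL, the build target, and the KUBERNETES_PROVIDER value are inferred from the lab's ~/kubernetes/cluster/photon-controller directory rather than confirmed by the manual, and nothing runs until you call the function.

```shell
# Outline of the kube-up deployment path described above (not run in this lab).
# The env var value and build step are assumptions matching the lab's
# cluster/photon-controller directory; adjust for your Kubernetes version.
deploy_kube_up() {
  git clone https://github.com/kubernetes/kubernetes.git
  cd kubernetes || return 1
  make quick-release                               # build step; may vary
  KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh
}
```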

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
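If you want a sense of what these files contain before opening them, a minimal Pod spec has the shape sketched below. This is written for illustration only; the names, labels, and image are assumptions, and the lab's actual nginx-pod.yaml will differ in its details.

```shell
# Illustrative only: write a minimal nginx Pod spec in the same general
# shape as the lab's nginx-pod.yaml. Names and image here are assumptions,
# not the contents of the lab's file.
cat <<'EOF' > /tmp/nginx-pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo      # the Service and Replication Controller select on labels
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
```

The service and replication controller files follow the same yaml structure, with `kind: Service` and `kind: ReplicationController` respectively.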


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
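Because the Replication Controller owns the desired replica count, scaling is a one-line change from the CLI VM. A sketch, assuming the controller is named nginx-demo as the UI suggests; verify the actual name with `kubectl get rc` before running it.

```shell
# Sketch: change the desired replica count on the Replication Controller.
# The RC name "nginx-demo" is an assumption from the UI; check "kubectl get rc".
scale_nginx() {
  kubectl scale rc nginx-demo --replicas="$1"
  kubectl get pods        # watch new pods appear (or terminate)
}

# scale_nginx 4
```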

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers will be deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
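For reference, the container definition you just filled in through the Rancher UI corresponds roughly to a plain docker run on the host. This is a sketch of that equivalence only; Rancher adds its own labels and managed networking on top, and the function below is defined, not executed.

```shell
# Roughly what the Rancher "Add Container" form asks the host to do:
# run the local-registry image and map host port 2000 to container port 80.
# Illustrative only; Rancher layers labels and its managed network on top.
run_nginx_like_rancher() {
  docker run -d --name nginx-demo \
    -p 2000:80 \
    192.168.120.20:5000/nginx
}
```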


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


Page 37: Lab Overview - HOL-1730-USE-2

photon vm -h

As we go through the module use the help command to see details of the actualcommands you are executing

Create Tenant

Photon Platform implements a hierarchical tenant model Tenants represent asegmentation between companies business units or teams Cloud resources areallocated to Tenants using a set of Resource Tickets Allocated resources can be furthercarved up into individual projects within the Tenant

Lets start by creating a new Tenant for our module

1 Execute the following command

photon tenant create lab-tenant

Hit Return on the Security Group Prompt Photon Platform can be deployed usingexternal authentication In that case you would specify the Admin Group for this TenantWe have deployed with no authentication to make the lab a little easier


Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> -n PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to in your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want to use the PhotonOS image. To get the UUID of the image, execute photon image list.
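
If you prefer, the two UUID lookups and the create can be scripted by holding the values in shell variables. This is a sketch, not part of the lab's script: the UUID values below are illustrative placeholders that you would paste in from your own photon network list and photon image list output.

```shell
# Paste in the UUIDs reported by "photon network list" and
# "photon image list" (the values here are placeholders):
NETWORK_UUID="paste-uuid-from-photon-network-list"
IMAGE_UUID="paste-uuid-from-photon-image-list"

# Same create as above, with the long UUIDs held in variables:
photon vm create --name lab-vm1 --flavor my-vm \
    --disks "disk-1 my-eph-disk boot=true" \
    -w "$NETWORK_UUID" -i "$IMAGE_UUID"
```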

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>

The password is VMware1.


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
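
For reference, a script like this typically performs roughly the following steps. This is a sketch of the idea, not the lab's actual script: it assumes the persistent disk shows up as /dev/sdb and uses an ext4 filesystem, which may differ from what the lab script does.

```shell
# Sketch of a typical format-and-mount sequence (assumptions: the
# persistent disk is /dev/sdb; ext4 is the chosen filesystem).
mkfs -t ext4 /dev/sdb             # format the raw disk (destroys existing data)
mkdir -p /mnt/dockervolume        # create the mount point
mount /dev/sdb /mnt/dockervolume  # mount the filesystem
df -h /mnt/dockervolume           # confirm the mount
```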

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
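
A couple of quick, optional checks can be run from the lab-vm1 shell to confirm the container is up. These are standard Docker and curl commands; the exact output will vary with your environment.

```shell
# The nginx container should be listed with a STATUS of "Up ...":
docker ps

# Fetch the home page locally from the Docker host itself:
curl -s http://localhost/ | head -n 5
```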


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key, and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
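
As a non-interactive alternative to the vi steps above (run before exiting the ssh session), the same change can be made with sed. The block below is a sketch run against a throwaway sample file; on lab-vm1 you would point it at /mnt/dockervolume/index.html instead. The heading text "Welcome to nginx!" is the stock nginx page title and may differ slightly in your image.

```shell
# Demonstrated against a sample file; on lab-vm1, set
# INDEX=/mnt/dockervolume/index.html instead.
INDEX=$(mktemp)
echo '<h1>Welcome to nginx!</h1>' > "$INDEX"

# Replace the heading in place, as in steps 2-6 above:
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' "$INDEX"

# Show the modified line:
grep 'Hands On Lab' "$INDEX"
```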

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>

The password is VMware1.


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps, and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201

The password is VMware1.

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab, and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics, and select photon.


2. Select Select Metrics again, and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar, and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case, the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.
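
As a sketch, the same invocation can be assembled from variables in a small wrapper script. All of the values below (IPs, container network, network UUID) are placeholder assumptions, and the command is echoed rather than executed since the lab has no capacity for another cluster:

```shell
# Sketch: build the cluster-create invocation from variables.
# All values are hypothetical placeholders, not the lab's real settings.
DNS="10.0.0.2"; GATEWAY="10.0.0.1"; NETMASK="255.255.255.0"
MASTER_IP="10.0.0.10"; ETCD_IP="10.0.0.11"
CONTAINER_NET="10.2.0.0/16"          # private network used by Flannel
NETWORK_UUID="demo-network-uuid"     # from: photon network list
WORKERS=5

CMD="photon cluster create -n Kube5 -k KUBERNETES"
CMD="$CMD --dns $DNS --gateway $GATEWAY --netmask $NETMASK"
CMD="$CMD --master-ip $MASTER_IP --container-network $CONTAINER_NET"
CMD="$CMD --etcd1 $ETCD_IP -w $NETWORK_UUID -s $WORKERS"

echo "$CMD"   # dry run: print the command instead of executing it
```

Printing the command first is a convenient way to review the flags before running it for real.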

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
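
The steps above can be sketched as a short plan script. The provider name photon-controller and the cluster/kube-up.sh path follow the standard kube-up workflow of that era but are assumptions not verified against this lab; the script only writes the plan to a file for review rather than executing it:

```shell
# Sketch of the kube-up workflow described above (a dry-run plan only;
# the provider name and script path are assumptions).
PLAN=/tmp/kube-up-plan.txt
cat > "$PLAN" <<'EOF'
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh
EOF
cat "$PLAN"   # review the planned commands before running them
```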

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
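
To make these concepts concrete, here is a rough sketch of what minimal manifests could look like: a Replication Controller (whose template is the Pod definition) and a Service. The names, labels, and image below are invented for illustration; the lab's actual files are viewed in the next steps:

```shell
# Sketch only: minimal Kubernetes v1 manifests illustrating a Replication
# Controller (containing the Pod template) and a Service. Names, labels,
# and image are hypothetical, not the lab's real files.
mkdir -p /tmp/demo-nginx
cat > /tmp/demo-nginx/nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo
  template:                   # the Pod template the RC stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
cat > /tmp/demo-nginx/nginx-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort              # exposes an external port on each Node
  selector:
    app: nginx-demo           # load-balances across matching Pods
  ports:
  - port: 80
EOF
grep -c 'kind:' /tmp/demo-nginx/*.yaml
```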

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
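
The "kill an instance and watch it come back" behavior comes from the Replication Controller's reconciliation loop, which can be caricatured in a few lines of shell. This is a conceptual sketch only, not how Kubernetes is actually implemented:

```shell
# Conceptual sketch of desired-state reconciliation: if running copies
# fall below the desired count, start replacements. Not real Kubernetes
# code; a toy model of what the Replication Controller does for you.
desired=3
running=3

kill_one()  { running=$((running - 1)); }   # simulate a Pod dying
reconcile() {
  while [ "$running" -lt "$desired" ]; do
    running=$((running + 1))                # controller starts a replacement
  done
}

kill_one          # an instance fails...
reconcile         # ...and the controller restores the desired count
echo "running=$running"
```

The key idea is that you declare the desired state (3 replicas) and the controller continuously drives the actual state toward it.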

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
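
An image reference like this encodes the registry endpoint in its name. The pieces can be pulled apart with plain shell parameter expansion (a sketch; the image name is just the lab's example):

```shell
# Split a registry-qualified image reference into registry host:port and
# image name using POSIX parameter expansion.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # everything before the first "/"
name="${image#*/}"        # everything after it
echo "registry=$registry name=$name"
```

This is why pulling the image works without any extra configuration: Docker sees the host:port prefix and contacts that registry instead of Docker Hub.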


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
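
What the UI configures here corresponds to Docker's -p host:container port mapping. A sketch of the roughly equivalent command line is below; it is shown as a dry run, since the exact flags Rancher adds behind the scenes are not visible in this lab:

```shell
# Sketch: the UI's port mapping expressed as a docker run flag (dry run).
HOST_PORT=2000              # port you will hit from the browser
CONTAINER_PORT=80           # Nginx default listen port
IMAGE=192.168.120.20:5000/nginx
CMD="docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} $IMAGE"
echo "$CMD"
```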


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this, or refer to the Tenant with CLI command line switches. There is an option to enable authentication using Lightwave, the open-source identity management platform from VMware. We have not done that in this lab.

1. Execute the following command:

photon tenant set lab-tenant

Create Resource Ticket

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

1. Execute the following command:

photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2. To view your Resource Tickets, execute the following command:

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a limit.

3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system, and they can always be found by using photon <entity-type> list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.
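
The constraint here is simply that a project's limits must fit inside its ticket's limits. A quick shell sketch of that check, using the lab's numbers (the logic is an illustration, not the platform's actual validation code):

```shell
# Sketch: verify project limits fit within the tenant's resource ticket.
# Numbers come from the lab; the check itself is illustrative only.
ticket_mem_gb=200; ticket_vms=1000     # lab-ticket limits
project_mem_gb=100; project_vms=500    # lab-project limits

if [ "$project_mem_gb" -le "$ticket_mem_gb" ] && \
   [ "$project_vms" -le "$ticket_vms" ]; then
  echo "project fits within ticket"
else
  echo "project exceeds ticket" >&2
fi
```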

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those Flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image> (the UUID of the image is in the photon image list command results)


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm Flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk Flavor you created earlier. We didn't do much with that Flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
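
These lookups can be scripted. A hedged sketch follows: it assumes the UUID is the first whitespace-separated column of the list output (which may differ by CLI version), stubs out the photon commands with canned sample output, and prints the assembled command rather than executing it:

```shell
# Sketch: look up UUIDs and assemble the vm create command (dry run).
# Assumption: UUID is the first column of photon's list output.
# The functions below are stand-ins with invented sample output.
list_networks() { echo "net-uuid-123 lab-network READY"; }   # stand-in for: photon network list
list_images()   { echo "img-uuid-456 PhotonOS READY"; }      # stand-in for: photon image list

NET_UUID=$(list_networks | awk '/lab-network/ {print $1}')
IMG_UUID=$(list_images   | awk '/PhotonOS/    {print $1}')

CMD="photon vm create --name lab-vm1 --flavor my-vm --disks 'disk-1 my-eph-disk boot=true' -w $NET_UUID -i $IMG_UUID"
echo "$CMD"
```

Replacing the stand-in functions with the real photon commands would turn this into a one-shot create script.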

Create a Second VM

This VM will be used later in the lab, but it is very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command, then hit the left arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log in to vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script that executes these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit
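The short container ID used in step 1 can also be captured non-interactively instead of being read off the screen. This is a sketch: the docker call is commented out because it needs the lab host, and the mock output (and its column layout) is an assumption for illustration only.

```shell
# CID=$(docker ps | awk '/nginx/ {print substr($1, 1, 3)}')

# Mock output standing in for `docker ps` (layout assumed):
MOCK_PS="f3a9c81d2b44  192.168.120.20:5000/nginx  running"
CID=$(printf '%s\n' "$MOCK_PS" | awk '/nginx/ {print substr($1, 1, 3)}')
echo "$CID"
# docker exec -it "$CID" bash   # attach to the container as in step 1
```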

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
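The vi edit above can also be scripted rather than done interactively. This sketch runs against a stand-in file under /tmp; in the lab the target would be /mnt/dockervolume/index.html, and the original heading text is assumed to match exactly.

```shell
# Stand-in for the Nginx home page (real lab path: /mnt/dockervolume/index.html):
printf '<h1>Welcome To Nginx</h1>\n' > /tmp/index.html

# Replace the heading text in place, as the vi cw edit does:
sed -i 's/Welcome To Nginx/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```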

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script that executes these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh would reformat the disk, and you would not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be served as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation; it resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser, and you may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
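Steps 2-5 above can be expressed as one loop over both hosts. This is a sketch: the ssh line is commented out because it prompts interactively for the lab password, so only the loop structure actually runs here.

```shell
for host in 192.168.110.201 192.168.110.202; do
  echo "restarting photon-controller-agent on $host"
  # ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```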


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab; if you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
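The three create steps above can also be driven by one loop over the file names. This is a sketch: the kubectl line is commented out because it needs the lab's cluster and files, so only the loop itself runs here.

```shell
for f in nginx-pod.yaml nginx-service.yaml nginx-rc.yaml; do
  echo "deploying ~/demo-nginx/$f"
  # kubectl create -f ~/demo-nginx/"$f"
done
```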


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
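You could also watch the Replication Controller heal itself, as described in the module introduction. This is a sketch: the kubectl commands are comments, and the pod name shown is hypothetical — take a real one from `kubectl get pods`.

```shell
# kubectl get pods                     # note one pod name
# kubectl delete pod nginx-demo-abc12  # kill an instance (name is hypothetical)
# kubectl get pods                     # a replacement pod appears shortly
DESIRED_REPLICAS=3
echo "the replication controller restores the pod count to $DESIRED_REPLICAS"
```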

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Log In To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab, we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the web server. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
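The port mapping you just configured determines the URL you will use later to reach the web server; a small sketch using this lab's values (the Rancher host IP is 192.168.100.201, from the earlier ssh step):

```shell
# Container port 80 is published on host port 2000, so the service
# is reached at http://<host-ip>:<host-port>, not on port 80.
host_ip="192.168.100.201"   # the Rancher host VM used in this lab
container_port=80           # Nginx's listening port inside the container
host_port=2000              # the host port we mapped in the Rancher UI
url="http://${host_ip}:${host_port}"
echo "$url"
```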


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



entity-type list commands. Entity-type can be one of many types, like vm, image, resource-ticket, cluster, flavor, etc.


Create Project

Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Next we will move on to create objects within the Project.
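The project's limits must fit inside the tenant's resource ticket; a quick sanity check of the numbers used above (200 GB / 1000 VMs for the tenant, 100 GB / 500 VMs for the project) can be sketched in shell arithmetic:

```shell
# Tenant resource-ticket limits vs. the project's allocation.
tenant_mem_gb=200;  tenant_vm_count=1000
project_mem_gb=100; project_vm_count=500

# A project allocation is valid only if every limit is <= the ticket's.
ok=true
[ "$project_mem_gb" -le "$tenant_mem_gb" ]     || ok=false
[ "$project_vm_count" -le "$tenant_vm_count" ] || ok=false
echo "project fits in tenant ticket: $ok"
```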


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is photon image create filename -name PhotonOS.

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the


expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.
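The storage trade-off between the two replication types is simple multiplication; a sketch with assumed values (the 2 GB image size and 4 CLOUD datastores are hypothetical, not lab values):

```shell
# EAGER copies the image to every CLOUD-tagged datastore up front;
# ON_DEMAND keeps one copy until a placement pulls it elsewhere.
image_gb=2            # assumed image size
cloud_datastores=4    # assumed number of CLOUD-tagged datastores
eager_gb=$(( image_gb * cloud_datastores ))
on_demand_gb=$image_gb
echo "EAGER uses ${eager_gb} GB up front; ON_DEMAND starts at ${on_demand_gb} GB"
```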

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment, we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case, we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.
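As a sketch of how the flavor's COUNT could feed such a chargeback process (the number of disks and the billing idea are hypothetical; the 10 COUNT comes from the my-pers-disk flavor above):

```shell
# Each disk created from the my-pers-disk flavor consumes 10 COUNT units;
# a chargeback process could bill per unit consumed.
flavor_count=10     # COUNT cost per disk, from the flavor definition above
disks_created=3     # assumed number of disks created in a project
units_consumed=$(( flavor_count * disks_created ))
echo "units to charge: $units_consumed"
```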

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use - in this case, the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created, but not powered on. We want to power on the first VM only. The second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB sets the capacity of the disk to 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1!)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
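The flags described above compose the full command string; a small sketch that assembles it from its pieces makes each flag's role explicit:

```shell
# Assemble the docker run command from its parts:
#   -v host_dir:container_dir  (bind the host path into the container)
#   -d                         (run detached, in the background)
#   -p host_port:container_port (publish the container port on the host)
volume_map="/mnt/dockervolume:/volume"
port_map="80:80"
image="192.168.120.20:5000/nginx"
cmd="docker run -v $volume_map -d -p $port_map $image"
echo "$cmd"
```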


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above. It is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


A reminder that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1!)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, first execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1 (its disk is already detached).
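The stop/detach/delete sequence above can be sketched as one helper function. This is illustrative only, assuming the photon CLI is on your PATH; pass an empty string for the disk UUID when the VM has no attached disk.

```shell
# Sketch: clean up one VM by UUID, optionally detaching a disk first.
cleanup_vm() {
  local vm_uuid="$1" disk_uuid="$2"
  photon vm stop "$vm_uuid"
  # only detach when a disk UUID was supplied
  if [ -n "$disk_uuid" ]; then
    photon vm detach-disk "$vm_uuid" --disk "$disk_uuid"
  fi
  photon vm delete "$vm_uuid"
}

# Example usage:
#   cleanup_vm <UUID of lab-vm2> <UUID of disk>
#   cleanup_vm <UUID of lab-vm1> ""
```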


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201   (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2 through 4 for host 192.168.110.202.
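If you find yourself repeating the restart on both hosts, the steps above can be looped in one helper. A sketch, assuming root SSH access to the host IPs shown in the lab:

```shell
# Restart the photon-controller-agent service on each host given.
restart_agents() {
  local h
  for h in "$@"; do
    ssh "root@$h" /etc/init.d/photon-controller-agent restart
  done
}

# Example usage:
#   restart_agents 192.168.110.201 192.168.110.202
```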

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "<dns-server>" --gateway "<gateway>" --netmask "<netmask>" --master-ip "<kube-master-ip>" --container-network "<kubernetes-container-network>" --etcd1 "<static-ip>" -w "<uuid of demo network>" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
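The lab's actual files live in ~/demo-nginx, so read them there for the real configuration. As a rough illustration of the file format, a replication controller definition for an nginx Pod generally looks like this (the name, labels, and replica count below are illustrative, not copied from the lab's files):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # desired number of Pod copies
  selector:
    app: nginx-demo      # Pods this controller manages
  template:              # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # the lab's local registry
        ports:
        - containerPort: 80
```

The selector and the template labels must match; that is how the controller counts the Pods it owns.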


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
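The three create steps above can be wrapped in one small helper. A sketch, assuming kubectl is configured against the lab cluster and the files are in ~/demo-nginx:

```shell
# Create the Pod, Service, and Replication Controller in one pass.
deploy_demo_nginx() {
  local f
  for f in nginx-pod nginx-service nginx-rc; do
    kubectl create -f ~/demo-nginx/"$f".yaml
  done
}
```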


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.
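You can also check the endpoint from the CLI instead of a browser. A sketch using curl, where the IP is the one from the lab text and the port is the external port you noted in the Kubernetes UI:

```shell
# Print the HTTP status code returned by the application endpoint.
check_app() {
  # $1 = host IP, $2 = port
  curl -s -o /dev/null -w '%{http_code}\n' "http://$1:$2/"
}

# Example usage:
#   check_app 192.168.100.176 <port number>
```

A 200 means the webserver answered; 000 means the connection could not be made at all.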


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Paste the command (right-click or Ctrl-V) and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 40: Lab Overview - HOL-1730-USE-2

Create Project

Tenants can have many Projects. In our case we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200 GB and 1000 VMs, but the project can only use 100 GB and create 500 VMs.

1. To create the Project, execute the following command:

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2. To view your Projects, execute the following command:

photon project list

Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3. To set the CLI to the Project, execute the following command:

photon project set lab-project

Now we have a Tenant with resources allocated to it, and a Project that can consume those resources. Now we will move on to create objects within the Project.


Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those Flavors, as well as how to use them to create VMs and Persistent Disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or a VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload, or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is: photon image create <filename> --name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show <UUID of image>

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk, and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create operation, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the vm and disk flavors as part of the VM or Disk creation command

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM Flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create 1 of each type of Flavor to be used in this module

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-
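Putting the three Flavor definitions together: this is a dry-run sketch in which photon is replaced by a hypothetical echo stub, so the commands are printed rather than executed (remove the stub line to run them against a real Photon Controller).

```shell
# Hypothetical dry-run stub: prints each photon command instead of running it.
photon() { echo "photon $*"; }

# The -c cost string pairs a dotted resource key with a quantity and unit.
photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"
photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"
photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"
```

The stub pattern is handy for validating a sequence of CLI calls before pointing them at a live control plane.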

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It is essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it is the name of the VM. --flavor says to use the my-vm Flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk Flavor you created earlier. We didn't do much with that Flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
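The lookup-then-create flow can also be scripted with shell variables. This is a minimal dry-run sketch: the photon function below is a hypothetical echo stub standing in for the real CLI, and the UUID values are placeholders for what you would capture from photon network list and photon image list.

```shell
# Hypothetical dry-run stub: prints the command instead of executing it.
photon() { echo "photon $*"; }

# Placeholders; in the lab these come from 'photon network list' / 'photon image list'.
NETWORK_UUID="<network-uuid>"
IMAGE_UUID="<image-uuid>"

photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" \
  -w "$NETWORK_UUID" -i "$IMAGE_UUID"
```

Removing the stub function leaves the real command unchanged, with the UUIDs substituted in one place.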

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.
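If you prefer a scriptable version of that edit, the second VM name can be derived from the first with a small sed substitution; a sketch with no lab dependencies:

```shell
# Derive the second VM name from the first by swapping the trailing 1 for a 2.
NAME="lab-vm1"
NEXT=$(printf '%s' "$NAME" | sed 's/1$/2/')
echo "$NEXT"   # prints lab-vm2
```

The same pattern works for any numbered series of VM names.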

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to be powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk; --flavor says to use the my-pers-disk Flavor to define placement constraints; and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the Disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>  (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the Down Arrow until you get to line 14, with "Welcome To Nginx".

3. Press the Right Arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
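The same edit can be made non-interactively with sed. A small sketch: the heading string "Welcome to nginx!" is the stock nginx home-page text and may differ slightly from your file; in the lab the target file is /mnt/dockervolume/index.html, while a sample file is used here.

```shell
# Create a sample home page, swap the heading with sed, and show the result.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html
```

This is handy when you want to script the content change rather than edit by hand.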


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>  (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
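The whole cleanup sequence can be sketched as a short script. This is a dry run: photon is stubbed with a hypothetical echo function so the commands are printed rather than executed, and the UUIDs are placeholders for the values returned by photon vm list and photon disk list.

```shell
# Hypothetical dry-run stub: prints each photon command instead of running it.
photon() { echo "photon $*"; }

# lab-vm2: stop, detach the persistent disk, then delete
photon vm stop "<uuid-of-lab-vm2>"
photon vm detach-disk "<uuid-of-lab-vm2>" --disk "<uuid-of-disk-2>"
photon vm delete "<uuid-of-lab-vm2>"

# lab-vm1: only stop and delete (steps 2 and 4; no disk is attached here)
photon vm stop "<uuid-of-lab-vm1>"
photon vm delete "<uuid-of-lab-vm1>"
```

Deleting the stub and substituting real UUIDs turns this into the actual cleanup.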


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts, and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201  (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here, if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit, to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm Flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster you will get an error, because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. That is great for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
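As a rough sketch, the sequence looks like the following. The repository URL and provider name reflect the standard kube-up flow; this is not runnable inside the lab, which has no external network access.

```shell
# Sketch of the kube-up deployment flow on Photon Platform (assumes the
# photon-controller provider scripts exist in your Kubernetes checkout).
export KUBERNETES_PROVIDER=photon-controller   # select our deployment scripts
echo "Provider: $KUBERNETES_PROVIDER"

# With network access, the full flow would be roughly:
#   git clone https://github.com/kubernetes/kubernetes.git
#   cd kubernetes
#   make quick-release
#   ./cluster/kube-up.sh
```

Setting KUBERNETES_PROVIDER is what routes kube-up to a particular cloud's scripts; everything else is the standard Kubernetes build-and-deploy path.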

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
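The files above are specific to the lab, but to give a feel for the format, here is a minimal sketch of what a pod manifest of this kind typically looks like. The names, labels, and image below are illustrative assumptions, not the lab's actual file contents.

```shell
# Write an illustrative Pod manifest (names and labels are assumptions).
cat <<'EOF' > /tmp/nginx-pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80    # the port Nginx listens on inside the container
EOF

# A manifest like this is deployed the same way as the lab files:
#   kubectl create -f /tmp/nginx-pod-example.yaml
grep -c 'kind: Pod' /tmp/nginx-pod-example.yaml
```

The Service and Replication Controller files follow the same apiVersion/kind/metadata/spec shape, with their own spec fields.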

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number wesaw earlier

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
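Steps 1 and 2 can be combined in one pipeline. As a sketch, with simulated docker ps output for illustration (the real docker ps puts the container ID in the first column):

```shell
# Extract a container ID from `docker ps`-style output. The sample text is
# simulated; in the lab you would pipe the real `docker ps` output instead.
sample='CONTAINER ID  IMAGE                  COMMAND
a1b2c3d4e5f6  rancher/server:latest  "/usr/bin/entry"'
cid=$(echo "$sample" | awk '/rancher\/server/ {print $1}')
echo "$cid"

# In the lab, the equivalent would be roughly:
#   cid=$(docker ps | awk '/rancher\/server/ {print $1}')
#   docker kill "$cid"
```

This avoids copying the ID by hand; awk matches the image name and prints the first column of that row.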

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, then hit Return

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5. Click on the Create button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
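What the UI form captured in steps 2 and 4 can be expressed as plain Docker. A sketch, using the registry address and ports from the steps above (the docker run line is the conceptual equivalent of the form, not something you need to execute):

```shell
# Compose the registry-qualified image name entered in step 2.
REGISTRY=192.168.120.20:5000       # the lab's local Docker registry
IMAGE=$REGISTRY/nginx
echo "$IMAGE"

# The port mapping from step 4 (host 2000 -> container 80) is what
#   docker run -d -p 2000:80 $IMAGE
# would do on the command line.
```

Rancher is building and running essentially this command on the host for you.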

Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks

Photon Platform includes centralized management of the base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi standard networking is used in this lab; however, NSX support is also available.)

View Images

Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of datastores defined by the Administrator; datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy-on-write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1. To see the images already uploaded, execute the following command:

photon image list

Do not upload an image in this environment, because of bandwidth constraints; however, the command to do it is photon image create filename -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you: 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future; 2) kube is the boot image for the nodes in a running Kubernetes cluster, which you will use in Module 3; 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module.

Each image has a Replication Type: EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly, at the

expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of the placement. The creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show UUID of image

The UUID of the image is in the photon image list command results.
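If you want to capture an image UUID without copying it by hand, you can filter the list output. A sketch, assuming the UUID is the first column (the sample output below is simulated, and the real column layout may differ):

```shell
# Pull a UUID out of `photon image list`-style output by image name.
# Simulated sample; in the lab, pipe the real command instead:
#   image_id=$(photon image list | awk '/PhotonOS/ {print $1}')
sample='ID              Name      State  Size
uuid-photon-os  PhotonOS  READY  16GB'
image_id=$(echo "$sample" | awk '/PhotonOS/ {print $1}')
echo "$image_id"

# Then, for example:  photon image show "$image_id"
```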

View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, ephemeral disk, and persistent disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment: they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your cloud hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement; it's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.

2. To easily see the network we have created, execute:

photon network list

Create VM

We are now ready to create a VM, using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile (the tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement). boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
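Putting those elements together with shell variables can make the structure easier to see. A sketch; the UUID values are placeholders you would fill in from photon network list and photon image list:

```shell
# Assemble the VM create command from its parts (placeholder UUIDs).
NAME=lab-vm1
VM_FLAVOR=my-vm
DISK_SPEC="disk-1 my-eph-disk boot=true"   # ephemeral boot disk and its flavor
NETWORK_ID="network-uuid-placeholder"      # from: photon network list
IMAGE_ID="image-uuid-placeholder"          # from: photon image list
cmd="photon vm create --name $NAME --flavor $VM_FLAVOR --disks \"$DISK_SPEC\" -w $NETWORK_ID -i $IMAGE_ID"
echo "$cmd"
```

Echoing the assembled command before running it is a handy way to check each element is what you intended.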

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
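To script later steps, the UUID can be captured from the list output. A sketch, assuming the UUID is the first column (sample output simulated; check your actual column layout):

```shell
# Grab a VM's UUID from `photon vm list`-style output by VM name.
# Simulated sample; in the lab:
#   vm_id=$(photon vm list | awk '$2 == "lab-vm1" {print $1}')
sample='ID         Name     State
uuid-vm-1  lab-vm1  STOPPED
uuid-vm-2  lab-vm2  STOPPED'
vm_id=$(echo "$sample" | awk '$2 == "lab-vm1" {print $1}')
echo "$vm_id"

# Then, for example:  photon vm start "$vm_id"
```

Matching on the exact name field ($2) rather than a loose pattern avoids also matching lab-vm2.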

Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1

Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM and, when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the Disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk-2"

Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.
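The lab script does this for you, but conceptually there are only three steps: format the device, create the mount point, and mount it. A dry-run sketch of what a script like mount-disk-lab-vm1.sh might do; the device name comes from the step above, the ext4 filesystem type is an assumption, and `echo` prints each command instead of running it:

```shell
dev=/dev/sdb             # the attached persistent disk
mnt=/mnt/dockervolume    # mount point used by the Docker volume

# Dry run: echo the commands rather than executing them (mkfs destroys existing data).
echo mkfs.ext4 "$dev"
echo mkdir -p "$mnt"
echo mount "$dev" "$mnt"
```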

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
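The IP:port/image convention can be made explicit with shell parameter expansion, splitting the reference into the registry endpoint and the image name (a small sketch using the lab's registry address):

```shell
image='192.168.120.20:5000/nginx'

registry=${image%%/*}   # text before the first slash: the registry host:port
name=${image#*/}        # text after it: the image name within that registry
echo "registry=$registry image=$name"
```

Without the IP:port prefix, Docker would instead try to pull nginx from the default public registry.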

Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Type cw to change the word, and then type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
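If you are not comfortable in vi, the same one-word edit can be done non-interactively with sed. A sketch against a throwaway copy of the page; the stock heading text is an assumption, and in the lab the real file is /mnt/dockervolume/index.html:

```shell
tmp=$(mktemp)
printf '<h1>Welcome to nginx!</h1>\n' > "$tmp"   # stand-in for the stock home page

# Make the same change steps 2-6 describe in vi, in one command.
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```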

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk-2"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its web content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.

1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the Toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
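Steps 2-5 can be collapsed into a single loop over both hosts. A dry-run sketch: `echo` prints each command so nothing actually runs; remove it to perform the restarts:

```shell
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```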

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Click Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.

2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks disk-1 cluster-vm-disk boot=true -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab; if you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
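For orientation, a replication controller definition such as nginx-rc.yaml typically looks something like the sketch below. This is illustrative only; the names, labels, replica count, and image are assumptions, so compare it against the actual file you just viewed:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # maintain 3 copies of the pod
  selector:
    app: nginx-demo      # pods carrying this label count toward the 3
  template:              # pod template used to create replacements
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```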

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, then hit Return.

View the Agent Container

To view your running container

1. Execute: docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click the + Portmap sign to see these fields.

5. Click the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

... at the expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement, at the time of placement. Creation takes longer, but storage usage is more efficient.

2. To see more detail on a particular image, execute the following command:

photon image show UUID of image

The UUID of the image is in the photon image list command results.


View Flavors

Flavors need a bit of explanation. There are three kinds of flavors in Photon Platform: VM, ephemeral disk, and persistent disk flavors. Ephemeral disks are what you are used to in your current ESXi environment; they are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently of any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then, if the VM dies, the disk can be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes master and worker node VMs. Notice that the master node flavor will create a larger VM than the other flavors.

Create New Flavors

We are going to create one of each type of flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This flavor could have been tagged to match tags on datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your cloud hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX; that functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

Note: you can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious: it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
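The UUID lookups and the create command above can also be scripted together. The sketch below mocks the photon CLI (it is not available outside the lab) so the snippet runs standalone; the two-column list output and the sample UUIDs are assumptions, so check them against your environment before reusing the awk filters.

```shell
# Sketch: capture network and image UUIDs, then issue the vm create command.
# `photon` is mocked here; with the real CLI, delete this function and the
# commands run for real instead of being echoed.
photon() {
  case "$1 $2" in
    "network list") echo 'net-1111  lab-network  READY' ;;   # mocked list output
    "image list")   echo 'img-2222  photon-os    READY' ;;   # mocked list output
    *)              echo "photon $*" ;;                      # echo anything else
  esac
}

NET_UUID=$(photon network list | awk '{print $1}')
IMG_UUID=$(photon image list | awk '$2 == "photon-os" {print $1}')

photon vm create --name lab-vm1 --flavor my-vm \
  --disks "disk-1 my-eph-disk boot=true" -w "$NET_UUID" -i "$IMG_UUID"
```

With the mock removed, the same two command substitutions feed the real create command, so you never copy UUIDs by hand.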

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image


Note: the easiest way to create this is to hit the up arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the create VM command output. You can also get it by executing photon vm list.
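Because nearly every step below needs a UUID from photon vm list, it can help to capture it in a shell variable by name. This is a sketch: the photon CLI is mocked here so the snippet runs standalone, and the "UUID Name State" column layout plus the sample UUIDs are assumptions to verify against your own list output.

```shell
# Sketch: look up a VM's UUID by name from `photon vm list` output.
# The mock below stands in for the real CLI; remove it in the lab and pipe
# the real `photon vm list` into the same awk filter.
photon() {
  printf '%s\n' \
    'a3f1c2d4-0000-1111-2222-333344445555  lab-vm1  STOPPED' \
    'b4e2d3c5-6666-7777-8888-999900001111  lab-vm2  STOPPED'
}

VM1_UUID=$(photon vm list | awk '$2 == "lab-vm1" {print $1}')
echo "$VM1_UUID"
# With the real CLI you could then run:  photon vm start "$VM1_UUID"
```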


Show VM details

More information about the VM can be found using the show command

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot-add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently, but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's attach it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: it may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
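The "keep trying" step above can be automated with a small polling loop. This is a sketch: the photon call is mocked (the IP "appears" on the third poll) so the loop is runnable standalone, and the sample IP is an assumption; in the lab, swap in the real photon vm networks command and a real sleep interval.

```shell
# Sketch: poll until the VM's IP shows up, instead of re-running the command by hand.
attempt=0
IP=""
while [ -z "$IP" ] && [ "$attempt" -lt 30 ]; do
  attempt=$((attempt + 1))
  # Real CLI (assumed filter):
  #   IP=$(photon vm networks "UUID of lab-vm1" | awk '/^[0-9]+\./ {print $1; exit}')
  [ "$attempt" -ge 3 ] && IP="192.168.100.201"   # mock: IP appears on the 3rd poll
  [ -z "$IP" ] && sleep 0                        # use e.g. `sleep 10` with the real CLI
done
echo "got IP $IP after $attempt tries"
```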


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default HTTP port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of containerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit
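The container ID lookup in step 1 above can also be scripted instead of copied by hand. This is a sketch: docker is mocked here so the snippet runs standalone; on lab-vm1 the real docker binary is used and the mock must be removed.

```shell
# Sketch: grab the short ID of the most recently created container, then exec into it.
docker() { echo "abc123def456"; }   # mock of `docker ps -lq` output; remove in the lab
CID=$(docker ps -lq)                # -l: latest container, -q: print only the ID
echo "$CID"
# On the real host:  docker exec -it "$CID" bash
```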

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
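If you prefer not to use vi, the same edit can be done non-interactively with sed. This is a sketch demonstrated on a sample line; the exact headline text in the default Nginx page is an assumption, and on lab-vm1 you would run sed against /mnt/dockervolume/index.html instead.

```shell
# Sketch: rewrite the Nginx headline with sed instead of editing in vi.
line='<h1>Welcome to nginx!</h1>'   # assumed sample of the default page's headline
new=$(printf '%s\n' "$line" | sed 's/nginx/Hands On Lab At VMWORLD 2016/')
echo "$new"
# Against the real file on lab-vm1:
#   sed -i 's/nginx/Hands On Lab At VMWORLD 2016/' /mnt/dockervolume/index.html
```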

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with photon disk list.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: you may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM: mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the content it serves, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
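The five cleanup steps above can be collected into a short script. This is a sketch: the photon CLI is mocked (it just echoes what it would run) so the snippet is runnable standalone, and the UUID placeholders stand in for the values you would capture from photon vm list and photon disk list.

```shell
# Sketch: scripted cleanup of both VMs. Remove the mock when running in the lab.
photon() { echo "photon $*"; }   # mock stand-in for the real photon CLI

VM1="UUID-of-lab-vm1"            # placeholders; fill from `photon vm list`
VM2="UUID-of-lab-vm2"
DISK="UUID-of-disk-2"            # from `photon disk list`

photon vm stop "$VM2"
photon vm detach-disk "$VM2" --disk "$DISK"   # only lab-vm2 has the disk attached
photon vm delete "$VM2"
photon vm stop "$VM1"
photon vm delete "$VM1"
```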


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder, and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also serve data to other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Click Add Panel.

3. Click Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
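As a point of reference, a minimal Replication Controller definition of the kind nginx-rc.yaml contains might look like the following sketch. The names, labels and image here are illustrative assumptions, not the lab's actual file — cat the files above to see the real configuration.

```yaml
# Illustrative sketch only - see ~/demo-nginx/nginx-rc.yaml for the lab's real file
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # desired number of Pod copies
  selector:
    app: nginx-demo        # Pods matching this label are managed by the RC
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```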


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
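After the three create calls, you would typically verify what was deployed. This dry-run sketch prints the usual verification commands rather than running them (the controller name nginx-demo is an assumption — use the name from your rc file, and remove the `echo` to run for real):

```shell
#!/bin/sh
# Dry-run sketch: print the kubectl commands used to verify the deployment.
# "nginx-demo" is an assumed controller name from this lab's rc file.
verify_deploy() {
  echo "kubectl get pods"
  echo "kubectl get rc nginx-demo"
  echo "kubectl get svc"
}
verify_deploy
```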


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
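The three steps above amount to a kill-and-recreate cycle. This dry-run sketch prints the commands rather than executing them; the real run command in the lab is recalled from shell history (!885), so the docker run flags shown here are assumptions based on Rancher's standard invocation, and CONTAINER_ID is a placeholder.

```shell
#!/bin/sh
# Dry-run sketch of the Rancher Server redeploy. CONTAINER_ID is a placeholder;
# the lab's actual run command is stored in shell history (!885).
registry="192.168.120.20:5000"
redeploy_rancher() {
  echo "docker kill CONTAINER_ID"
  echo "docker run -d -p 8080:8080 $registry/rancher/server"
}
redeploy_rancher
```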


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image, because the registration numbers are specific to your host.

1. Either right click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
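For comparison, the UI settings above correspond roughly to the following docker run invocation. This is a dry-run sketch: the image path is this lab's local registry, the container name and exact flags are assumptions, and the command is printed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch: the CLI equivalent of the container configured in the Rancher UI
# (host port 2000 mapped to container port 80, image from the lab's local registry).
nginx_run_cmd() {
  echo "docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx"
}
nginx_run_cmd
```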


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


View Flavors

Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create, and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created and a persistent disk attached; then if the VM dies, the disk could be attached to another VM.

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (boot) disks and persistent storage volumes.

You will specify the VM and disk flavors as part of the VM or disk creation command.

1. To view existing Flavors, execute the following command:

photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node VMs. Notice that the Master node Flavor will create a larger VM than the other Flavors.

Create New Flavors

We are going to create one of each type of Flavor to be used in this module.

1. Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT, vm.memory 1 GB"


VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 10 COUNT"

This Flavor could have been tagged to match tags on Datastores so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 10 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default Photon Controller will discover the available networks on your Cloud Hostsand choose one of them for VM placement To limit the scope of this discovery you cancreate a network object and reference it when creating a vm or cluster This networkobject is also the basis for creating logical networks with NSX That functionality will beavailable shortly after VMworld 2016 In our lab environment there is only onePortgroup available so you wouldnt actually need to specify a network in your VMcreate command but we are going to use it to show the functionality We have alreadycreated this network for you

1 If you needed to create a network you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.
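For example, if more than one portgroup should be eligible for placement, the -p whitelist can name several (assuming -p accepts a comma-separated list; the portgroup names below are hypothetical, and the command is echoed so you can inspect it before running it):

```shell
# Hypothetical portgroup names; the lab has only "VM Network".
PORTGROUPS='VM Network 1, VM Network 2'
CMD="photon network create -n prod-network -p \"$PORTGROUPS\" -d \"Two-portgroup whitelist\""
echo "$CMD"
```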


2 To easily see the Network we have created execute

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1 Execute the following command

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
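One way to keep the UUIDs straight is to capture them in shell variables before assembling the create command. This sketch echoes the final command so you can review it; the placeholder values are assumptions to be filled in from photon network list and photon image list:

```shell
NETWORK_UUID='<uuid-from-photon-network-list>'   # placeholder
IMAGE_UUID='<uuid-from-photon-image-list>'       # placeholder
echo photon vm create --name lab-vm1 --flavor my-vm \
     --disks "disk-1 my-eph-disk boot=true" -w "$NETWORK_UUID" -i "$IMAGE_UUID"
```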

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2 Execute the following command

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the left arrow key until you get to the name and change the 1 to a 2. Finally hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.
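If you want to script the lookup, the UUID can be pulled out of the listing by name. This sketch assumes photon vm list prints a UUID / State / Name table; the sample rows below stand in for real output so the parsing can be tried anywhere:

```shell
# Sample rows standing in for real `photon vm list` output (assumed layout).
sample_output='UUID                                  State    Name
1a2b3c4d-0000-0000-0000-000000000001  STOPPED  lab-vm1
1a2b3c4d-0000-0000-0000-000000000002  STOPPED  lab-vm2'
# In the lab you would pipe the real command instead:
#   photon vm list | awk '$3 == "lab-vm1" {print $1}'
vm_uuid=$(printf '%s\n' "$sample_output" | awk '$3 == "lab-vm1" {print $1}')
echo "$vm_uuid"
```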


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM and, when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start <UUID of lab-vm1>

3 To find the VM IP for lab-vm1 Execute

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image; this is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
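The same invocation spelled out with Docker's long-form flags may make the pieces easier to read (a readability sketch only; run the short form from the lab guide):

```shell
# Long-form equivalents: -v is --volume, -d is --detach, -p is --publish.
CMD='docker run --volume /mnt/dockervolume:/volume --detach --publish 80:80 192.168.120.20:5000/nginx'
echo "$CMD"
```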


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute

docker exec -it <first 3 characters of container ID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume), execute


df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14 with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient you can open the vSphere Client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the web content it serves, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /usr/share/nginx/html, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation; it resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3 you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1
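The per-VM cleanup above can be sketched as a loop. The UUIDs are placeholders to be filled in from photon vm list, the commands are only echoed for review, and lab-vm2's disk must still be detached first, as in step 3:

```shell
# Placeholders; substitute real UUIDs from `photon vm list`.
for vm_uuid in '<uuid-of-lab-vm1>' '<uuid-of-lab-vm2>'; do
  echo photon vm stop "$vm_uuid"
  echo photon vm delete "$vm_uuid"
done
```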


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Log in to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202
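Steps 2-5 can be collapsed into one loop run from the PhotonControllerCLI VM. The commands are echoed here as a sketch; each ssh will still prompt for the root password:

```shell
# Restart the photon-controller-agent on both ESXi hosts.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```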

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To Loginsight

1 From your browser, select the LogInsight Bookmark from the toolbar and log in as user admin, password VMware1

Query For The Create Task

Once you log in you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1 From the Windows Desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster you will get an error, because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid of demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.
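The same example with its parameters pulled into shell variables, which makes the required inputs explicit. Every value below is a placeholder for your environment, and the command is only echoed for review:

```shell
# Placeholders; fill in from your environment before running for real.
DNS='<dns-server>'; GATEWAY='<gateway>'; NETMASK='<netmask>'
MASTER_IP='<kube-master-ip>'; ETCD_IP='<etcd-static-ip>'
CONTAINER_NET='<kubernetes-container-network>'; NET_UUID='<uuid-of-demo-network>'
echo photon cluster create -n Kube5 -k KUBERNETES --dns "$DNS" --gateway "$GATEWAY" \
     --netmask "$NETMASK" --master-ip "$MASTER_IP" --container-network "$CONTAINER_NET" \
     --etcd1 "$ETCD_IP" -w "$NET_UUID" -s 5
```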

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
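As a rough illustration of what a Pod definition looks like, the sketch below writes a minimal one to a temporary file. This is not the lab's nginx-pod.yaml; the names and the registry path are assumptions, so use the provided files for the actual exercise:

```shell
# Illustrative Pod spec; names and image tag are assumptions.
cat <<'EOF' > /tmp/nginx-pod-sketch.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # lab's local registry (assumed tag)
    ports:
    - containerPort: 80
EOF
cat /tmp/nginx-pod-sketch.yaml
```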

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml
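The three create calls above follow one pattern, so they can also be issued in a loop (echoed here as a sketch):

```shell
for f in nginx-pod nginx-service nginx-rc; do
  echo kubectl create -f ~/demo-nginx/"$f".yaml
done
```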


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint We will use it in a couple ofsteps


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
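Changing the desired count is a one-liner against the Replication Controller. This is a sketch: the RC name nginx-demo is an assumption based on the UI, so check kubectl get rc for the real name before running it.

```shell
# Assumed RC name; verify with `kubectl get rc` first.
SCALE_CMD='kubectl scale rc nginx-demo --replicas=2'
echo "$SCALE_CMD"   # then watch convergence with: kubectl get pods
```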

We can connect to the application directly through the Node IP and the port number wesaw earlier


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
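
Steps 1 and 2 above can be combined. This helper is an illustration only (it is not in the lab environment); it extracts the container ID, which is the first column of docker ps output:

```shell
# Print the ID (first column) of the first "docker ps" row whose line matches
# the given pattern; the listing is read from stdin.
cid_of() {
  grep "$1" | awk 'NR == 1 { print $1 }'
}

# One-line equivalent of steps 1 and 2:
#   docker kill "$(docker ps | cid_of rancher/server)"
```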


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is VMware1.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state
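
The three cleanup commands can be kept together as a reviewable script. This sketch only prints them; the pattern of piping them to the host over ssh is an assumption, not a lab step:

```shell
# Print the Rancher host cleanup commands (steps 2-4) on stdout so they can
# be reviewed, or piped to the host in one shot.
rancher_cleanup_cmds() {
  cat <<'EOF'
rm -rf /var/lib/rancher/state
docker rm -vf rancher-agent
docker rm -vf rancher-agent-state
EOF
}

# e.g.  rancher_cleanup_cmds | ssh root@192.168.100.201 sh
```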


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the Pull the latest image box.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



VMs created with this Flavor will have 1 vCPU and 1 GB of RAM.

2. Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores, so that storage profiles are part of the disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing cost as part of a chargeback process.

3. Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

4. To easily see the Flavors you just created, execute:

photon flavor list | grep my-

Create Networks

By default, Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a VM or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld 2016. In our lab environment there is only one portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you.

1. If you needed to create a network, you would issue the following command:

photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network.


2. To easily see the network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
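
Looking up the two UUIDs by hand can be scripted. The helper below is an illustration only; the column layout of the photon list output, and the image name photon-os, are assumptions:

```shell
# Print the UUID (assumed to be the first column) of the row whose name
# (assumed to be the second column) matches, reading a "photon ... list"
# style table from stdin.
uuid_of() {
  awk -v name="$1" '$2 == name { print $1 }'
}

# Sketch of a non-interactive create using the helper:
#   photon vm create --name lab-vm1 --flavor my-vm \
#     --disks "disk-1 my-eph-disk boot=true" \
#     -w "$(photon network list | uuid_of lab-network)" \
#     -i "$(photon image list | uuid_of photon-os)"
```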

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk. --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information: both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will instead be presented to the containers through Docker volumes that are mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
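
If you would rather poll than retry by hand, a loop can wait for the address. The helper below just checks whether its input contains a dotted-quad IP; the polling loop in the comment is a sketch, since photon is only available inside the lab:

```shell
# Succeed when stdin contains something that looks like a dotted-quad IP.
has_ip() {
  grep -qE '([0-9]{1,3}\.){3}[0-9]{1,3}'
}

# e.g.  until photon vm networks "$VM_UUID" | has_ip; do sleep 10; done
```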


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of container ID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
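
Docker accepts any unique prefix of a container ID, which is why three characters are enough here. A tiny helper to take the prefix (illustrative only; the ID in the example is made up):

```shell
# Print the first three characters of a container ID; Docker accepts any
# unique ID prefix, so this is enough to address the container.
short_id() {
  printf '%s\n' "$1" | cut -c1-3
}

# e.g.  docker exec -it "$(short_id d3adbeef1234)" bash
```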

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
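
The per-VM sequence can be written down once and repeated. This sketch only prints the commands for review rather than running them; the disk argument is optional because lab-vm1 no longer has a disk attached:

```shell
# Print the stop / detach / delete commands (steps 2-4) for one VM.
# Pass a disk UUID only for the VM that still has the persistent disk attached.
vm_cleanup_cmds() {
  vm=$1
  disk=$2
  echo "photon vm stop $vm"
  if [ -n "$disk" ]; then
    echo "photon vm detach-disk $vm --disk $disk"
  fi
  echo "photon vm delete $vm"
}

# e.g.  vm_cleanup_cmds "UUID of lab-vm2" "UUID of disk-2"
```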


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
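The restart steps above can be condensed into a single loop. Here is a dry-run sketch that only prints the commands for each host; remove the leading echo to actually ssh in and restart the agents:

```shell
# Print the restart command for each of the two lab ESXi hosts.
# Remove 'echo' to actually run the restarts over ssh.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@$host /etc/init.d/photon-controller-agent restart
done
```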


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.
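Under the covers, Grafana simply issues HTTP requests against Graphite's render API using that endpoint. As a sketch, this is the kind of request URL it builds; the base URL and the photon metric path below are illustrative assumptions, not the lab's exact values:

```shell
# Build a Graphite /render URL like the ones Grafana issues for its charts.
# base and target are assumed example values following the photon -> host -> cpu
# hierarchy seen in the Graphite browser.
base="http://graphite.lab.local"
target="photon.esxi-host-1.cpu.usage"
echo "${base}/render?target=${target}&from=-60min&format=json"
```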

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see that the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
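To give you a feel for what these files contain before you cat them, a minimal ReplicationController manifest for three nginx replicas looks roughly like this. This is an illustrative sketch of the v1 API format, not the lab's actual file; names and labels are assumed:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # maintain three copies of the pod
  selector:
    app: nginx-demo        # pods matching this label are managed
  template:                # pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```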


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
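If you want to experiment with scaling, the replica count can be changed with kubectl scale. A dry-run sketch that only prints the command (the controller name nginx-demo is taken from the UI above and may differ in your lab; remove the echo to apply it from the CLI VM):

```shell
# Print the kubectl command that would resize the Replication Controller
# to five replicas. Remove 'echo' to actually apply it.
echo kubectl scale rc nginx-demo --replicas=5
```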

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:port-number. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID (using the Container ID you just found). This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1 Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
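For reference, the container form above is equivalent to a docker run with a port mapping. A dry-run sketch that only prints the command, using the lab's local registry image and the 2000-to-80 port map (the registry address is assumed from the earlier note; remove the echo to run it directly on the Rancher host instead of via the UI):

```shell
# Print the docker run equivalent of the Rancher container definition:
# detached container, host port 2000 mapped to container port 80,
# image served from the lab's local registry.
echo docker run -d -p 2000:80 192.168.120.20:5000/nginx
```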


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


2. To easily see the Network we have created, execute:

photon network list


Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.

Create a Second VM

This VM will be used later in the lab, but it's very easy to create now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks disk-1 my-eph-disk boot=true -w UUID of your Network -i UUID of your PhotonOS image

Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name, and change the 1 to a 2. Finally, hit Return to execute.
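If you prefer scripting to arrow keys, the same one-character change can be expressed with shell substitution. A sketch using bash parameter expansion (the stored command string here is illustrative):

```shell
# The previous command, stored in a variable for illustration. Interactively,
# bash's quick substitution `^lab-vm1^lab-vm2^` achieves the same effect.
prev='photon vm create --name lab-vm1 --flavor my-vm'

# ${var/old/new} replaces the first occurrence of "lab-vm1" with "lab-vm2".
next="${prev/lab-vm1/lab-vm2}"
echo "$next"   # photon vm create --name lab-vm2 --flavor my-vm
```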

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.

Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you see it through the vSphere Client.

Stop VM

We are going to shut down the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1

Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.

1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"

Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.

Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
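The IP:port/image naming convention can be pulled apart with shell parameter expansion. This sketch only illustrates how a registry-tagged reference like the one above decomposes into a registry endpoint and an image name:

```shell
image_ref='192.168.120.20:5000/nginx'

registry="${image_ref%%/*}"   # strip everything from the first slash on
image_name="${image_ref#*/}"  # strip everything up to the first slash

echo "$registry"     # 192.168.120.20:5000
echo "$image_name"   # nginx
```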

Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first 3 chars of container ID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.
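Rather than eyeballing the first three characters, the container ID can also be extracted from docker ps output programmatically. The output below is a hypothetical sample (IDs and column spacing will differ on your host), so the snippet parses the sample text instead of a live daemon; with a running daemon, docker ps -q gives the IDs directly.

```shell
# Hypothetical `docker ps` output captured as text.
ps_output='CONTAINER ID   IMAGE                       COMMAND   PORTS
3f2a1b9c0d4e   192.168.120.20:5000/nginx   nginx     0.0.0.0:80->80/tcp'

# First column of the line mentioning the nginx image.
cid=$(printf '%s\n' "$ps_output" | awk '/nginx/ {print $1}')
echo "$cid"   # 3f2a1b9c0d4e
```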

2. To see the filesystem inside the container, and to verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw to change the word, and type: the Hands On Lab At VMWORLD2016.

5. Press the Esc key, and then the : key.

6. At the prompt, enter wq to save changes and exit vi.

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
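If you would rather not use vi, the same one-word edit can be made non-interactively with sed. This is a sketch against an illustrative heading string; on the VM you would point it at /mnt/dockervolume/index.html instead.

```shell
# Illustrative stand-in for the heading line in index.html.
line='<h1>Welcome to Nginx!</h1>'

# Replace the word Nginx, as the cw edit in vi did.
edited=$(printf '%s\n' "$line" | sed 's/Nginx/the Hands On Lab At VMWORLD2016/')
echo "$edited"
```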

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the Persistent Disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2

Remember that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
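Since the stop/delete sequence is the same for both VMs, it also scripts naturally as a loop (the detach step for lab-vm2 still has to run first, as above). A dry-run sketch: the commands are echoed rather than executed, and the UUIDs are placeholders to fill in from photon vm list.

```shell
# Echo the commands instead of running them -- remove the echo once the
# placeholder UUIDs are replaced with real ones.
cmds=$(for vm in UUID-of-lab-vm1 UUID-of-lab-vm2; do
  echo "photon vm stop $vm"
  echo "photon vm delete $vm"
done)
printf '%s\n' "$cmds"
```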

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts, and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
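Since the restart is identical on both hosts, it can be wrapped in a loop. Dry-run sketch (commands echoed rather than executed; drop the echo to run them, entering the VMware1 password per host):

```shell
# Build the command list for both ESXi hosts without executing anything.
cmds=$(for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done)
printf '%s\n' "$cmds"
```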

It will take a couple of minutes for the stats to begin showing up in the browser, and you may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a sink for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again, and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks disk-1 cluster-vm-disk boot=true -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info), so we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed, and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration, and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed, and see how to scale additional instances. You will kill an instance of the webserver, see that Kubernetes detects the failure and restarts a new container for you, and also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KuberMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods, and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
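If you cannot open the lab files, the general shape of a replication controller definition like nginx-rc.yaml is sketched below. The field names follow the Kubernetes v1 API, but the lab's actual file will differ in details such as the image and labels, so treat this as an illustration rather than a copy of it.

```shell
# Write an illustrative RC definition, then confirm the replica count in it.
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
grep 'replicas:' /tmp/nginx-rc-sketch.yaml
```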

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP, and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration, and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware).
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.

Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Paste with either a right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is <IP:port>/<image-name>. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Create VM

We are now ready to create a VM using the elements we have gone through in the previous steps.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

Note: You can get the UUID of your network with the command photon network list, and the UUID of your image with the command photon image list.

Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count. --disks is a little confusing: disk-1 is the name of the ephemeral disk that is created, and it will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition; however, it could have defined a cost for chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the image that you want to use; in this case we want the PhotonOS image. To get the UUID of the image, execute photon image list.
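To make the moving parts explicit, here is the same command assembled from shell variables. The UUID values are placeholders, not real lab values; you would substitute the output of photon network list and photon image list.

```shell
# Placeholder UUIDs -- in the lab, capture these from `photon network list`
# and `photon image list` instead.
NETWORK_UUID="11111111-2222-3333-4444-555555555555"
IMAGE_UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Assemble the create command; the quoted --disks argument names the
# ephemeral boot disk (disk-1), its flavor (my-eph-disk), and marks it bootable.
CMD="photon vm create --name lab-vm1 --flavor my-vm --disks \"disk-1 my-eph-disk boot=true\" -w $NETWORK_UUID -i $IMAGE_UUID"

echo "$CMD"
```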

Create a Second VM

This VM will be used later in the lab, but it's very easy to create it now.

2. Execute the following command:

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>


Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command, then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to stay powered off for now.

1. To start the VM, execute:

photon vm start <UUID of lab-vm1>

The UUID of the VM is at the end of the create VM command output. You can also get it by executing photon vm list.
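If you prefer not to copy UUIDs by hand, you can pull one out of the photon vm list table with awk. The sample output below is made up for illustration; the real table format may differ slightly.

```shell
# Fake `photon vm list` output (UUID and name columns are illustrative).
sample_list='33333333-aaaa-bbbb-cccc-000000000001  lab-vm1  STOPPED
33333333-aaaa-bbbb-cccc-000000000002  lab-vm2  STOPPED'

# Print the first column of the row whose second column is lab-vm1.
VM_UUID=$(printf '%s\n' "$sample_list" | awk '$2 == "lab-vm1" {print $1}')

echo "photon vm start $VM_UUID"
```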


Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop <UUID of lab-vm1>


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details. --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show <UUID of the disk>

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk <UUID of lab-vm1> --disk <UUID of disk>


Show VM Details

Now we will see the attached disk using the VM show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
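For reference, a script like mount-disk-lab-vm1.sh presumably formats the new device and mounts it; the sketch below is a guess at those steps, echoed as a dry run because mkfs is destructive. The device and mount point come from the step above; everything else is assumption.

```shell
# Dry-run sketch of what a mount-disk script for this lab might do.
# Remove the `echo`s only on a disk you actually intend to format.
DEVICE=/dev/sdb
MOUNTPOINT=/mnt/dockervolume

echo "mkfs.ext4 $DEVICE"         # one-time format of the persistent disk
echo "mkdir -p $MOUNTPOINT"      # create the mount point
echo "mount $DEVICE $MOUNTPOINT" # attach the filesystem
```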

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as <IP:port>/<image>. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
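The same run command, built from variables so the host-path:container-path and host-port:container-port pairings stand out. The values come from the steps above.

```shell
HOST_DIR=/mnt/dockervolume     # where the persistent disk is mounted on the host
CONTAINER_DIR=/volume          # the Docker volume path inside the container
REGISTRY=192.168.120.20:5000   # the local lab registry

# -v maps host dir to container dir; -p maps host port 80 to container port 80.
RUN_CMD="docker run -v $HOST_DIR:$CONTAINER_DIR -d -p 80:80 $REGISTRY/nginx"

echo "$RUN_CMD"
```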


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of ContainerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your ContainerID, execute docker ps to find it.
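Docker accepts any unique prefix of a container ID, which is why three characters are enough here. A quick sketch of shortening an ID (the sample ID is made up):

```shell
FULL_ID="f3a9c1d2e4b5"                          # illustrative container ID
SHORT_ID=$(printf '%s' "$FULL_ID" | cut -c1-3)  # keep the first 3 characters

echo "docker exec -it $SHORT_ID bash"
```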

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome To Nginx".

3. Press the right arrow until you are at the character N in Nginx.

4. Press cw (change word) and type: Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
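The teardown sequence above, sketched as a dry run with placeholder UUIDs (echoed rather than executed, since the photon CLI only exists in the lab):

```shell
# Placeholder UUIDs -- substitute the values from `photon vm list` /
# `photon disk list` in the real lab.
VM2_UUID="33333333-aaaa-bbbb-cccc-000000000002"
DISK_UUID="55555555-dddd-eeee-ffff-000000000001"

echo "photon vm stop $VM2_UUID"
echo "photon vm detach-disk $VM2_UUID --disk $DISK_UUID"  # lab-vm2 holds the persistent disk
echo "photon vm delete $VM2_UUID"
# lab-vm1 has no attached persistent disk, so it only needs stop + delete.
```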


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
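Steps 2-4 can be collapsed into one loop over both hosts. Echoed as a dry run here, since the hosts only exist in the lab; on the real CLI VM you would run the ssh command directly.

```shell
# Restart the photon controller agent on each ESXi host (dry run).
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```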

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here, if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.


2. Select "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to get more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. Application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internally to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed. That is great for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point, so we have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
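The clone/build/kube-up flow described above can be sketched as follows. This is illustrative only: the repo URL, build target, and provider variable value are assumptions and may differ from what the lab environment actually uses.

```shell
# Illustrative sketch -- repo URL, build step, and provider value are assumptions.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release                    # build Kubernetes binaries and images

# Tell kube-up to use the Photon Platform deployment scripts.
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

Because the provider-specific scripts read their settings from configuration files in the provider directory, this is where the customization freedom comes from.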

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
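For reference, a Replication Controller definition of this vintage generally looks like the sketch below. This is not the lab's actual nginx-rc.yaml: the names, labels, and image path are assumptions (the image path follows the local-registry convention used elsewhere in this lab).

```yaml
# Illustrative ReplicationController sketch -- names and image are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # maintain 3 copies of the Pod
  selector:
    app: nginx-demo          # manage Pods carrying this label
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # local lab registry
        ports:
        - containerPort: 80
```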


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin and 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:port number. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2 Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
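The history entry recreates the server along these lines. This is a hedged sketch: the exact image tag and flags come from the lab's saved history entry, which is not shown here, so treat the values below as assumptions.

```shell
# Illustrative only -- image name and flags are assumptions based on the lab text.
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```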


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1 Execute ssh root@192.168.100.201. The password is vmware
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1 Note that the Custom icon is selected
2 Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1 Execute: either right-click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button
2 Click on Infrastructure and Hosts
3 This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
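The port mapping configured in the UI corresponds to the -p flag you would pass if you started the same container by hand (an illustrative sketch only; the image path is the lab's local registry):

```shell
# Host port 2000 -> container port 80, the same mapping configured in the Rancher UI.
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```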


Container Information

1 Once your container is running, check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped

1 From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Note: The easiest way to create this is to hit the Up Arrow on your keyboard to get to the previous photon vm create command. Then hit the Left Arrow key until you get to the name and change the 1 to a 2. Finally, hit Return to execute.

Start VM

The VMs were created but not powered on. We want to power on the first VM only; the second VM needs to remain powered off for now.

1 To start the VM execute

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list.


Show VM details

More information about the VM can be found using the show command

1 To show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1 To Stop the VM Execute

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, they can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the vm IP for lab-vm1 Execute

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1 From the CLI execute

ssh root@IP of lab-vm1 (password is VMware1)


Setup filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created
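For reference, a script like mount-disk-lab-vm1.sh typically performs three steps: make a filesystem on the new device, create a mount point, and mount it. The sketch below is an assumption about what the lab script does (we have not seen its contents); the function is defined but deliberately not invoked here, since formatting a disk is destructive.

```shell
# Hypothetical reconstruction of the mount script's core steps.
# Usage: format_and_mount DEVICE MOUNTPOINT
format_and_mount() {
  local device="$1" mountpoint="$2"
  mkfs -t ext4 "$device"        # create a filesystem (destroys existing data!)
  mkdir -p "$mountpoint"        # ensure the mount point exists
  mount "$device" "$mountpoint" # attach the filesystem
}

# On lab-vm1 this would be called as:
#   format_and_mount /dev/sdb /mnt/dockervolume
# Note the warning later in the lab: re-running the formatting step on the
# second host would wipe the content you stored, which is presumably why
# mount-disk-lab-vm2.sh only mounts and does not reformat.
```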

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means the container runs detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
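To make the flag breakdown concrete, the same invocation can be written with one option per line using a bash array. This is purely a readability sketch; the behavior is identical to the one-line command, and the command is echoed rather than executed since Docker is only available inside the lab VM.

```shell
# Build the option list; bash allows comments inside array assignments.
opts=(
  -v /mnt/dockervolume:/volume  # host directory -> container volume
  -d                            # run detached, in the background
  -p 80:80                      # host port 80 -> container port 80
)
image=192.168.120.20:5000/nginx # image served by the lab's local registry

# Printed here instead of executed; drop the leading `echo` on the lab VM:
echo docker run "${opts[@]}" "$image"
```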


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above. It is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker Volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, Execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume) Execute


df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14 with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type Hands On Lab At VMWORLD 2016

5 Press the Esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder that you can get the UUID of the VM with photon vm list and the UUID of thedisk with photon disk list commands

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3 From the CLI execute

ssh root@IP of lab-vm2 (password is VMware1)


Setup Filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for the content it serves, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means the container runs detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1
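The stop/delete pair can be wrapped in a small helper if you script this cleanup. A hedged sketch, assuming the UUID is the first column and the name the second in `photon vm list` output; the function is defined but not run here, since the photon CLI is only available inside the lab.

```shell
# cleanup_vm NAME: stop and delete the named VM. Detach any attached
# persistent disk first (as in step 3 above), or the delete may fail.
cleanup_vm() {
  local name="$1" uuid
  uuid=$(photon vm list | awk -v n="$name" '$2 == n {print $1}')
  photon vm stop "$uuid"
  photon vm delete "$uuid"
}

# In the lab:
#   cleanup_vm lab-vm2
#   cleanup_vm lab-vm1
```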


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon Folder. You can see two ESXi Hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to step View Graphite Data Through Grafana.

You will ssh into our two esxi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201 (password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5) Repeat steps 2-4 for host 192.168.110.202
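Steps 2-5 can be collapsed into one loop. A sketch only: it echoes the commands rather than running them, since the ESXi hosts are reachable only inside the lab. Drop the leading echo to run it for real; ssh will ask for the VMware1 password once per host.

```shell
# Restart the photon-controller-agent on both ESXi hosts (IPs from the
# lab text). The `echo` makes this a dry run outside the lab.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```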

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the esxi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight Bookmark from the toolbar and login as User admin, password VMware1.

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every Request through the API gets a requestID. There could be many ReqIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes Repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
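If you cannot open the lab files, here is a rough idea of what a Pod definition looks like. This is an illustrative minimal example written for this guide, not the actual contents of the ~/demo-nginx files; the name, labels and image tag are assumptions.

```shell
# Write an illustrative Pod manifest to a temp file (yaml via heredoc).
cat > /tmp/nginx-pod-example.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # assumed local-registry image
    ports:
    - containerPort: 80
EOF

# A file like this is what `kubectl create -f` consumes, e.g.:
#   kubectl create -f /tmp/nginx-pod-example.yaml
```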


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin and 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and Click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2 Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1 Execute ssh root@192.168.100.201 The password is vmware
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1 Note that the Custom icon is selected
2 Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button
2 Click on Infrastructure and Hosts
3 This is your host


Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4 We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1 Once your container is running Check out the performance charts

2 Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1 From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap. Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error, Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

Show VM details

More information about the VM can be found using the show command.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform CloudStore, so you may not see it right away, even if you see it through the vSphere Client.


Stop VM

We are going to shut down the VM in order to attach a persistent disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To stop the VM, execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent disks are VMDKs that live independently of individual virtual machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To create a persistent disk, execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB sets the capacity of the disk to 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start UUID of lab-vm1

3. To find the VM IP for lab-vm1, execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default HTTP port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it Once wehave done that we will move the disk to a new VM Create a new container with DockerVolume and verify that the changes we made have persisted

1 Connect to your running container From the CLI you should still have have anssh connection to lab-vm1 Execute

docker exec -it ldquofirst3CharsOfcontainerIDrdquo bash

This command says to connect to the container through an interactive terminal and runa bash shell You should see a command prompt within the container If you cannot findyour containerID Execute docker ps to find it

2 To see the filesystem inside the container and verify your Docker volume(volume) Execute


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx!".

3. Press the right arrow until you are at the character N in nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
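If you prefer a non-interactive edit, the same change can be scripted with sed. This is a sketch run against a throwaway copy in /tmp (the "Welcome to nginx!" heading is assumed to match the stock home page; in the lab the real file is /mnt/dockervolume/index.html):

```shell
# Make a throwaway copy that mimics the stock heading line, then swap the heading text.
echo '<h1>Welcome to nginx!</h1>' > /tmp/index.html
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' /tmp/index.html
cat /tmp/index.html   # prints: <h1>Hands On Lab At VMWORLD 2016</h1>
```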

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
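To avoid hand-copying UUIDs between commands, the ID column can be pulled out of the list output with awk. A sketch against a canned row (the column layout is assumed to match photon vm list output, UUID first, and the UUID value is entirely hypothetical):

```shell
# Hypothetical row in `photon vm list` style: <UUID> <name> <state>.
row='f1d2c3b4-0000-1111-2222-333344445555  lab-vm1  STOPPED'
# In the lab you would pipe the real command instead: photon vm list | awk ...
vm_uuid=$(echo "$row" | awk '/lab-vm1/ {print $1}')
echo "$vm_uuid"   # prints only the UUID field, ready to reuse in detach/attach commands
```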

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default root for its web content, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.
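For reference, Graphite ingests samples over its plaintext protocol as one "<metric.path> <value> <timestamp>" line per data point. A sketch that only builds such a line locally (the photon.* metric path is a guess at the naming you will see in the Graphite browser; in a live setup the line would be sent to the Graphite host's plaintext listener on port 2003, e.g. with nc):

```shell
# Build a Graphite plaintext-protocol sample: metric path, value, Unix timestamp.
metric_line="photon.host1.cpu.usage 42 $(date +%s)"
echo "$metric_line"
```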

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but this will be enhanced over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8 GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to find the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case, the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command This command allows you to specify thecluster type (Kubernetes Mesos Swarm are currently supported) and size of the clusterYou will also provide additional IP configuration information Photon Platform will

Create the Master and Worker node VMs configure the services (for Kubernetes in thisexample) setup the internal networking and provide a running environment with asingle command We are not going to use this method in the lab If you try to create aCluster you will get an error because there is not enough resource available to createmore VMs

Example photon cluster create -n Kube5 -k KUBERNETES --dns ldquodns-Serverrdquo --gatewayldquoGatewayrdquo --netmask ldquoNetmaskrdquo --master-ip ldquoKubermasterIPrdquo --container-networkldquoKubernetesContainerNetworkrdquo --etcd1 ldquoStaticIPrdquo -w ldquouuid demo networkrdquo -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes We arespecifying the networking configuration for the Kuberetes Master VM and a separateetcd VM (etcd is a backing datastore that holds networking information used by Flannelinternal to Kubernetes) The Worker node VMs will receive IPs from DHCP You willspecify the network on which to place these VMs through the -w option and -s is thenumber of Worker nodes in the cluster The Kubernetes container network is a privatenetwork that is used by Flannel to connect Containers within the Cluster

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This gives you complete freedom to configure the cluster however you want.
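As a sketch, that workflow looks like the script below. The repo URL, the KUBERNETES_PROVIDER value, and the script paths are assumptions based on the standard kube-up layout of the Kubernetes repo; we write the steps to a file for reference rather than running them, since they require network access and a Photon Platform endpoint.

```shell
# Sketch of the kube-up deployment flow; saved to a file for reference
# because the clone/build steps need network access and a Photon endpoint.
cat > kube-up-photon.sh <<'EOF'
#!/bin/sh
set -e
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
# Point kube-up at the Photon Platform deployment scripts
# (provider name is an assumption).
export KUBERNETES_PROVIDER=photon-controller
# Cluster size and network settings come from the provider's config files.
cluster/kube-up.sh
EOF
chmod +x kube-up-photon.sh
```

Because the provider is selected with an environment variable, everything else about the cluster (Kubernetes version, OS image, networking) stays under your control.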

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube-project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents one of the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.
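As a rough sketch of what these three kinds of files contain (hypothetical minimal versions written to /tmp for illustration only; the lab's actual files in ~/demo-nginx will differ in detail):

```shell
# Hypothetical minimal Pod, Replication Controller, and Service definitions,
# written to /tmp so we do not touch the lab's real ~/demo-nginx files.
mkdir -p /tmp/demo-nginx-sketch

cat > /tmp/demo-nginx-sketch/nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # the lab's local registry image
    ports:
    - containerPort: 80
EOF

cat > /tmp/demo-nginx-sketch/nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # maintain 3 copies of the Pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF

cat > /tmp/demo-nginx-sketch/nginx-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort           # exposes an external endpoint on each Node
  selector:
    app: nginx-demo
  ports:
  - port: 80
EOF
```

The selector labels are what tie the three objects together: the Replication Controller and Service both find their containers by matching the app label on the Pods.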

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
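Once the three objects are created, the usual way to confirm them is kubectl get. A sketch is saved to a file for reference here, since the commands need the lab's running cluster; the comments describe what each listing typically shows.

```shell
# Post-deployment verification commands; saved to a file for reference
# because they require a reachable Kubernetes cluster.
cat > verify-nginx-demo.sh <<'EOF'
#!/bin/sh
kubectl get pods      # Pods created directly and by the Replication Controller
kubectl get rc        # desired vs. current replica counts
kubectl get services  # the frontend Service and its endpoint
EOF
chmod +x verify-nginx-demo.sh
```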


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, Connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and Click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or Right Click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either Right Click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Stop VM

We are going to shutdown the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage, so we will shut the VM down first.

1. To Stop the VM, Execute:

photon vm stop UUID of lab-vm1


Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is the need to have ephemeral VMs that may be created/destroyed frequently but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1. To Create a persistent disk, Execute:

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2. More information about the disk can be found using:

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously.


1. To find the VM UUID, Execute:

photon vm list

2. To find the Disk UUID, Execute:

photon disk list

3. To attach the disk to the VM, Execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"


Show VM Details

Now we will see the attached Disk using the VM Show command again.

1. To Show VM details, Execute:

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.
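The exercise that follows can be summarized as the sequence below (a condensed sketch saved to a file for reference, not run here; the UUIDs and IPs are placeholders, the mount-disk scripts are lab-provided helpers, and the container mount path is simplified to the webserver document root):

```shell
# Condensed outline of the persistent-disk workflow; saved to a file since
# it requires the lab's Photon Platform endpoint and VMs to actually run.
cat > persistent-volume-workflow.sh <<'EOF'
#!/bin/sh
# 1. Attach the persistent disk to the first host and mount it there.
photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk-2"
ssh root@IP-of-lab-vm1 ./mount-disk-lab-vm1.sh

# 2. Serve content from the disk via a Docker volume, then edit the content.
ssh root@IP-of-lab-vm1 docker run -d -p 80:80 \
    -v /mnt/dockervolume:/usr/share/nginx/html 192.168.120.20:5000/nginx

# 3. Move the disk to the second host; a new container there
#    picks up the modified content.
photon vm detach-disk "uuid of lab-vm1" --disk "uuid of disk-2"
photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk-2"
EOF
```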


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the vm UUID, Execute:

photon vm list

2. To start lab-vm1, Execute:

photon vm start UUID of lab-vm1

3. To find the vm IP for lab-vm1, Execute:

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, Execute:

ssh root@IP of lab-vm1 (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, Execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, Execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute:


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To Exit the container, Execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux Prompt, type exit to close the ssh session. You are now back in the Photon CLI.
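If you are not comfortable with vi, the same change can be made non-interactively with sed. This is a sketch shown against a sample file; the match text assumes the stock Nginx welcome heading, and on lab-vm1 the target file would be /mnt/dockervolume/index.html.

```shell
# Demonstrate the non-interactive edit on a sample copy of the heading line.
printf '<h1>Welcome to nginx!</h1>\n' > index-sample.html
# Replace the heading text in place (GNU sed -i syntax).
sed -i 's/Welcome to nginx!/Hands On Lab At VMWORLD 2016/' index-sample.html
cat index-sample.html
```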

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, Execute:

photon vm list

2. To get the UUID of the Persistent Disk, Execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, Execute:

photon vm list

2. To attach the disk to lab-vm2, Execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, Execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, Execute:

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, Execute:

ssh root@IP of lab-vm2 (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default root for the pages it serves, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip ahead to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, Execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
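Steps 2-5 above can be collapsed into a small loop run from the PhotonControllerCLI VM. This is a sketch; without ssh keys set up you will be prompted for each host's password:

```shell
# Restart the photon-controller-agent on both ESXi hosts in one pass
for host in 192.168.110.201 192.168.110.202; do
  ssh root@$host /etc/init.d/photon-controller-agent restart
done
```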

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
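As a rough illustration, the Replication Controller file will look something like the sketch below (Kubernetes v1 API, current at the time of this lab). The exact names, labels and image reference in the lab's file may differ; treat every identifier here as hypothetical:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo          # controller name (assumed)
spec:
  replicas: 3               # desired number of Pod copies
  selector:
    app: nginx-demo         # which Pods this controller manages
  template:                 # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # local lab registry
        ports:
        - containerPort: 80
```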


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
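If you want to experiment with scaling and self-healing from the CLI VM, the Replication Controller can be resized with kubectl. The controller name nginx-demo is an assumption here; confirm the real name with kubectl get rc first:

```shell
# List replication controllers to confirm the name
kubectl get rc

# Ask Kubernetes to maintain 4 replicas instead of 3 (name assumed)
kubectl scale rc nginx-demo --replicas=4

# Delete one pod and watch the controller replace it
kubectl delete pod <pod-name>
kubectl get pods
```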

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
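History entry 885 is specific to this lab environment, but conceptually it runs something like the sketch below. The image name and flags are assumptions based on the registry tag noted above; the actual command in your history may differ:

```shell
# Start Rancher Server from the local lab registry, publishing the UI on port 8080
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```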


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606



Persistent Disks

So far we have created a VM with a single ephemeral disk. If we delete the VM, the disk is deleted as well. In a cloud environment there is a need for ephemeral VMs that may be created and destroyed frequently but still need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk.

1 To Create a persistent disk Execute

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2 More information about the disk can be found using

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM.

Attach Persistent Disk To VM

Now we will attach that newly created persistent disk to the VM we created previously


1 To find the VM UUID Execute

photon vm list

2 To find the Disk UUID Execute

photon disk list

3 To attach the disk to the VM Execute

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"
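If you would rather not copy UUIDs by hand, the three lookups above can be scripted. This is a sketch, not part of the lab: it assumes the `photon vm list` and `photon disk list` tables print the UUID in the first column of the row containing the entity's name.

```shell
# Hypothetical helper: print the first column (the UUID) of the first row
# whose text matches the given name. Column layout is an assumption about
# how the photon CLI formats its list output.
id_for() {
  awk -v name="$1" '$0 ~ name { print $1; exit }'
}

# Usage against the live CLI would look like:
#   VM_ID=$(photon vm list | id_for lab-vm1)
#   DISK_ID=$(photon disk list | id_for disk-2)
#   photon vm attach-disk "$VM_ID" --disk "$DISK_ID"
```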


Show VM Details

Now we will see the attached Disk using the VM Show command again

1 To Show VM details execute

photon vm show UUID of lab-vm1

Notice that you can see the disk information, and both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1 To find the vm UUID Execute

photon vm list

2 To start lab-vm1 Execute

photon vm start UUID of lab-vm1

3 To find the VM IP for lab-vm1 Execute

photon vm networks UUID of lab-vm1

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
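If you don't want to keep re-running the command by hand, you can wrap it in a retry loop. This is an illustrative sketch, not part of the lab: the polling interval and attempt count are arbitrary, and it assumes the command eventually prints an IPv4 address once the metadata is populated.

```shell
# Poll a command until its output contains an IPv4 address (sketch; the
# 30-attempt / 10-second values are assumptions, adjust to taste).
wait_for_ip() {
  attempts=30
  while [ "$attempts" -gt 0 ]; do
    ip=$("$@" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n 1)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 10
  done
  return 1
}

# Usage: wait_for_ip photon vm networks "UUID of lab-vm1"
```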


Connect to lab-vm1

1 From the CLI execute

ssh root@<IP of lab-vm1>   (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1 To set up the filesystem Execute

mount-disk-lab-vm1.sh

2 You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

HOL-1730-USE-2

Page 56HOL-1730-USE-2

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
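The IP:port/image naming is just a string convention: everything before the first / names the registry host and port, and the rest names the repository. A quick sketch (pure shell string handling; nothing here talks to Docker):

```shell
# Split an image reference of the form <registry-host>:<port>/<repository>.
registry_of() { printf '%s\n' "${1%%/*}"; }   # text before the first "/"
repo_of()     { printf '%s\n' "${1#*/}"; }    # text after the first "/"

registry_of 192.168.120.20:5000/nginx   # -> 192.168.120.20:5000
repo_of     192.168.120.20:5000/nginx   # -> nginx
```

Tagging for a local registry follows the same convention: an image pulled as nginx would typically be retagged with `docker tag nginx 192.168.120.20:5000/nginx` and pushed with `docker push 192.168.120.20:5000/nginx` (shown for illustration only; the lab's registry is already populated).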


Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1 Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, Execute docker ps to find it.

2 To see the filesystem inside the container and verify your Docker volume (/volume) Execute


df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp /usr/share/nginx/html/index.html /volume

4 To Exit the container Execute

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1 Execute

vi /mnt/dockervolume/index.html

2 Press the down arrow until you get to line 14, with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change the word, and type Hands On Lab At VMWORLD2016

5 Press the Esc key and then the : key

6 At the prompt enter wq to save changes and exit vi


7 At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID of lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID of lab-vm2


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2>   (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop UUID of lab-vm2

3 Execute


photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4 Execute

photon vm delete UUID of lab-vm2

5 Repeat steps 2 and 4 for lab-vm1


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1 Login to the PhotonControllerCLI through Putty

2 From the PhotonControllerCLI Execute

ssh root@192.168.110.201   (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5 Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1 Click on Data Sources We simply pointed to our Graphite Server Endpoint

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2 Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2 Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid of demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
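As a rough sketch of what these definitions contain (the names, labels, and ports below are illustrative assumptions, not the lab's actual files), a replication controller for three nginx replicas plus its front-end service might look like:

```yaml
# Illustrative only -- field values are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                # maintain three copies of the pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort             # expose an external port on each node
  selector:
    app: nginx-demo
  ports:
  - port: 80
```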


Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2 Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3 Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201   (the password is vmware)
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1 Note that the Custom icon is selected
2 Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Paste with either a right-click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the Pull the latest image box.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


1. To find the VM UUID, execute:

photon vm list

2. To find the disk UUID, execute:

photon disk list

3. To attach the disk to the VM, execute:

photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk"

Show VM Details

Now we will see the attached disk, using the VM show command again.

1. To show VM details, execute:

photon vm show "UUID of lab-vm1"

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static: it must be manually updated, or files must be copied into each container that might present it. Our content will instead be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers; Photon Platform persistent disks extend that capability across Docker hosts.

Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start "UUID of lab-vm1"

3. To find the VM IP for lab-vm1, execute:

photon vm networks "UUID of lab-vm1"

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
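If you prefer not to re-run the command by hand while the metadata catches up, the wait can be scripted. This is a self-contained sketch: the photon() function below is a stub standing in for the real CLI, and its output format is invented, so adjust the extraction to whatever your photon vm networks actually prints.

```shell
# Poll "photon vm networks" until an IP appears (metadata can lag).
# photon() is a stub faking the CLI so this sketch runs anywhere;
# delete it to use the real photon binary.
photon() { echo "NetworkName  MacAddress  192.168.100.201  Up"; }

VM_UUID=uuid-of-lab-vm1   # substitute the UUID from "photon vm list"
IP=""
tries=0
while [ -z "$IP" ] && [ "$tries" -lt 30 ]; do
  # grab the first dotted-quad address out of the networks listing
  IP=$(photon vm networks "$VM_UUID" | grep -oE '([0-9]+\.){3}[0-9]+' | head -n 1)
  [ -z "$IP" ] && sleep 10
  tries=$((tries + 1))
done
echo "lab-vm1 IP: $IP"
```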

Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP of lab-vm1 (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
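For reference, a rough equivalent of what a format-and-mount helper like this has to do might look like the following. This is an assumption, not the lab script's actual contents, and it is written as a dry run (DRY_RUN=echo) because mkfs would destroy data.

```shell
# Rough sketch of a format-and-mount helper (assumed, not the actual
# mount-disk-lab-vm1.sh). DRY_RUN=echo prints the privileged commands.
DRY_RUN=echo
DEV=/dev/sdb            # the attached persistent disk
MNT=/mnt/dockervolume   # mount point used throughout this module

$DRY_RUN mkfs -t ext4 "$DEV"   # format: destroys anything on the disk
$DRY_RUN mkdir -p "$MNT"
$DRY_RUN mount "$DEV" "$MNT"
$DRY_RUN df -h "$MNT"          # confirm the filesystem is mounted
```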

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
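The same command with each flag pulled into a named, commented variable makes the pieces easier to see. This is a dry run (DRY_RUN=echo prints the assembled command rather than starting a container):

```shell
# Annotated version of the docker run above; DRY_RUN=echo makes it a
# dry run so nothing is actually started.
DRY_RUN=echo
IMAGE=192.168.120.20:5000/nginx   # registry ip:port baked into the image name
VOLUME=/mnt/dockervolume:/volume  # host path : path inside the container
PORTS=80:80                       # host port : container port

# -d detaches the container so it keeps running in the background
$DRY_RUN docker run -v "$VOLUME" -d -p "$PORTS" "$IMAGE"
```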

Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"

A reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
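Rather than copy/pasting the UUIDs by hand, they can be captured into variables. This sketch is self-contained: the photon() function is a stub with an invented column layout, so with the real CLI you would delete the stub and adjust the awk pattern to match the actual output.

```shell
# Capture both UUIDs into variables instead of copy/pasting them.
# photon() is a stub with an invented column layout so the sketch is
# self-contained; with the real CLI, adjust the awk pattern as needed.
photon() {
  case "$1 $2" in
    "vm list")   echo "11111111-aaaa-bbbb-cccc-000000000001 lab-vm1 STOPPED" ;;
    "disk list") echo "22222222-aaaa-bbbb-cccc-000000000002 disk-2 persistent-disk" ;;
  esac
}

# first column of the row naming the VM / disk we care about
VM_UUID=$(photon vm list | awk '/lab-vm1/ {print $1}')
DISK_UUID=$(photon disk list | awk '/disk-2/ {print $1}')
echo "photon vm detach-disk $VM_UUID --disk $DISK_UUID"
```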

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, execute:

photon vm networks "UUID of lab-vm2"

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@IP of lab-vm2 (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry container we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:

photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1.
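The whole cleanup sequence can be sketched as one script. This is a dry run (DRY_RUN=echo prints the commands); the UUID placeholders must be replaced with the real values noted from photon vm list and photon disk list.

```shell
# Dry-run sketch of the full cleanup; DRY_RUN=echo prints the commands.
# Substitute the real UUIDs noted from "photon vm list"/"photon disk list".
DRY_RUN=echo
VM2_UUID=uuid-of-lab-vm2
VM1_UUID=uuid-of-lab-vm1
DISK_UUID=uuid-of-disk

$DRY_RUN photon vm stop "$VM2_UUID"
$DRY_RUN photon vm detach-disk "$VM2_UUID" --disk "$DISK_UUID"
$DRY_RUN photon vm delete "$VM2_UUID"

$DRY_RUN photon vm stop "$VM1_UUID"    # lab-vm1 no longer has a disk attached
$DRY_RUN photon vm delete "$VM1_UUID"
```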

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
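Steps 2-5 above can be wrapped in one loop over both ESXi hosts. This is a dry-run sketch (DRY_RUN=echo prints the ssh commands; set DRY_RUN to empty to actually run them):

```shell
# Restart the photon-controller-agent on both ESXi hosts in one loop.
# DRY_RUN=echo prints the ssh commands; set DRY_RUN= to run them.
DRY_RUN=echo
for host in 192.168.110.201 192.168.110.202; do
  $DRY_RUN ssh root@"$host" '/etc/init.d/photon-controller-agent restart'
done
```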

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.

2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your Network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.
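The Task ID can also be pulled out of the CLI output programmatically. This sketch is self-contained: the photon() function is a stub whose error text and format are invented, so the grep pattern would need to match whatever your CLI actually prints.

```shell
# Grab the Task ID from the failed create so it can be pasted into the
# LogInsight filter. photon() is a stub with an invented error format;
# match the pattern to whatever your CLI actually prints.
photon() { echo "Task task-id-0001 failed: NotEnoughMemoryResource"; }

# take the token following the word "Task" in the error output
TASK_ID=$(photon vm create --name lab-vm1 2>&1 | grep -oE 'Task [^ ]+' | awk '{print $2}')
echo "LogInsight filter: $TASK_ID"
```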

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar, and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration WithKubernetes on Photon PlatformWe have provided a small Kubernetes cluster deployed on Photon Platform You will seethe process for deploying Opensource Kubernetes on Photon Platform but due to timingand resource constraints in the lab we could not create it as part of the lab You willdeploy the NginxRedis application (Manually deployed in Module Two) via KubernetesYou will verify that multiple instances have been deployed and see how to scale

additional instances You will kill an instance of the webserver and see that kubernetesdetects the failure and restarts a new container for you Also troubleshoot the outagevia LogInsight

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the Nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
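The manual does not reproduce the contents of these files, so as a rough sketch only (NOT the lab's exact files; the names, labels, and registry address below are assumptions based on this manual), the three objects in the Kubernetes v1 API of this era might look like:

```yaml
# Sketch of a Pod, Service, and Replication Controller for the nginx demo.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx   # the lab's local registry
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort          # exposes an external endpoint port on each node
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3             # the 3 replicated copies described above
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
```

The Service's selector is what ties it to the Pods; the Replication Controller uses the same label to count the copies it must keep running.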


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
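The three create commands above can be run back-to-back and then verified from the same CLI VM. This is just a consolidation of the steps already shown, plus standard kubectl get queries to confirm the result:

```shell
# Deploy the Pod, Service, and Replication Controller in one pass.
kubectl create -f ~/demo-nginx/nginx-pod.yaml
kubectl create -f ~/demo-nginx/nginx-service.yaml
kubectl create -f ~/demo-nginx/nginx-rc.yaml

# Then confirm what Kubernetes is now managing.
kubectl get pods
kubectl get rc
kubectl get services
```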


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Open Source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
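The cleanup steps above can also be run as one short session; this is just a sketch consolidating them, with the host IP and paths taken from this manual:

```shell
# Log in to the Rancher host VM (password: vmware) and clear old agent state.
ssh root@192.168.100.201
rm -rf /var/lib/rancher/state       # old Rancher registration state
docker rm -vf rancher-agent         # remove the agent container and its volumes
docker rm -vf rancher-agent-state   # remove the agent state container
```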


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To Loginsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon ControllerCLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

Show VM Details

Now we will see the attached disk, using the vm show command again.

1. To show VM details, execute:

photon vm show <UUID of lab-vm1>

Notice that you can see the disk information, and that both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied in to each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers; Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: it may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, called /volume, that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
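The same run command can be written one flag per line, which makes the mapping described above easier to read (the registry address and mount path are the ones given in this manual):

```shell
docker run \
  -v /mnt/dockervolume:/volume \
  -d \
  -p 80:80 \
  192.168.120.20:5000/nginx
# -v <host-path>:<container-path> : expose the persistent disk as the /volume Docker volume
# -d : run the container in the background (detached)
# -p <host-port>:<container-port> : publish nginx on host port 80
```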


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification:

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "<UUID of lab-vm2>" --disk "<UUID of disk>"
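Taken together, the detach and attach steps amount to this short sequence; the UUID placeholders must be filled in from photon vm list and photon disk list:

```shell
# Move the persistent disk from lab-vm1 to lab-vm2.
photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>
photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk-2>
```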

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: you may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d means to keep the container running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To list the VMs, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
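As a recap, the whole cleanup can be written as the following sequence; the UUIDs come from photon vm list, and only lab-vm2 has the disk attached at this point:

```shell
photon vm list                                                 # note both UUIDs
photon vm stop <UUID of lab-vm2>
photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>
photon vm delete <UUID of lab-vm2>
photon vm stop <UUID of lab-vm1>                               # step 2 for lab-vm1
photon vm delete <UUID of lab-vm1>                             # step 4 for lab-vm1
```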


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.
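
Graphite ingests datapoints over a very simple plaintext protocol: a line of the form "metric value timestamp" sent to TCP port 2003. A sketch of composing one datapoint; the metric name is hypothetical, and graphite-host stands in for this lab's Graphite server:

```shell
# Compose one datapoint in Graphite's plaintext format: "metric value timestamp".
ts=$(date +%s)                         # Graphite expects a Unix epoch timestamp
line="photon.host1.cpu.usage 42 ${ts}" # hypothetical metric name and value
echo "$line"
# To actually send it (not run here): echo "$line" | nc graphite-host 2003
```

This is why almost any agent or script can act as a statistics publisher: no client library is required.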

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1 Connect to the Graphite Server by opening a browser

2 Select the Graphite Browser Bookmark from the Toolbar


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1 Expand cpu and select usage

2 Expand mem and select usage

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5. Repeat steps 2-4 for host 192.168.110.202.
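
Steps 2 through 4, run against both hosts, can be collapsed into one loop. A dry-run sketch that prints the commands instead of running them; the IPs are the lab's two ESXi hosts:

```shell
# Print the restart command for each ESXi host; drop the echo to actually run it.
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@${host} /etc/init.d/photon-controller-agent restart"
done
```

Running the real ssh commands will prompt for the VMware1 password once per host.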

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1 From your browser Select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1 Execute the following command

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin with password VMware1.

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.
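
For reference, here is the same command with concrete placeholder values filled in. This is a dry run that only prints the invocation, and every address, netmask, and UUID below is a hypothetical example, not the lab's:

```shell
# Dry run: print a fully filled-in cluster create invocation.
# All IPs, netmasks, and UUIDs are hypothetical example values.
echo photon cluster create -n Kube5 -k KUBERNETES \
  --dns "10.0.0.2" --gateway "10.0.0.1" --netmask "255.255.255.0" \
  --master-ip "10.0.0.10" --container-network "10.2.0.0/16" \
  --etcd1 "10.0.0.11" -w "network-uuid" -s 5
```

Here -s 5 would request five Worker nodes; the Master and etcd VMs get the static IPs, while Workers pick theirs up from DHCP as described above.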

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is great for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute


cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1 Click on the 3 dots and select View Details to see what you have deployed


Your Running Pods

You can see the Replication Controller is maintaining three Replicas. They each have their own internal IP and are running on the two Nodes. Three Replicas is not particularly useful given that we have only two Nodes, but the concept is valid. Explore the logs if you are interested.
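
The two things the Replication Controller gives you, healing and scaling, can also be exercised from the CLI VM. A dry-run sketch that only prints the kubectl commands; the pod name is hypothetical (take a real one from kubectl get pods), and nginx-demo is assumed to match the controller name shown in the UI:

```shell
# Dry run: print the healing/scaling commands rather than executing them.
pod="nginx-demo-abc12"   # hypothetical pod name; use one from `kubectl get pods`
echo "kubectl delete pod ${pod}"                  # the RC starts a replacement pod
echo "kubectl scale rc nginx-demo --replicas=4"   # raise the desired replica count
```

Deleting a pod is exactly what the "kill an instance" step later in this module does; the controller converges the cluster back to the desired count either way.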

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will stop the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host.

2 If you get this page just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI. Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
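
The port map you just configured in the Rancher UI is the same host:container mapping you used with docker run -p in Module 2. A dry-run sketch of the equivalent command line, using this lab's image and the 2000:80 mapping from the step above:

```shell
# Rancher's UI port map corresponds to docker's -p host_port:container_port flag.
host_port=2000        # port exposed on the Rancher Host VM
container_port=80     # port Nginx listens on inside the container
echo "docker run -d -p ${host_port}:${container_port} 192.168.120.20:5000/nginx"
```

This is why, once the container is up, the webserver answers on the host's IP at port 2000 rather than 80.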


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts

Persistent disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store web content for Nginx. Web content stored in an individual container is static; it must be manually updated, or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist data across containers. Photon Platform persistent disks extend that capability across Docker hosts.


Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.
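If you are scripting this rather than retrying by hand, the wait can be expressed as a simple polling loop. In the sketch below the `photon vm networks` call is replaced by a stub (`fake_networks`) so the loop's shape is visible; the IP address and the number of polls are hypothetical placeholders.

```shell
# Stub standing in for `photon vm networks <UUID>`; it pretends the
# (hypothetical) IP shows up on the third poll, as the metadata would.
fake_networks() {
  if [ "$1" -ge 3 ]; then echo "192.168.100.150"; else echo "-"; fi
}

ip=""
for attempt in 1 2 3 4 5 6; do
  out=$(fake_networks "$attempt")  # in the lab: photon vm networks <UUID of lab-vm1>
  if [ "$out" != "-" ]; then ip="$out"; break; fi
  # in the lab you would sleep ~10s between polls
done
echo "got IP: $ip"
```

In the lab itself you would swap the stub for the real `photon vm networks` call and add a short sleep between polls.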


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with the Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it <first 3 chars of containerID> bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, the one containing Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
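If you would rather not use vi, the same one-word change can be made non-interactively with sed. This is only an illustrative sketch: it creates a stand-in file under /tmp, since the real /mnt/dockervolume/index.html exists only inside the lab VM and its actual markup will differ.

```shell
# Create a stand-in for the copied index.html (the real file differs).
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html

# Replace the text in place, as the vi cw edit did.
sed -i 's/nginx!/the Hands On Lab At VMWORLD 2016!/' /tmp/index.html
cat /tmp/index.html
```

On the lab VM you would point sed at /mnt/dockervolume/index.html instead of the /tmp stand-in.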


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
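If you want to avoid copying UUIDs by hand, the listing can be filtered with awk. The table below is a hypothetical example of the `photon vm list` output format, and the UUIDs are made up for illustration; in the lab you would capture the real output with `listing=$(photon vm list)`.

```shell
# Hypothetical `photon vm list` output (UUIDs are placeholders).
listing='ID                                    Name     State
11111111-2222-3333-4444-555555555555  lab-vm1  STARTED
66666666-7777-8888-9999-000000000000  lab-vm2  STOPPED'

# Print the ID column for the row whose Name column matches.
uuid=$(echo "$listing" | awk '$2 == "lab-vm1" {print $1}')
echo "$uuid"
```

The extracted value can then be passed straight to commands like photon vm detach-disk.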

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, /usr/share/nginx/html, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
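The whole clean-up sequence, in order, looks like this. In the sketch the photon command is stubbed to print instead of execute, and the UUIDs are placeholders; note that only lab-vm2 needs the detach-disk step, since the disk was already detached from lab-vm1 earlier.

```shell
photon() { echo "photon $*"; }   # stub: print each call instead of running it

log=$(
  photon vm stop        UUID-OF-LAB-VM2
  photon vm detach-disk UUID-OF-LAB-VM2 --disk UUID-OF-DISK
  photon vm delete      UUID-OF-LAB-VM2
  photon vm stop        UUID-OF-LAB-VM1   # disk already detached from lab-vm1
  photon vm delete      UUID-OF-LAB-VM1
)
echo "$log"
```

In the lab you would drop the stub function and substitute the real UUIDs from photon vm list and photon disk list.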


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the Photon Controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the Photon Controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case, we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case, the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional, platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
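To give a sense of the shape of such a file, the following is a hedged sketch of what a replication controller definition like nginx-rc.yaml might contain for this app. The lab's actual files may differ in names and details; the image path here matches the local registry used earlier in the lab.

```yaml
# Hedged sketch only - the lab's real nginx-rc.yaml may differ.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                  # maintain 3 copies of the pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # local registry image
        ports:
        - containerPort: 80
```

The replicas, selector and pod template are the pieces the Replication Controller uses to keep 3 copies of the Nginx pod running.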


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab, we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.

2. Cut the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Deploy Nginx Web Server

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1.

1. To find the VM UUID, execute:

photon vm list

2. To start lab-vm1, execute:

photon vm start <UUID of lab-vm1>

3. To find the VM IP for lab-vm1, execute:

photon vm networks <UUID of lab-vm1>

Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller metadata and appear in this command. Keep trying, or log into vCenter and grab the IP from there.


Connect to lab-vm1

1. From the CLI, execute:

ssh root@<IP of lab-vm1>    (the password is VMware1)


Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
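The lab does not show the script's contents. As a rough sketch, a typical format-and-mount sequence for a fresh disk might look like the following; this is a dry run that only prints the commands (formatting /dev/sdb is destructive), and the ext4 filesystem type is an assumption.

```shell
# Hypothetical sketch of what mount-disk-lab-vm1.sh may do.
# Dry run: the steps are printed, not executed; ext4 is an assumption.
MOUNT_STEPS="mkfs.ext4 /dev/sdb
mkdir -p /mnt/dockervolume
mount /dev/sdb /mnt/dockervolume"
printf '%s\n' "$MOUNT_STEPS"
```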

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.


1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v flag says to create a Docker volume in the container, called /volume, that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached, in the background, until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, 192.168.120.20:5000/nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
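The registry-qualified image name in the command above can be thought of as three parts joined together. A small sketch, using the lab's registry address and image name:

```shell
# Assemble the registry-qualified image name (values from the lab).
REGISTRY_HOST=192.168.120.20
REGISTRY_PORT=5000
IMAGE=nginx
FULL_IMAGE="${REGISTRY_HOST}:${REGISTRY_PORT}/${IMAGE}"
echo "$FULL_IMAGE"   # 192.168.120.20:5000/nginx
```

A plain image name like nginx would instead be resolved against the default public Docker Hub registry, which this lab cannot reach.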


Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "<first 3 chars of containerID>" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.
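If you prefer not to use vi, the same change can be made non-interactively with sed. This is a sketch against a sample file; the exact heading text on line 14 of your index.html may differ.

```shell
# Non-interactive alternative to the vi steps above (sketch).
# A sample heading stands in for the real Nginx home page:
mkdir -p /tmp/dockervolume
printf '%s\n' '<h1>Welcome to nginx!</h1>' > /tmp/dockervolume/index.html

# Replace the word "nginx" in the heading, as steps 2-6 do in vi:
sed -i 's/nginx/Hands On Lab At VMWORLD 2016/' /tmp/dockervolume/index.html
cat /tmp/dockervolume/index.html
```

On the lab VM you would run the sed command against /mnt/dockervolume/index.html instead of the sample file.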


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


A reminder that you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>    (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached, in the background, until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx webserver on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
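The steps above can be sketched as a small shell function. This is a dry run that only prints the photon commands it would run (remove the echo to execute them); the UUID arguments are placeholders for the values from photon vm list and photon disk list.

```shell
# Cleanup sequence for one VM (dry run: commands are echoed, not executed).
# Pass an empty disk UUID for VMs with no attached disk (steps 2 and 4 only).
cleanup_vm() {
  local vm_uuid="$1" disk_uuid="$2"
  echo photon vm stop "$vm_uuid"
  if [ -n "$disk_uuid" ]; then
    echo photon vm detach-disk "$vm_uuid" --disk "$disk_uuid"
  fi
  echo photon vm delete "$vm_uuid"
}

cleanup_vm "uuid-of-lab-vm2" "uuid-of-disk"
cleanup_vm "uuid-of-lab-vm1" ""
```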


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201    (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
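Steps 2-5 above can be expressed as a loop over both ESXi hosts. This is a dry run that prints the ssh commands rather than executing them, since each one will prompt for the root password.

```shell
# Restart the photon controller agent on both ESXi hosts (dry run: the
# ssh commands are collected and printed, not executed).
RESTART_CMDS=""
for host in 192.168.110.201 192.168.110.202; do
  RESTART_CMDS="${RESTART_CMDS}ssh root@${host} /etc/init.d/photon-controller-agent restart
"
done
printf '%s' "$RESTART_CMDS"
```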


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one master VM and 2 worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. Each has its own internal IP, and they are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different from the lab manual port number; the IP will be the same.
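The address you type is just the node IP plus the service's exposed port. As a quick sanity check, the URL can be composed in the shell; both values below are examples (use the External endpoint port you noted in the UI):

```shell
# Compose the application URL from a worker node IP and the service port.
# Both values are examples from the lab text; yours may differ.
NODE_IP=192.168.100.176
NODE_PORT=30080   # placeholder -- substitute the External endpoint port you noted
echo "http://${NODE_IP}:${NODE_PORT}"
```

From the CLI VM, a curl against this URL would return the same page the browser shows.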

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides the higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will run command number 885 stored in the Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
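Private-registry image references are formed by prefixing the image name with the registry's IP and port. A quick sketch using the lab's registry address (the image name here is illustrative):

```shell
# Image names for a private registry take the form <registry-ip>:<port>/<image>.
# The registry address is the one used by this lab.
REGISTRY="192.168.120.20:5000"
IMAGE="${REGISTRY}/nginx"
echo "$IMAGE"
```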

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration tokens are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

Connect to lab-vm1

1. From the CLI, execute:

ssh root@IP-of-lab-vm1 (the password is VMware1)

Setup filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you.

1. To set up the filesystem, execute:

mount-disk-lab-vm1.sh

2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the persistent disk you previously created.
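Based on the description above, the script's effect can be sketched as follows. This is a hypothetical reconstruction (the real script's contents may differ), written as a dry run that prints each step instead of executing it:

```shell
# Hypothetical sketch of what mount-disk-lab-vm1.sh does, per the lab text.
# Dry run: each step is printed, not executed.
run() { echo "+ $*"; }
run mkfs.ext4 /dev/sdb                 # format the attached persistent disk
run mkdir -p /mnt/dockervolume
run mount /dev/sdb /mnt/dockervolume   # mount where Docker will map the volume
```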

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container, called /volume, that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.

Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx home page.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it first3CharsOfContainerID bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:

df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1:

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID-of-lab-vm1 --disk UUID-of-disk-2

A reminder that you can get the UUID of the VM with the photon vm list command, and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID-of-lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID-of-lab-vm2

Note: you may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@IP-of-lab-vm2 (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20 port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx home page on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx home page.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID-of-lab-vm2

3. Execute:

photon vm detach-disk UUID-of-lab-vm2 --disk UUID-of-disk

4. Execute:

photon vm delete UUID-of-lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
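The clean-up above can be expressed as a loop over both VMs. This is a dry-run sketch (commands are printed, not executed; the UUIDs are placeholders), and it omits the detach-disk step, which applies only to lab-vm2:

```shell
# Dry-run sketch of the clean-up: stop and delete each VM.
# UUIDs are placeholders; detach the disk from lab-vm2 beforehand.
run() { echo "+ $*"; }
for VM_UUID in "UUID-of-lab-vm2" "UUID-of-lab-vm1"; do
  run photon vm stop "$VM_UUID"
  run photon vm delete "$VM_UUID"
done
```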

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
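The restart procedure above can also be written as a loop over both ESXi hosts (IPs from the lab text). This is a dry-run sketch: the ssh commands are printed rather than executed.

```shell
# Dry run of the agent-restart procedure, looped over both lab ESXi hosts.
run() { echo "+ $*"; }
for HOST in 192.168.110.201 192.168.110.202; do
  run ssh "root@${HOST}" /etc/init.d/photon-controller-agent restart
done
```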

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID-of-your-network -i UUID-of-your-PhotonOS-image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface, and how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubemasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
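As a rough sketch of that flow (the repository URL, build target, and KUBERNETES_PROVIDER value are assumptions based on the photon-controller provider scripts that shipped in the Kubernetes tree at the time; the lab's exact steps may differ):

```
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release                 # build Kubernetes from source

# Tell kube-up to use the Photon Platform deployment scripts
KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh
```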

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
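The lab's files are the authority for what actually gets deployed, but to give a feel for what you will see when you cat them, a minimal Pod/Service/Replication Controller trio for an nginx app might look roughly like this (all names, labels, and the registry address are illustrative assumptions, not the lab's actual file contents):

```yaml
# nginx-pod.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: 192.168.120.20:5000/nginx
    ports:
    - containerPort: 80
---
# nginx-service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort            # exposes an external port on each node
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
# nginx-rc.yaml (sketch)
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3               # the 3 copies described above
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
```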

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
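Once the three objects are created, you can confirm them from the same CLI VM. A quick sketch (the object names depend on what the lab's yaml files declare):

```
kubectl get pods        # the nginx pods, eventually in Running state
kubectl get rc          # the replication controller and its replica count
kubectl get services    # the service and the port it exposes
```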

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
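The module introduction mentioned scaling additional instances. With a Replication Controller that is a one-line operation; a sketch, assuming the controller is named nginx-demo as shown in the UI:

```
kubectl scale rc nginx-demo --replicas=5   # the RC starts two more pods
kubectl scale rc nginx-demo --replicas=3   # and reaps them again
```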

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.
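If you prefer the CLI VM to a browser, the same check can be sketched with curl (substitute the node IP and the port number you noted in the Kubernetes UI):

```
curl http://192.168.100.176:<port number>
```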

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
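For reference, the history entry replayed by !885 is a docker run of the Rancher Server image. A sketch of what such a command typically looks like, with the image tag and port mapping assumed from the registry note above and the UI port used later:

```
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```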

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.
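The pasted command will have roughly this shape. This is a sketch only; the agent version and registration token are specific to your Rancher Server, which is exactly why you must copy the command from the UI rather than from here:

```
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.0.2 https://192.168.120.20:8080/v1/scripts/<registration-token>
```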

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
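For comparison, what the form in steps 1-5 asks Rancher to run is essentially the same port-mapped container you started by hand with Docker in Module 2; a rough docker equivalent of this configuration would be:

```
docker run -d -p 2000:80 192.168.120.20:5000/nginx   # host port 2000 -> container port 80
```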

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

Setup filesystem

The storage device is attached to the VM however we still need to format the disk andmount the filesystem We have provided a script to execute these steps for you

1 To set up the filesystem Execute

mount-disk-lab-vm1sh

2 You will see that the device devsdb is mounted at mntdockervolume This isthe Persistent disk you previously created

Create The Nginx Container With Docker Volume

We will now create an Nginx container on our Docker host (lab-vm1) The container willhave a volume called volume that is mounted on mntdockervolume from the hostThis means that any changes to volume from the container will be persisted on our

physical persistent disk

HOL-1730-USE-2

Page 56HOL-1730-USE-2

1 To create the nginx container Execute

docker run -v mntdockervolumevolume -d -p 8080 192168120205000nginx

Lets look at this command docker run creates a container The -v says to create aDocker volume in the container that is mounted on mntdockervolume from the hostThe -d means to keep the container running until it is explicitly stopped The -p maps

container port 80 to port 80 on the host So you will be able to access the Nginx WebServer on port 80 from your browser Lastly nginx is the Docker image to use forcontainer creation Notice that the image is specified as IPportimage This is becausewe are using a local Docker registry and have tagged the image with the ip address andport of the registry

HOL-1730-USE-2

Page 57HOL-1730-USE-2

Verify Webserver Is Running

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm1 The IP may be different from the one in theimage above It is the same IP you used in the previous ssh command from the CLIThe

default http port is 80 so you do not need to enter it You should see the Nginxhomepage

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it Once wehave done that we will move the disk to a new VM Create a new container with DockerVolume and verify that the changes we made have persisted

1 Connect to your running container From the CLI you should still have have anssh connection to lab-vm1 Execute

docker exec -it ldquofirst3CharsOfcontainerIDrdquo bash

This command says to connect to the container through an interactive terminal and runa bash shell You should see a command prompt within the container If you cannot findyour containerID Execute docker ps to find it

2 To see the filesystem inside the container and verify your Docker volume(volume) Execute

HOL-1730-USE-2

Page 58HOL-1730-USE-2

df

3 We want to copy the Nginx home page to our Persistent disk Execute

cp usrsharenginxhtmlindexhtml volume

4 To Exit the container Execute

exit

Edit The Indexhtml

You will use the vi editor to make a change to the indexhtml page If you arecomfortable with vi and html then make whatever modifications you want These arethe steps for a very simple modification

1 Execute

vi mntdockervolumeindexhtml

2 Press the down arrow until you get to the line 14 with Welcome To Nginx

3 Press right arrow until you are at the character N in Nginx

4 Press the cw keys to change word and type the Hands On Lab At VMWORLD2016

5 Press the esc key and then key

6 At the prompt enter wq to save changes and exit vi

HOL-1730-USE-2

Page 59HOL-1730-USE-2

7 At the Linux Prompt Type exit to close the ssh session You are now back inthe Photon CLI

Detach The Persistent Disk

We now want to remove this disk from the VM Remember that detaching the disk doesnot delete it Detach the Persistent Disk from lab-vm1

1 To get the UUID of the lab-vm1 Execute

photon vm list

2 To get the UUID of the Persistent Disk Execute

photon disk list

3 Execute

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2

HOL-1730-USE-2

Page 60HOL-1730-USE-2

Reminder that you can get the UUID of the VM with photon vm list and the UUID of thedisk with photon disk list commands

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk ldquouuid of lab-vm12rdquo --disk ldquouuid of diskrdquo

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start UUID lab-vm2

2 To get the network IP of lab-vm2 Execute

photon vm networks UUID lab-vm2

HOL-1730-USE-2

Page 61HOL-1730-USE-2

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)
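The "wait for the IP" note above can also be scripted as a retry loop. This is a sketch with a generic wait_for helper (our own name, not a lab utility); in the lab the condition would be photon vm networks <UUID> piped through grep, demonstrated here with a local stand-in:

```shell
# wait_for MAX CMD...: retry CMD up to MAX times, sleeping 1s between tries.
wait_for() {
  max=$1; shift
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1
    sleep 1
  done
}

# Stand-in condition that succeeds on its third call (in the lab, something
# like: photon vm networks <UUID> | grep -q '[0-9][0-9]*\.').
STATE=$(mktemp)
echo 0 > "$STATE"
ip_ready() {
  calls=$(( $(cat "$STATE") + 1 ))
  echo "$calls" > "$STATE"
  [ "$calls" -ge 3 ]
}

wait_for 10 ip_ready && echo "IP is available"
```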


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.
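For reference, the script is doing roughly the following — an assumption about its contents, shown as a dry run (the run() stub prints instead of executing) so nothing gets formatted by accident. The lab-vm1 variant additionally reformats the device, which is why it would destroy your changes:

```shell
# Dry-run sketch of what mount-disk-lab-vm2.sh likely does (assumption).
run() { echo "+ $*"; }   # print instead of executing

run mkdir -p /mnt/dockervolume
run mount /dev/sdb /mnt/dockervolume
# mount-disk-lab-vm1.sh would also do something like:
#   run mkfs.ext4 /dev/sdb    # reformat -- destroys existing data
```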

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag means to keep the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
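The cleanup above can be sketched as a single sequence. Shown as a dry run — the run() stub prints each photon command instead of executing it, and the UUIDs are placeholders; only lab-vm2 still has the disk attached, hence the single detach:

```shell
# Dry-run sketch of the VM cleanup (UUIDs are placeholders).
run() { echo "+ $*"; }

VM2=uuid-of-lab-vm2
VM1=uuid-of-lab-vm1
DISK=uuid-of-disk-2

run photon vm stop "$VM2"
run photon vm detach-disk "$VM2" --disk "$DISK"   # only lab-vm2 has the disk
run photon vm delete "$VM2"

run photon vm stop "$VM1"
run photon vm delete "$VM1"
```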


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
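The two restarts collapse into a loop. Again a sketch shown as a dry run — the run() stub prints each command; replace it with plain execution in the lab (and expect a password prompt per host unless ssh keys are set up):

```shell
run() { echo "+ $*"; }   # dry-run stub; remove for real execution

for host in 192.168.110.201 192.168.110.202; do
  run ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```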

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1 Click on Dashboards

2 Click on Home

3 Click on New


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see that the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance, through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the master and worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KuberMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one master and 2 worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one master VM and 2 worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
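To give a flavor of what a file like nginx-rc.yaml contains, here is a minimal replication controller manifest in the Kubernetes v1 API of that era. It is a sketch — the names, labels and replica count are assumptions, not the lab's actual file; the image points at the lab's local registry:

```shell
# Write a sketch manifest to a temp file (assumed contents, v1 API).
RC=$(mktemp)
cat > "$RC" <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF
grep -c 'replicas' "$RC"
```

A manifest like this is deployed the same way as the lab's files: kubectl create -f <file>.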


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill <Container ID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with the Rancher server.

1. Note that the Custom icon is selected.

2. Cut the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container

1 Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To LogInsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To PhotonControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 57: Lab Overview - HOL-1730-USE-2

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag keeps the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the IP address and port of the registry.
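The flag-by-flag breakdown above can be sketched as a small shell snippet that assembles the command from named pieces. This is illustrative only: the command is printed rather than run, since the 192.168.120.20:5000 registry is only reachable inside the lab.

```shell
# Build the docker run command from named pieces so each flag is explicit.
# Printed rather than executed: the 192.168.120.20:5000 registry is lab-internal.
volume_map="/mnt/dockervolume:/volume"    # host path : container mount point
port_map="80:80"                          # host port : container port
image="192.168.120.20:5000/nginx"         # registry-IP:port/image-name

printf 'docker run -v %s -d -p %s %s\n' "$volume_map" "$port_map" "$image"
```

Spelling the flags out this way makes it easier to adapt the mapping later, as we do when the second container mounts the volume at a different path.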


Verify Webserver Is Running

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above. It is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command connects to the container through an interactive terminal and runs a bash shell. You should see a command prompt within the container. If you cannot find your container ID, Execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute:


df

3. We want to copy the Nginx home page to our Persistent Disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, Execute:

exit

Edit The Index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome To Nginx.

3. Press the right arrow until you are at the character N in Nginx.

4. Press the cw keys to change word, and type Hands On Lab At VMWORLD 2016.

5. Press the esc key and then the : key.

6. At the prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
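If you are not comfortable with vi, the same edit can be done non-interactively with sed. This is a sketch run against a throwaway copy of the file; in the lab the file is /mnt/dockervolume/index.html, and the exact heading text in your index.html may differ slightly.

```shell
# Non-interactive alternative to the vi steps above.
# Demonstrated on a throwaway file; in the lab you would edit /mnt/dockervolume/index.html.
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html

# Replace the word "nginx!" in place, just as cw replaced it in vi.
sed -i 's/nginx!/Hands On Lab At VMWORLD 2016!/' /tmp/index.html

cat /tmp/index.html
```

Note that `sed -i` edits the file in place on GNU sed (as found on the lab's Linux VMs); BSD sed requires `-i ''`.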

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, Execute:

photon vm list

2. To get the UUID of the Persistent Disk, Execute:

photon disk list

3. Execute:

photon vm detach-disk <UUID of lab-vm1> --disk <UUID of disk-2>


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.
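Rather than copying UUIDs by hand, you can capture them into shell variables. Below is a sketch using invented sample output, since the exact column layout of photon vm list may differ in your version; in the lab you would pipe the real command instead.

```shell
# Capture a VM's UUID into a variable instead of copy/pasting it.
# The sample text stands in for real `photon vm list` output (columns may differ).
sample="ID                                    Name     State
11111111-2222-3333-4444-555555555555  lab-vm1  STARTED
aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  lab-vm2  STOPPED"

# In the lab:  vm1_id=$(photon vm list | awk '$2 == "lab-vm1" {print $1}')
vm1_id=$(printf '%s\n' "$sample" | awk '$2 == "lab-vm1" {print $1}')
echo "$vm1_id"
```

The variable can then be reused, e.g. `photon vm detach-disk "$vm1_id" --disk "$disk_id"`, which avoids transcription mistakes with long UUIDs.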

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, Execute:

photon vm list

2. To attach the disk to lab-vm2, Execute:

photon vm attach-disk <UUID of lab-vm2> --disk <UUID of disk>

Start and Connect to lab-vm2

1. To start the VM lab-vm2, Execute:

photon vm start <UUID of lab-vm2>

2. To get the network IP of lab-vm2, Execute:

photon vm networks <UUID of lab-vm2>


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, Execute (the password is VMware1):

ssh root@<IP of lab-vm2>


Setup Filesystem

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag keeps the container running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1. Open one of the Web Browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser Bookmark from the Toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage, and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, Execute (the password is VMware1):

ssh root@192.168.110.201

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.
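The drill-down pattern — find the interesting error, then pivot on its RequestID — can be sketched with grep over plain log text. The log lines below are invented for illustration; real LogInsight entries look different.

```shell
# Sketch of the LogInsight drill-down using grep on invented sample log lines.
log="2016-08-01 10:01:02 [Req: 9d3f1a2b] VM create task started
2016-08-01 10:01:05 [Req: 4c7e8f90] ERROR RESERVE_RESOURCE failed: not enough memory
2016-08-01 10:01:05 [Req: 4c7e8f90] agent responded: placement failed
2016-08-01 10:01:06 [Req: 9d3f1a2b] task state -> ERROR"

# Step 1: find the RESERVE_RESOURCE error and pull out its RequestID.
req=$(printf '%s\n' "$log" | grep RESERVE_RESOURCE | sed 's/.*\[Req: \([0-9a-f]*\)\].*/\1/')
echo "RequestID: $req"

# Step 2: filter the whole log on that RequestID, like pasting it into the Filter field.
printf '%s\n' "$log" | grep "$req"
```

The second grep surfaces every entry tied to that request, including lines that never mention the original task.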


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), setup the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubemasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes Repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
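As a rough idea of what such a file contains, a Replication Controller definition looks something like the sketch below. This is illustrative only; the lab's actual nginx-rc.yaml, image name, and labels will differ.

```yaml
# Illustrative sketch of a Replication Controller definition (not the lab's file).
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # maintain 3 copies of the Pod
  selector:
    app: nginx-demo        # manage Pods carrying this label
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
```

The replicas count is what drives the behavior you will see later: if a Pod dies, the Replication Controller starts a replacement to get back to the desired number.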


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI. Either use Ctrl-v or right-click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Either right-click the mouse or press Ctrl-v, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name has the form IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.
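As an aside, the image name entered in step 2 follows Docker's registry-qualified naming convention: everything before the first slash is the registry endpoint, and the rest is the repository name. A quick shell illustration (the values are the lab's; the variable names are ours):

```shell
# Split a registry-qualified Docker image name into its two parts.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # everything before the first "/" -> registry host:port
repo="${image#*/}"        # everything after it             -> repository name
echo "registry=$registry repo=$repo"
```

This is why the bare name nginx would be pulled from Docker Hub, while 192.168.120.20:5000/nginx is pulled from the lab's local registry.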

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Verify Webserver Is Running

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above; it is the same IP you used in the previous ssh command from the CLI. The default HTTP port is 80, so you do not need to enter it. You should see the Nginx homepage.

Modify Nginx Home Page

We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker volume, and verify that the changes we made have persisted.

1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute:

docker exec -it "first3CharsOfContainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your container ID, execute docker ps to find it.

2. To see the filesystem inside the container and verify your Docker volume (/volume), execute:


df

3. We want to copy the Nginx home page to our persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and HTML, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with "Welcome to nginx".

3. Press the right arrow until you are at the character N in nginx.

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016.

5. Press the Esc key and then the : key.

6. At the : prompt, enter wq to save changes and exit vi.


7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
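As an aside, the same one-word edit could be scripted instead of done interactively in vi. This is a hedged sketch, not part of the lab: the /tmp path and the sample HTML line below are stand-ins, not the lab's actual file.

```shell
# Illustrative only: recreate a minimal index.html, then rewrite its welcome
# line non-interactively with sed, mirroring the vi edit in steps 1-6 above.
mkdir -p /tmp/dockervolume
echo '<h1>Welcome to nginx!</h1>' > /tmp/dockervolume/index.html
sed -i 's/nginx!/Hands On Lab At VMWORLD 2016!/' /tmp/dockervolume/index.html
cat /tmp/dockervolume/index.html
```

In the lab VM the real file lives at /mnt/dockervolume/index.html; scripting edits like this is handy when you repeat a lab often.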

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"


As a reminder, you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, execute:

photon vm networks "UUID of lab-vm2"


Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@"IP of lab-vm2"  (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to mount /mnt/dockervolume from the host as a Docker volume at /usr/share/nginx/html in the container. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage at the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:


photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.
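For context on what "pushing statistics to Graphite" means on the wire: Graphite typically ingests metrics as plaintext lines of the form metric-path value timestamp. A small sketch, with an illustrative metric name (not the exact path Photon emits):

```shell
# Build one line of Graphite's plaintext protocol: "<metric.path> <value> <unix-ts>".
# The metric path below is illustrative; Photon's real paths appear in the next step.
metric="photon.esxi-host1.cpu.usage"
value=42
ts=$(date +%s)
line="$metric $value $ts"
echo "$line"
# In a live environment, this line would be sent to Graphite's ingest port (2003), e.g.:
#   echo "$line" | nc <graphite-server> 2003
```

The dotted metric path is what produces the folder hierarchy you will browse in the Graphite UI below.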

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the Photon Controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the Photon Controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201  (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so the create will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
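For reference, a replication controller manifest of the kind described above typically looks like the following sketch; the names and image here are hypothetical, and the lab's actual nginx-rc.yaml may differ:

```yaml
# Hypothetical sketch of a replication controller for 3 nginx replicas.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    app: nginx-demo           # manage Pods carrying this label
  template:                   # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The Service file plays the load-balancer role described earlier by selecting Pods with the same label.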

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:port-number. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields

5. Click on the Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
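As an aside, the IP:port/image-name form from step 2 splits cleanly into a registry address and a repository name. A quick shell sketch (the address is the lab registry; the parsing is standard parameter expansion):

```shell
# Split a registry-qualified image name into its parts (illustrative).
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # text before the first "/": registry host:port
repo="${image#*/}"        # text after the first "/": repository name
echo "registry=$registry repo=$repo"
# prints: registry=192.168.120.20:5000 repo=nginx
```

Without the registry prefix, Docker would instead look for the image on its default public registry.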

Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


df

3. We want to copy the Nginx home page to our Persistent disk. Execute:

cp /usr/share/nginx/html/index.html /volume

4. To exit the container, Execute:

exit

Edit The index.html

You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification.

1. Execute:

vi /mnt/dockervolume/index.html

2. Press the down arrow until you get to line 14, with Welcome to nginx

3. Press the right arrow until you are at the character N in Nginx

4. Press the cw keys to change the word, and type Hands On Lab At VMWORLD 2016

5. Press the esc key and then the : key

6. At the prompt, enter wq to save changes and exit vi

7. At the Linux prompt, type exit to close the ssh session. You are now back in the Photon CLI.
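If you would rather not use vi, the same one-word change can be scripted with sed. A sketch, using a sample file here since the lab's page lives at /mnt/dockervolume/index.html:

```shell
# Create a sample index.html with the default nginx heading.
cat > index.html <<'EOF'
<h1>Welcome to nginx!</h1>
EOF
# Replace the heading text, as the vi cw steps above do manually.
sed -i 's/Welcome to nginx/Hands On Lab At VMWORLD 2016/' index.html
cat index.html
# prints: <h1>Hands On Lab At VMWORLD 2016!</h1>
```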

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1.

1. To get the UUID of lab-vm1, Execute:

photon vm list

2. To get the UUID of the Persistent Disk, Execute:

photon disk list

3. Execute:

photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"

Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1. To get the UUID of lab-vm2, Execute:

photon vm list

2. To attach the disk to lab-vm2, Execute:

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, Execute:

photon vm start "UUID of lab-vm2"

2. To get the network IP of lab-vm2, Execute:

photon vm networks "UUID of lab-vm2"

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, Execute:

ssh root@"IP of lab-vm2" (the password is VMware1)

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, Execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, Execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v says to mount /mnt/dockervolume from the host at /usr/share/nginx/html in the container. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, Execute docker ps and you will see the Docker Registry we are using.

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the Web Browsers on the desktop

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, Execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop "UUID of lab-vm2"

3. Execute:

photon vm detach-disk "UUID of lab-vm2" --disk "UUID of disk"

4. Execute:

photon vm delete "UUID of lab-vm2"

5. Repeat steps 2 and 4 for lab-vm1
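The UUID copy/paste in the steps above can also be scripted with command substitution. A sketch; the sample listing below is made up, and the real photon vm list column layout may differ:

```shell
# Made-up output shaped roughly like `photon vm list` (illustrative).
list="11111111-2222-4333-8444-555555555555  lab-vm1  STOPPED
66666666-7777-4888-9999-000000000000  lab-vm2  STOPPED"

# Pull lab-vm2's UUID out of the first column with awk.
vm_id=$(printf '%s\n' "$list" | awk '$2 == "lab-vm2" {print $1}')
echo "$vm_id"
# prints: 66666666-7777-4888-9999-000000000000
```

With the UUID in a variable, the stop/detach/delete steps can then be run as, e.g., photon vm stop "$vm_id".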

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite Server by opening a browser

2. Select the Graphite Browser Bookmark from the Toolbar

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi Hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage

2. Expand mem and select usage

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty

2. From the PhotonControllerCLI, Execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards

2. Click on Home

3. Click on New

Add A Panel

1. Select the Green tab

2. Add Panel

3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon

2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w "UUID of your Network" -i "UUID of your PhotonOS image"

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter Field

3. Change the Time Range to Last Hour of Data

4. Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many ReqIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE

2. Find the RequestID and paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial Task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KuberMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h
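As a sketch only, the flags described above can be assembled like this. Every value below is a hypothetical placeholder (substitute your own lab's IPs and network UUID), and the command is echoed rather than executed, since the lab does not have capacity for another cluster:

```shell
# All values below are hypothetical placeholders, not real lab settings.
DNS="10.1.0.2"; GATEWAY="10.1.0.1"; NETMASK="255.255.255.0"
MASTER_IP="10.1.0.10"            # static IP for the Kubernetes Master VM
ETCD_IP="10.1.0.11"              # static IP for the etcd VM
NET_UUID="uuid-of-demo-network"  # from `photon network list`

# Echo (do not run) the full create command, one flag per concern.
echo photon cluster create -n Kube5 -k KUBERNETES \
  --dns "$DNS" --gateway "$GATEWAY" --netmask "$NETMASK" \
  --master-ip "$MASTER_IP" --etcd1 "$ETCD_IP" \
  --container-network "10.2.0.0/16" \
  -w "$NET_UUID" -s 5
```

Breaking the flags onto separate lines like this makes it easier to see which values are static IPs and which come from Photon Platform objects.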


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This gives you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and two Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and two Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
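The exact contents of the lab's files are in the VM. As a rough sketch only (field names from the Kubernetes v1 API; the name, labels, and image here are assumptions and may differ from the lab's actual nginx-rc.yaml), a replication controller spec for three nginx replicas looks something like this, written out with a heredoc:

```shell
# Sketch only -- the lab's real nginx-rc.yaml may use different names/labels.
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # desired number of Pod copies
  selector:
    app: nginx-demo        # Pods matching this label are counted
  template:                # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # image from the lab's local registry
        ports:
        - containerPort: 80
EOF
cat /tmp/nginx-rc-sketch.yaml
```

The key idea is the split between desired state (replicas: 3 plus a selector) and the Pod template used to stamp out each copy.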


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
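The tag follows Docker's registry naming convention, registry-host:port/repository. As a small sketch (the repository name below is a hypothetical example, not necessarily the lab's exact image name):

```shell
# Illustrates Docker's registry naming scheme: registry-host:port/repository.
REGISTRY="192.168.120.20:5000"   # the lab's local registry (IP:port)
REPO="rancher/server"            # hypothetical repository name for illustration
echo "$REGISTRY/$REPO"           # the full name Docker uses to pull the image
```

When an image name is prefixed this way, Docker pulls from that registry instead of Docker Hub, which is why the lab works without an external internet connection.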


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



7. At the Linux prompt, type exit to close the SSH session. You are now back in the Photon CLI.

Detach The Persistent Disk

We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the persistent disk from lab-vm1.

1. To get the UUID of lab-vm1, execute:

photon vm list

2. To get the UUID of the persistent disk, execute:

photon disk list

3. Execute:

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1. To get the UUID of lab-vm2, execute:

photon vm list

2. To attach the disk to lab-vm2, execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

Start and Connect to lab-vm2

1. To start the VM lab-vm2, execute:

photon vm start UUID of lab-vm2

2. To get the network IP of lab-vm2, execute:

photon vm networks UUID of lab-vm2


Note: you may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2> (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, so it keeps running in the background until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry we are using.
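As a side-by-side sketch of that same command line, the pieces can be named individually. Nothing is started here; only the command string is assembled and printed:

```shell
# Assemble (but do not run) the same docker command, one piece at a time.
IMAGE="192.168.120.20:5000/nginx"                  # image from the local registry
VOLUME="/mnt/dockervolume:/usr/share/nginx/html"   # host-path:container-path
PORTS="80:80"                                      # host-port:container-port

printf 'docker run -v %s -d -p %s %s\n' "$VOLUME" "$PORTS" "$IMAGE"
```

Both -v and -p follow the same host:container ordering, which is the detail most worth remembering when reading docker run lines.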


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop UUID of lab-vm2

3. Execute:

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will SSH into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2 through 4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1 From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1 Click on Interactive Analytics

2 Paste the Task ID into the Filter field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5 In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and paste it into the Filter field

Your log files will be slightly different but you should see something similar

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "platform 2" kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1 To see the command syntax Execute

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1 Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1 From the CLI VM Execute

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1 Execute

cat ~/demo-nginx/nginx-pod.yaml

2 Execute

cat ~/demo-nginx/nginx-service.yaml

3 Execute

cat ~/demo-nginx/nginx-rc.yaml
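As an illustration of what such a file contains, a replication controller for three Nginx replicas typically looks like the following minimal sketch. The name, labels, and image shown here are assumptions for illustration, not the contents of the lab's actual nginx-rc.yaml:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo            # assumed name; check the lab's actual file
spec:
  replicas: 3                 # maintain three copies of the Pod
  selector:
    app: nginx-demo           # Pods matching this label are managed
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # the lab's local registry image
        ports:
        - containerPort: 80
```

The replicas count is the desired state; Kubernetes continuously reconciles the number of running Pods against it, which is what enables the restart-on-failure behavior you will test later.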

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1 To deploy the pod Execute

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2 To deploy the service Execute

kubectl create -f ~/demo-nginx/nginx-service.yaml

3 To deploy the Replication Controller Execute

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your web browser and enter https://192.168.100.175/ui  If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ  (Sorry about the randomly generated password.) You may get an invalid certificate authority error. Click on Advanced and Proceed to the site.

nginx-demo is your application

2 Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
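If you want to experiment with scaling yourself, a replication controller can be resized from the CLI VM with kubectl. This is a sketch only; the controller name nginx-demo is an assumption based on the application name shown in the UI, and the lab's constrained resources may limit how far you can scale:

```shell
# Ask the replication controller for 5 copies instead of 3 (name assumed)
kubectl scale rc nginx-demo --replicas=5

# Watch Kubernetes converge on the new desired count
kubectl get pods
```

Scaling back down works the same way with a smaller --replicas value; the controller deletes surplus Pods to match the desired state.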

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1 From your browser, connect to http://192.168.100.176:<port number>  Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and click on the PhotonControllerCLI link
2 Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1 Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2 Execute docker kill <ContainerID>  This will remove the existing Rancher Server container.

3 Execute !885  This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000  This is the local Docker registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1 Execute ssh root@192.168.100.201  The password is vmware
2 Execute rm -rf /var/lib/rancher/state
3 Execute docker rm -vf rancher-agent
4 Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20  You can connect from your browser at https://192.168.120.20:8080  Rancher hosts are VMs running Docker; this is where application containers will be deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2 If you get this page just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1 Note that the Custom icon is selected
2 Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1 Execute: either right-click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button
2 Click on Infrastructure and Hosts
3 This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name  Enter 192.168.120.20:5000/nginx

3 This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4 We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5 Click on Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
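The container definition you just filled in through the UI corresponds roughly to a plain Docker run command like the following sketch. The container name here is an assumption; the image and port mapping are the values from the steps above:

```shell
# Roughly what Rancher runs on the host for this single-container service:
# host port 2000 -> container port 80, image pulled from the lab's local registry
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```

The difference is that Rancher tracks this container as a managed service, so it can report status, show performance charts, and restart it, rather than leaving it as an anonymous container on the host.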

Container Information

1 Once your container is running, check out the performance charts.

2 Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1 From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


Reminder: you can get the UUID of the VM with the photon vm list command and the UUID of the disk with the photon disk list command.

Attach The Persistent Disk To New VM

You will attach the persistent disk to the lab-vm2 VM you created earlier

1 To get the UUID of lab-vm2 Execute

photon vm list

2 To attach the disk to lab-vm2 Execute

photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk"

Start and Connect to lab-vm2

1 To start the VM lab-vm2 Execute

photon vm start <UUID of lab-vm2>

2 To get the network IP of lab-vm2 Execute

photon vm networks <UUID of lab-vm2>

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere client and get it there.

3 From the CLI execute

ssh root@<IP of lab-vm2>  The password is VMware1

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1 To set up the filesystem Execute

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1 To create the nginx container Execute

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI type exit

Let's look at this command. docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached, in the background, until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20 port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.
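If you are still connected to lab-vm2, you can sanity-check the port mapping before moving to the browser. This is an optional sketch, assuming curl is present in the lab VM:

```shell
# Fetch the homepage through the host port mapping; you should see
# the modified index.html served from the persistent disk volume
curl -s http://localhost:80/ | head
```

Seeing your edited page here confirms the Docker volume is backed by the persistent disk rather than the container's own filesystem.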

Verify That Our New Webserver Reflects Our Changes

You should see the New Nginx homepage on the IP of lab-vm2

1 Open one of the Web Browsers on the desktop

2 Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1 To delete a VM Execute

photon vm list

note the UUIDs of the two VMs

2 Execute

photon vm stop <UUID of lab-vm2>

3 Execute

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4 Execute

photon vm delete <UUID of lab-vm2>

5 Repeat steps 2 and 4 for lab-vm1

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
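The restart steps can be condensed into a loop, sketched below. The host IPs are the lab's two ESXi hosts; you will still be prompted for the root password on each ssh connection.

```shell
# Restart the photon-controller-agent on each ESXi host in turn.
restart_agents() {
  for host in 192.168.110.201 192.168.110.202; do
    ssh root@"$host" /etc/init.d/photon-controller-agent restart
  done
}
# restart_agents   # run from the PhotonControllerCLI VM
```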

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.
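The failing create can be wrapped for repeat runs, sketched below. The --disks quoting is a reconstruction of the garbled manual text, and the UUID placeholders are assumptions you must fill from your own photon network and image listings.

```shell
# Deliberately oversized create: the cluster-master-vm flavor asks for 8GB of
# memory, which the lab's cloud hosts cannot supply, so the task errors out.
create_oversized_vm() {
  photon vm create --name lab-vm1 --flavor cluster-master-vm \
    --disks "disk-1 cluster-vm-disk boot=true" \
    -w "<UUID of your network>" -i "<UUID of your PhotonOS image>"
}
# create_oversized_vm   # note the Task ID printed in the failure output
```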

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional "platform 2" environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.
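For orientation, here is an illustrative replication controller definition of the same general shape as nginx-rc.yaml, written as a heredoc. The name, labels, and replica count here are assumptions for illustration, not the lab's actual file; cat the real files in the steps that follow.

```shell
# Write a minimal replication controller sketch to /tmp so its fields can be inspected.
cat <<'EOF' > /tmp/nginx-rc-sketch.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF
```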

To look at these configuration files

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
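After the three creates, a few read-only kubectl checks confirm what was deployed. This is a sketch to run on the CLI VM; the exact output is not captured from this lab.

```shell
# Group the checks in one helper: pods, replication controllers, and services.
verify_nginx_demo() {
  kubectl get pods   # the replicated nginx pods
  kubectl get rc     # the replication controller and its replica counts
  kubectl get svc    # the frontend service and its endpoint
}
# verify_nginx_demo
```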

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see that the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
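If you want to experiment with the replica count, kubectl can scale the controller directly. This is a sketch; nginx-demo is assumed to be the replication controller's name, so check kubectl get rc first.

```shell
# Scale the replication controller to the requested number of replicas.
# Usage: scale_nginx <replica-count>
scale_nginx() {
  kubectl scale rc nginx-demo --replicas="$1"
}
# scale_nginx 2   # run on the CLI VM
```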

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID of the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
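For reference, history entry 885 does something along these lines. This is a sketch; the exact flags and the rancher/server path under the local registry are assumptions, not captured from the lab's history.

```shell
# Start a fresh Rancher Server container from the lab's local registry.
start_rancher_server() {
  docker run -d -p 8080:8080 192.168.120.20:5000/rancher/server
}
# start_rancher_server
```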

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state
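Steps 2 through 4 above can be sketched as one helper to run on the Rancher host after you ssh in.

```shell
# Remove stale Rancher agent state and containers so a new agent can register.
clean_rancher_host() {
  rm -rf /var/lib/rancher/state
  docker rm -vf rancher-agent
  docker rm -vf rancher-agent-state
}
# clean_rancher_host   # run as root on 192.168.100.201
```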

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
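The same page can be fetched from a shell. This is a sketch; run the echoed command anywhere that can reach the Rancher host.

```shell
# Build the curl command for the mapped host port.
page_cmd="curl -s http://192.168.100.201:2000/"
echo "$page_cmd"
# Running the echoed command should return the default Nginx welcome page HTML.
```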

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
Page 62: Lab Overview - HOL-1730-USE-2

Note: You may have to wait a minute or two for the IP to appear. If you are impatient, you can open the vSphere Client and get it there.

3. From the CLI, execute:

ssh root@<IP of lab-vm2>    (the password is VMware1)


Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its configuration files, so our changed home page on the persistent disk will be used as the default page.

1. To create the Nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.


Let's look at this command. docker run creates a container. The -v flag says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached (in the background) until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker registry we created on 192.168.120.20, port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker registry we are using.


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:


photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step "View Graphite Data Through Grafana".

You will SSH into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Log in to the PhotonControllerCLI VM through PuTTY.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201    (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards

2. Click on Home

3. Click on New


Add A Panel

1. Select the green tab

2. Add Panel

3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter field

3. Change the time range to Last Hour of Data

4. Click the Search icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, and up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the master and worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
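As a rough sketch, the workflow just described looks like the following. The repo URL, build target, and the KUBERNETES_PROVIDER value are assumptions based on the photon-controller provider scripts that shipped in the Kubernetes cluster/ directory at the time; your environment may differ, so treat this as illustrative rather than exact.

```shell
# Sketch only: clone and build Kubernetes, then bring a cluster up
# on Photon Platform using the provider-specific deployment scripts.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release

# KUBERNETES_PROVIDER selects the cluster/photon-controller scripts
# (assumed provider name; check the cluster/ directory in your checkout).
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

This cannot run outside a prepared environment; it simply makes the clone / build / kube-up sequence from the text concrete.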

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one master and two worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one master VM and two worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy three replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through three YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
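The lab manual does not reproduce the file contents, but a minimal Replication Controller definition for an app like this might look roughly as follows. The name, labels, and image path here are illustrative assumptions (the image path matches the lab's local registry mentioned earlier); cat the actual files above to see the lab's real configuration.

```yaml
# Hypothetical sketch of an nginx-rc.yaml for this lab, not the actual file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3                    # maintain three copies of the Pod
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # lab's local Docker registry
        ports:
        - containerPort: 80
```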


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.
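If you want to exercise this behavior from the CLI VM rather than the UI, commands along these lines would do it. The Replication Controller name nginx-demo is an assumption (it matches the application name shown in the UI), and the pod name is a placeholder; check kubectl get rc and kubectl get pods for the real names.

```shell
# List the replicated pods and the nodes they landed on.
kubectl get pods -o wide

# Scale the Replication Controller up or down (name assumed: nginx-demo).
kubectl scale rc nginx-demo --replicas=2

# Delete one pod; the Replication Controller starts a replacement.
kubectl delete pod <one-of-your-pod-names>
kubectl get pods
```

These commands require the lab's cluster, so they are shown for reference only.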

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port-number>. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open PuTTY from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the PuTTY session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
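For reference, the port mapping you just configured in the Rancher UI corresponds to what you did with plain Docker in Module 2, roughly like this (illustrative only; Rancher issues the equivalent on the host for you):

```shell
# Hypothetical docker equivalent of the Rancher UI settings above:
# host port 2000 mapped to container port 80, image pulled from the
# lab's local registry.
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```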


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To LogInsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 63: Lab Overview - HOL-1730-USE-2

Setup Filesystem

The storage device is attached to the VM; however, we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM. mount-disk-lab-vm1.sh will reformat the disk, and you will not see the changes you made.

1. To set up the filesystem, execute:

mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.
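For reference, here is a dry-run sketch of what such a mount script typically does. The device and mount point come from the lab text above; the real script's contents are not reproduced in this manual, so treat the details as an assumption. The commands are printed rather than executed:

```shell
# Hypothetical dry-run sketch of what mount-disk-lab-vm2.sh likely does.
# Device and mount point are taken from the lab text; the real script
# may differ.
DEVICE=/dev/sdb
MOUNTPOINT=/mnt/dockervolume

echo "mkdir -p $MOUNTPOINT"        # ensure the mount point exists
echo "mount $DEVICE $MOUNTPOINT"   # mount the already-formatted disk
# The lab-vm1 variant presumably also runs a mkfs step first, which is
# exactly why it must not be used here: it would wipe the saved page.
```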

Create The New Nginx Container

We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its content files, so our changed home page on the persistent disk will be used as the default page.

1. To create the nginx container, execute:

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx

To return to the Photon CLI, type exit.

Let's look at this command. docker run creates a container. The -v flag creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag runs the container detached in the background, so it keeps running until it is explicitly stopped. The -p flag maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20 port 5000. Extra credit: from the CLI, execute docker ps and you will see the Docker Registry container we are using.
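The flag breakdown above can be restated piece by piece. This sketch simply reprints the same command with each flag annotated; it echoes rather than runs, so no container is started:

```shell
# The run command from above, rebuilt from annotated pieces.
VOLUME="-v /mnt/dockervolume:/usr/share/nginx/html"  # host dir -> nginx docroot in the container
DETACH="-d"                                          # run detached (in the background)
PORTS="-p 80:80"                                     # host port 80 -> container port 80
IMAGE="192.168.120.20:5000/nginx"                    # nginx image from the lab's local registry
echo "docker run $VOLUME $DETACH $PORTS $IMAGE"
```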

Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

Note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
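Steps 1-5 above can be collected into a small helper. This is a sketch, not part of the lab: the UUIDs are placeholders, and the helper prints the photon commands for review instead of running them.

```shell
# Hypothetical helper: emit the stop/detach/delete commands for one VM.
# It prints rather than executes, so nothing is deleted by accident;
# pipe the output to sh once the UUIDs are verified.
cleanup_vm() {
  vm_uuid=$1
  disk_uuid=${2:-}
  echo "photon vm stop $vm_uuid"
  if [ -n "$disk_uuid" ]; then
    echo "photon vm detach-disk $vm_uuid --disk $disk_uuid"
  fi
  echo "photon vm delete $vm_uuid"
}

cleanup_vm "VM2-UUID" "DISK-UUID"   # lab-vm2 still has the disk attached
cleanup_vm "VM1-UUID"               # lab-vm1: steps 2 and 4 only
```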

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.

1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip ahead to the step "View Graphite Data Through Grafana".

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
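The restart steps above, one per ESXi host, can be sketched as a loop. This prints the per-host commands rather than executing them (the host IPs are the two lab ESXi hosts from the text):

```shell
# Dry-run sketch: one restart command per ESXi host. Remove the echo
# to actually run the ssh commands from the PhotonControllerCLI VM.
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```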

View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the data source used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.

2. Select "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx web server application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and then deploy an Nginx web server onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h
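The -w flag in the example takes a network UUID. A small sketch of pulling that UUID out of captured photon network list output, assuming the UUID is the first whitespace-delimited column of the matching row (the real column layout may differ):

```shell
# Hypothetical: parse a captured `photon network list`-style table.
# The sample text below is made up; substitute the real command's output.
sample='ID                                    Name          State
11111111-2222-3333-4444-555555555555  demo-network  READY'

NET_UUID=$(echo "$sample" | awk '/demo-network/ {print $1}')
echo "$NET_UUID"   # the UUID to pass to -w
```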

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx web server with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
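If you are reading along without lab access, a minimal manifest of the replication-controller shape described here (Kubernetes v1 API of that era) gives the flavor; the lab's actual nginx-rc.yaml is not reproduced in this manual, so the names and image below are illustrative assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # desired number of Pod copies
  selector:
    app: nginx-demo        # manage Pods carrying this label
  template:                # Pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```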

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. (Sorry about the randomly generated password.) You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container image is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
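The exact command stored at history entry 885 is not shown in this manual. As a rough sketch only, a typical Rancher server launch of this era using the lab's local registry tag would look like the following (an assumption, printed rather than run):

```shell
# Hypothetical sketch -- the real command is whatever history entry 885
# contains. Rancher server publishes its UI on port 8080.
echo "docker run -d -p 8080:8080 192.168.120.20:5000/rancher/server"
```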

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606

                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 64: Lab Overview - HOL-1730-USE-2

Let's look at this command. docker run creates a container. The -v option creates a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d flag keeps the container running until it is explicitly stopped. The -p option maps container port 80 to port 80 on the host, so you will be able to access the Nginx web server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on 192.168.120.20, port 5000. Extra Credit: From the CLI, execute docker ps and you will see the Docker Registry we are using.
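The pieces described above can be assembled into a hypothetical reconstruction of the command. The container-side mount path is an assumption (nginx serves from /usr/share/nginx/html by default), so use the exact command shown in the lab console; this sketch only echoes the command as a dry run.

```shell
# Assembled as an array so each flag can be annotated; echoed as a dry run.
# Replace the final 'echo' with "${cmd[@]}" to actually run it on the lab VM.
cmd=(
  docker run
  -v /mnt/dockervolume:/usr/share/nginx/html  # host dir mounted as a Docker volume (container path is an assumption)
  -d                                          # detached: keep running until explicitly stopped
  -p 80:80                                    # map host port 80 to container port 80
  192.168.120.20:5000/nginx                   # nginx image from the local registry on port 5000
)
echo "${cmd[@]}"
```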


Verify That Our New Webserver Reflects Our Changes

You should see the new Nginx homepage on the IP of lab-vm2.

1. Open one of the web browsers on the desktop.

2. Enter the IP address of lab-vm2. The default HTTP port is 80, so you do not need to enter it. You should see the modified Nginx homepage.

Clean Up VMs

Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1. To delete a VM, execute:

photon vm list

and note the UUIDs of the two VMs.

2. Execute:

photon vm stop <UUID of lab-vm2>

3. Execute:

photon vm detach-disk <UUID of lab-vm2> --disk <UUID of disk>

4. Execute:

photon vm delete <UUID of lab-vm2>

5. Repeat steps 2 and 4 for lab-vm1.
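The five steps above can be sketched as a short script. This is a dry run: the `run` helper only prints each command (remove the `echo` to execute for real), and the UUID values are placeholders you take from `photon vm list`.

```shell
run() { echo "+ $*"; }   # dry-run helper: prints the command instead of executing it

run photon vm list                                                  # step 1: note both UUIDs
run photon vm stop "UUID-of-lab-vm2"                                # step 2
run photon vm detach-disk "UUID-of-lab-vm2" --disk "UUID-of-disk"   # step 3
run photon vm delete "UUID-of-lab-vm2"                              # step 4
run photon vm stop "UUID-of-lab-vm1"                                # step 5: repeat steps 2 and 4
run photon vm delete "UUID-of-lab-vm1"                              # (lab-vm1 has no disk to detach)
```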


Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
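Steps 2-5 above amount to the same restart on each host, so they can be sketched as a loop. The commands are printed here rather than executed; remove the `echo` to run them from the CLI VM.

```shell
# Dry-run sketch: restart the photon-controller-agent on both ESXi hosts.
for host in 192.168.110.201 192.168.110.202; do
  cmd="ssh root@${host} /etc/init.d/photon-controller-agent restart"
  echo "$cmd"   # remove this echo (and run the command) on the PhotonControllerCLI VM
done
```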


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click on Select Metrics and select photon.


2. Click on Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.
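If you prefer to pull the same numbers programmatically, Graphite also exposes an HTTP render API. The host name and metric path below are placeholders: substitute your lab's Graphite address and a path copied from the metrics tree you expanded earlier.

```shell
# Build a Graphite render-API query for the last hour of a metric, as JSON.
GRAPHITE="graphite.example.local"        # placeholder -- your lab's Graphite server
TARGET="photon.esxi-host-1.cpu.usage"    # placeholder -- copy a real path from the metrics tree
URL="http://${GRAPHITE}/render?target=${TARGET}&from=-1h&format=json"
echo "$URL"                              # fetch with: curl -s "$URL"
```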


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than searching through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
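For orientation while you read those files, a minimal replication controller manifest of the kind nginx-rc.yaml describes might look like the sketch below. The name, labels and replica count are illustrative assumptions; the lab's actual definitions are in the files you just viewed.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo              # assumed name, matching the application shown later in the UI
spec:
  replicas: 3                   # desired number of pod copies
  selector:
    app: nginx-demo             # pods this controller manages
  template:                     # pod template stamped out for each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx   # image from the local lab registry
        ports:
        - containerPort: 80
```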


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
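Because the replication controller owns the replica count, scaling up or down is a single kubectl command. It is shown as a dry run below (the echo just prints it), and the controller name nginx-demo is an assumption based on the application name shown in the UI; check the actual name in ~/demo-nginx/nginx-rc.yaml.

```shell
# Scale the replication controller to 5 copies (dry run -- drop the echo to execute).
scale_cmd="kubectl scale rc nginx-demo --replicas=5"
echo "$scale_cmd"
echo "kubectl get pods"   # then watch the extra pods get scheduled onto the Worker nodes
```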

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
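For reference, the history entry replayed by !885 is typically a docker run of the shape sketched below. This is a hypothetical reconstruction (the flags and image path are assumptions; run history | grep rancher on the CLI VM to see the lab's exact command), printed here as a dry run.

```shell
# Typical shape of a Rancher Server launch from the local registry (assumed, not verified).
cmd="docker run -d -p 8080:8080 192.168.120.20:5000/rancher/server"
echo "$cmd"   # dry run -- remove the echo to start the server container
```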


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers will be deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
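The port mapping you just configured in the UI is equivalent to Docker's -p flag. The sketch below shows a hypothetical plain-Docker equivalent of what Rancher runs on the host (the real invocation also includes Rancher's own labels and network setup), printed as a dry run.

```shell
# Plain-Docker equivalent of the Rancher portmap: host port 2000 -> container port 80.
cmd="docker run -d -p 2000:80 192.168.120.20:5000/nginx"
echo "$cmd"   # dry run -- the mapping matches what you entered in the Rancher UI
```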


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
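The same check can be made from a shell instead of a browser. A sketch of building the URL from the pieces above; the curl call is left commented out because the address only resolves inside the lab network:

```shell
# The URL is <rancher-host-ip>:<mapped-host-port>, per the port mapping above.
HOST_IP="192.168.100.201"
HOST_PORT="2000"
URL="http://${HOST_IP}:${HOST_PORT}"
echo "${URL}"

# From a machine on the lab network you could verify the webserver responds:
#   curl -I "${URL}"   # an HTTP 200 response from nginx is expected
```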


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute: docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
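For reference, the port mapping configured in the UI corresponds to Docker's standard -p host:container publishing flag. The block below only prints the roughly equivalent docker run command (illustrative; Rancher generates and runs the real command for you):

```shell
# Image and ports taken from the steps above; the command is printed, not run.
IMAGE="192.168.120.20:5000/nginx"
HOST_PORT=2000
CONTAINER_PORT=80
echo "docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} ${IMAGE}"
```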

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4. Execute:

photon vm delete UUID of lab-vm2

5. Repeat steps 2 and 4 for lab-vm1.
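As a sketch, the per-VM deletions above can be driven by a loop. The UUIDs are placeholders you would fill in from photon vm list, any stop/detach steps from the preceding numbered list still apply per VM, and the commands are printed rather than executed:

```shell
# Placeholder UUIDs -- substitute the real values from `photon vm list`.
for VM_UUID in "<UUID of lab-vm2>" "<UUID of lab-vm1>"; do
  echo "photon vm delete $VM_UUID"   # printed, not executed
done
```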

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight, and will monitor your infrastructure through integration with Graphite and Grafana.

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server, and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.

Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.

1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
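Steps 2-5 above perform the same restart on each host, so they can be sketched as one loop. Hosts and service path are taken from the steps above; the ssh command is printed rather than executed, so the sketch is harmless to run anywhere:

```shell
# Restart the photon-controller-agent on both ESXi hosts (dry run: prints only).
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" "/etc/init.d/photon-controller-agent restart"
done
```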

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.

2. Select "Select Metrics" again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver, with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
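As an illustration of what such a file contains (a hedged sketch only; the lab's actual nginx-rc.yaml will differ in names, labels and image), a replication controller that keeps 3 nginx replicas running might look like:

```yaml
# Hypothetical replication controller sketch -- not the lab's actual file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # Kubernetes maintains 3 copies of the Pod
  selector:
    app: nginx-demo
  template:                # Pod template used to create each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```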

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
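The self-healing and scaling behavior described above is driven through kubectl against the replication controller. As a sketch (commands are printed, not executed; the controller name nginx-demo is taken from the UI above, and the pod name is a placeholder):

```shell
# Printed rather than executed, so this sketch is safe to run anywhere.
RC_NAME="nginx-demo"   # controller name as shown in the Kubernetes UI above
echo "kubectl get pods                           # list the 3 nginx replicas"
echo "kubectl delete pod <one-pod-name>          # kill one instance; the RC replaces it"
echo "kubectl scale rc $RC_NAME --replicas=5     # scale out to 5 instances"
```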

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses, with the port number shown earlier, to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs have a fewfiles removed prior to deploying the Rancher Agent

1 Execute ssh root192168100201 The password is vmware2 Execute rm -rf varlibrancherstate3 Execute docker rm -vf rancher-agent4 Execute docker rm -vf rancher-agent-state

HOL-1730-USE-2

Page 98HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Paste with either a right-click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
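For reference, the portmap you just configured in the UI corresponds to Docker's -p host:container flag. A minimal sketch of the equivalent docker run invocation (only echoed here, so it runs without a Docker daemon; the image name is the lab's local-registry nginx):

```shell
# Build the CLI equivalent of the UI portmap: host port 2000 -> container port 80.
HOST_PORT=2000
CONTAINER_PORT=80
IMAGE="192.168.120.20:5000/nginx"
echo "docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} ${IMAGE}"
# prints: docker run -d -p 2000:80 192.168.120.20:5000/nginx
```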


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The Index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

Monitor and Troubleshoot Photon Platform

Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, Execute:

ssh root@192.168.110.201 (password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
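Steps 2 through 5 above run the same restart sequence on each host, so they can be sketched as a loop. The ssh commands are echoed here so the sketch runs anywhere; on the real CLI VM you would drop the echo and enter the password per host.

```shell
# Restart the photon-controller-agent on each ESXi host (lab management IPs).
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```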


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1 Select the Green tab

2 Add Panel

3 Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1 Select Select Metrics and select photon


2. Select Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.
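Since the Task ID gets pasted into LogInsight next, it can help to grab it from the CLI output programmatically. This is a sketch against fabricated sample output; the real photon CLI error format may differ, so adjust the pattern to what you actually see.

```shell
# Sketch: pull a task ID out of CLI output for pasting into LogInsight.
# The sample text and the "Task ID:" label are fabricated for illustration.
sample_output='Failed to create VM
Task ID: 11112222-3333-4444-5555-666677778888'
task_id=$(printf '%s\n' "$sample_output" | awk -F': ' '/Task ID/ {print $2}')
echo "$task_id"    # prints the fabricated ID
```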


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a requestID. There could be many ReqIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the requestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
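If you want a feel for the shape of these files without the lab VM, here is a generic minimal Pod definition. This is a sketch only, not the lab's actual nginx-pod.yaml (names, labels and the image are illustrative), written out via a heredoc the way you might scaffold one by hand:

```shell
# Write a minimal, generic Pod manifest to a temp file.
# This is NOT the lab's file; it only illustrates the typical structure.
cat > /tmp/nginx-pod-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo      # Services/Replication Controllers select Pods by label
spec:
  containers:
  - name: nginx
    image: nginx         # the lab would use its local registry image instead
    ports:
    - containerPort: 80  # port the container listens on inside the Pod
EOF
grep 'kind:' /tmp/nginx-pod-sketch.yaml    # prints: kind: Pod
```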


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
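The three create steps follow one naming pattern (nginx-&lt;part&gt;.yaml), so they can be sketched as a loop. The commands are echoed here so the sketch runs without a cluster; drop the echo on the CLI VM to execute for real:

```shell
# Apply the Pod, Service, and Replication Controller definitions in turn.
for part in pod service rc; do
  echo kubectl create -f ~/demo-nginx/nginx-"$part".yaml
done
```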


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
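The URL used in the next step is just node-IP:service-port. As a sketch (echoed so it runs anywhere; NODE_PORT is a placeholder you would replace with the External endpoint port noted in the Kubernetes UI):

```shell
# Compose the check you could run from the CLI VM instead of a browser.
NODE_IP=192.168.100.176
NODE_PORT="${NODE_PORT:-PORT_FROM_UI}"   # placeholder, not a real port
echo "curl -s http://${NODE_IP}:${NODE_PORT}"
```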


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port-number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and Click on PhotonControllerCLI link2 Click on Open

HOL-1730-USE-2

Page 96HOL-1730-USE-2

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environmentBefore that you need to delete the existing container

1 Execute docker ps | grep rancherserver to see the running container Find theContainer ID for the RancherServer container That is the one we want toremove

2 Execute docker kill ContainerID This will remove the existing RancherServer container

3 Execute 885 This will execute command number 885 stored in Linux historyIt will create a new Docker container

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
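Steps 1 and 2 above can be combined by capturing the container ID in a variable. The sketch below runs the extraction against a sample docker ps line (the sample ID and columns are assumed for illustration) instead of a live Docker daemon; on the lab VM you would pipe real docker ps output instead.

```shell
# A sample 'docker ps' output line (ID and columns are illustrative only).
sample_line='f3a1b2c4d5e6   192.168.120.20:5000/rancherserver   "s6-svscan"   2 days ago   Up 2 days   rancherserver'

# On the lab VM you would use:
#   cid=$(docker ps | grep rancherserver | awk '{print $1}')
cid=$(printf '%s\n' "$sample_line" | awk '{print $1}')

echo "$cid"   # the container ID you would pass to 'docker kill'
```

With a live daemon, docker kill "$cid" then replaces steps 1-2.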


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Either right-click the mouse or press Ctrl-V to paste, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
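The UI form in steps 1-5 corresponds roughly to a single docker run invocation. This sketch prints the command as a dry run (the container name nginx-demo is an assumption; the image and port mapping come from the steps above):

```shell
# Dry-run sketch of the container defined in the UI steps above.
# -p 2000:80 maps host port 2000 to the container's port 80 (the Nginx
# default); the image comes from the local registry at 192.168.120.20:5000.
cmd='docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx'
echo "$cmd"   # remove this indirection to run the command on a Docker host
```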


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


• Table of Contents
• Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
  • Lab Guidance
    • Location of the Main Console
    • Activation Prompt or Watermark
    • Alternate Methods of Keyboard Data Entry
    • Click and Drag Lab Manual Content Into Console Active Window
    • Accessing the Online International Keyboard
    • Click once in active console window
    • Click on the key
    • Look at the lower right portion of the screen
• Module 1 - What is Photon Platform (15 minutes)
  • Introduction
  • What is Photon Platform - How Is It Different From vSphere?
    • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap, Not all are implemented in the Pre-GA Release)
  • Cloud Administration - Multi-Tenancy and Resource Management
    • Connect To Photon Platform Management UI
    • Photon Controller Management UI
    • The Control Plane Resources
    • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
    • Control Plane Services
    • Cloud Resources
    • Tenants
    • Our Kubernetes Tenant
    • Kube-Tenant Detail
    • Kube-Project Detail
    • Kube Tenant Resource-Ticket
    • Create Resource-Ticket
  • Cloud Administration - Images and Flavors
    • Images
    • Kube-Image
    • Flavors
    • Kube-Flavor
    • Ephemeral Disk Flavors
    • Persistent Disk Flavors
  • Conclusion
    • You've finished Module 1
    • How to End Lab
• Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
  • Introduction
  • Multi-Tenancy and Resource Management in Photon Platform
    • Login To CLI VM
    • Verify Photon CLI Target
    • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
    • Photon CLI Overview
    • Photon CLI Context Help
    • Create Tenant
    • Create Resource Ticket
    • Create Project
  • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
    • View Images
    • View Flavors
    • Create New Flavors
    • Create Networks
    • Create VM
    • Create a Second VM
    • Start VM
    • Show VM details
    • Stop VM
    • Persistent Disks
    • Attach Persistent Disk To VM
    • Show VM Details
  • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
    • Deploy Nginx Web Server
    • Connect to lab-vm1
    • Setup filesystem
    • Create The Nginx Container With Docker Volume
    • Verify Webserver Is Running
    • Modify Nginx Home Page
    • Edit The Index.html
    • Detach The Persistent Disk
    • Attach The Persistent Disk To New VM
    • Start and Connect to lab-vm2
    • Setup Filesystem
    • Create The New Nginx Container
    • Verify That Our New Webserver Reflects Our Changes
    • Clean Up VMs
  • Monitor and Troubleshoot Photon Platform
    • Enabling Statistics and Log Collection
    • Monitoring Photon Platform With Graphite Server
    • Expand To View Available Metrics
    • No Performance Data in Graphite
    • View Graphite Data Through Grafana
    • Graphite Data Source For Grafana
    • Create Grafana Dashboard
    • Add A Panel
    • Open Metrics Panel
    • Add Metrics To Panel
    • Troubleshooting Photon Platform With LogInsight
    • Connect To LogInsight
    • Query For The Create Task
    • Browse The Logs For Interesting Task Error, Then Find RequestID
    • Search The RequestID For RESERVE_RESOURCE
  • Conclusion
• Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
  • Introduction
  • Container Orchestration With Kubernetes on Photon Platform
    • Kubernetes Deployment On Photon Platform
    • Photon Cluster Create Command
    • Kube-Up On Photon Platform
    • Our Lab Kubernetes Cluster Details
    • Basic Introduction To Kubernetes Application Components
    • Deploying An Application On Kubernetes Cluster
    • Kubectl To Deploy The App
    • Kubernetes UI Shows Our Running Application
    • Application Details
    • Your Running Pods
    • Connect To Your Application Web Page
  • Container Orchestration With Docker Machine Using Rancher on Photon Platform
    • Login To Photon Controller CLI VM
    • Deploy Rancher Server
    • Clean Up Rancher Host
    • Connect To Rancher UI
    • Add Rancher Host
    • Paste In The Docker Run Command To Start Rancher Agent
    • View the Agent Container
    • Verify New Host Has Been Added
    • Deploy Nginx Webserver
    • Configure Container Info
    • Container Information
    • Open Your Webserver
    • Rancher Catalogs
  • Conclusion
• Conclusion

Enabling Statistics and Log Collection

Photon Platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight.

Monitoring Photon Platform With Graphite Server

Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but we will enhance this over time.


1. Connect to the Graphite server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, memory, storage, and networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon-controller-agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon-controller-agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.
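If you prefer a single command over repeating steps 2-4 for each host, a loop like this sketch builds the per-host restart commands. It is shown as a dry run that only prints the commands; remove the indirection to actually run them over ssh (the host list comes from the steps above):

```shell
# Build the restart command for each ESXi host. Dry run: the commands are
# printed, not executed; run each printed line (password VMware1) to apply.
cmds=$(for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done)
printf '%s\n' "$cmds"
```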


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple dashboard to show the CPU and memory metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the green tab.

2. Select Add Panel.

3. Select Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor the performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin with password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the time range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides the higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
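The three create calls can also be expressed as one loop over the file basenames. This sketch prints the commands as a dry run rather than invoking kubectl:

```shell
# Print the three kubectl create commands (dry run). Remove the echo
# indirection on the lab CLI VM to actually deploy the application.
cmds=$(for f in nginx-pod nginx-service nginx-rc; do
  echo "kubectl create -f ~/demo-nginx/$f.yaml"
done)
printf '%s\n' "$cmds"
```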


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

HOL-1730-USE-2

Page 93HOL-1730-USE-2

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas They each have theirown internal IP and are running on the 2 Nodes 3 Replicas is not particularly usefulgiven that we have only 2 Nodes but the concept is valid Explore the logs if you areinterested

We can connect to the application directly through the Node IP and the port number wesaw earlier

HOL-1730-USE-2

Page 94HOL-1730-USE-2

Connect To Your Application Web Page

Now lets see what our application does We will choose one of the node IP addresseswith the port number shown earlier to see our nginx webserver homepage Its just asimple dump of the application configuration info

1 From your browser Connect to http192168100176portnumber Notethat your port number may be different than the lab manual port number IP will be thesame

HOL-1730-USE-2

Page 95HOL-1730-USE-2

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will stop the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
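The registry prefix is simply part of the image name: Docker treats a leading component containing a dot or colon as a registry address, so 192.168.120.20:5000/nginx pulls from the local registry rather than Docker Hub. A quick pure-shell split of such a reference (the nginx repo path here is just an example):

```shell
# Split a registry-qualified image reference into its parts.
# Only the registry host:port (192.168.120.20:5000) comes from the lab;
# the repo path "nginx" is illustrative.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # everything before the first slash
repo="${image#*/}"        # everything after it
echo "registry=$registry repo=$repo"
```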


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 (the password is vmware).
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
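For reference, the container form you just filled in corresponds roughly to a single docker run with a port mapping. A hypothetical sketch is printed below rather than executed (the container name is an assumption; the image and port map are the values entered in the form):

```shell
# Hypothetical docker-CLI equivalent of the Rancher UI form above.
# -p 2000:80 maps host port 2000 to the container's port 80.
# Printed, not executed: the lab registry is only reachable inside the lab.
cmd='docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx'
echo "$cmd"
```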


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion

1. Connect to the Graphite Server by opening a browser.

2. Select the Graphite Browser bookmark from the toolbar.


Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, it is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
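Steps 2-5 above can be sketched as one small loop. The ssh commands are printed rather than executed here, since the ESXi hosts are reachable only inside the lab:

```shell
# Dry-run sketch of restarting the photon controller agent on both hosts.
# Each line printed is the command you would run from the CLI VM.
for host in 192.168.110.201 192.168.110.202; do
  echo "ssh root@$host /etc/init.d/photon-controller-agent restart"
done
```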

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.


2. Select Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.
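The same pivot, from a failing step to its RequestID to every log line carrying that ID, can be illustrated with plain text tools. The log lines and requestId format below are invented for the example; real Photon Controller logs look different:

```shell
# Toy illustration of pivoting from a task error to its request ID.
logs='INFO  task=CREATE_VM state=STARTED requestId=req-001
ERROR task=CREATE_VM step=RESERVE_RESOURCE requestId=req-002
INFO  agent=esxi-host-1 requestId=req-002 msg=insufficient-memory'

# 1. Find the failing step and capture its request ID.
reqid=$(printf '%s\n' "$logs" | grep 'RESERVE_RESOURCE' | sed 's/.*requestId=//')

# 2. Filter every line for that request ID, as the LogInsight query does.
printf '%s\n' "$logs" | grep "requestId=$reqid"
```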


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.
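The "desired number of copies" behavior of a Replication Controller, described above, is a reconciliation loop: it continuously compares the observed replica count to the desired count and starts replacements. A toy pure-shell illustration of that idea (the real controller runs inside the Kubernetes master; nothing here touches the cluster):

```shell
# Toy reconciliation loop: compare observed "pods" to the desired count
# and start replacements until they match -- conceptually what a
# Replication Controller does when a pod dies.
desired=3
running=1          # pretend two of the three replicas just failed
while [ "$running" -lt "$desired" ]; do
  running=$((running + 1))
  echo "starting replacement pod ($running/$desired)"
done
echo "replicas restored: $running/$desired"
```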

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
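If you just want the shape of such a manifest without opening the lab files, the sketch below writes a minimal Replication Controller definition comparable in structure to nginx-rc.yaml. The names, labels and image are illustrative assumptions, not the lab's actual values:

```shell
# Write a minimal Replication Controller manifest (illustrative values).
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # maintain three copies of the pod
  selector:
    app: nginx-demo        # pods this controller manages
  template:                # pod template used to create replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
grep 'replicas:' /tmp/nginx-rc-sketch.yaml
```

The Pod and Service files follow the same apiVersion/kind/metadata/spec skeleton, differing only in the contents of spec.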


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https192168100175ui If you areprompted for username and password they are admin4HjyqnFZK4tntbUZ Sorry aboutthe randomly generated password You may get an invalid certificate authority errorClick on Advanced and Proceed to the site

nginx-demo is your application

2 Note the port number for the External endpoint We will use it in a couple ofsteps

HOL-1730-USE-2

Page 92HOL-1730-USE-2

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

HOL-1730-USE-2

Page 93HOL-1730-USE-2

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas They each have theirown internal IP and are running on the 2 Nodes 3 Replicas is not particularly usefulgiven that we have only 2 Nodes but the concept is valid Explore the logs if you areinterested

We can connect to the application directly through the Node IP and the port number wesaw earlier

HOL-1730-USE-2

Page 94HOL-1730-USE-2

Connect To Your Application Web Page

Now lets see what our application does We will choose one of the node IP addresseswith the port number shown earlier to see our nginx webserver homepage Its just asimple dump of the application configuration info

1 From your browser Connect to http192168100176portnumber Notethat your port number may be different than the lab manual port number IP will be thesame

HOL-1730-USE-2

Page 95HOL-1730-USE-2

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
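Steps 1 and 2 above can be combined into one pipeline. A sketch, assuming the image name in the `docker ps` output contains rancher/server:

```shell
# extract_id: print the container-ID column of `docker ps` lines matching
# the Rancher Server image (image name assumed to contain "rancher/server").
extract_id() { grep 'rancher/server' | awk '{print $1}'; }

# In the lab environment this combines steps 1 and 2 above:
#   docker ps | extract_id | xargs docker kill
```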


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
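For reference, the UI port mapping above corresponds to Docker's -p <host-port>:<container-port> publish flag. A sketch of the roughly equivalent plain docker run (the container name here is our own choice; the image comes from the lab registry):

```shell
# Roughly what Rancher does for this service: run the registry's nginx image
# and publish container port 80 (the Nginx default) on host port 2000.
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```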


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Expand To View Available Metrics

Expand the Metrics folder and then select the Photon folder. You can see two ESXi hosts and statistics for CPU, Memory, Storage and Networking.

1. Expand cpu and select usage.

2. Expand mem and select usage.

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite.

No Performance Data in Graphite

If you saw performance data in Graphite, then skip to the step View Graphite Data Through Grafana.

You will ssh into our two ESXi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent.


1. Login to the PhotonControllerCLI through Putty.

2. From the PhotonControllerCLI, execute:

ssh root@192.168.110.201 (the password is VMware1)

3. Execute:

/etc/init.d/photon-controller-agent restart

4. Execute:

exit

5. Repeat steps 2-4 for host 192.168.110.202.
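Steps 2 through 5 above can be sketched as a single loop from the PhotonControllerCLI session:

```shell
# Restart the photon-controller-agent on both ESXi hosts.
# (ssh will prompt for the password, VMware1, unless keys are set up.)
for host in 192.168.110.201 192.168.110.202; do
  ssh "root@$host" /etc/init.d/photon-controller-agent restart
done
```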

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.


View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show the CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards.

2. Click on Home.

3. Click on New.


Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Click on Select Metrics and select photon.


2. Click Select Metrics again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.
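Grafana reads these series over Graphite's HTTP render API, which you can also query directly. A sketch; the Graphite hostname here is an assumption, so substitute your lab's Graphite endpoint:

```shell
# Fetch the same cpu usage series as JSON from Graphite's render API
# (hostname is hypothetical -- use your lab's Graphite server).
curl -s 'http://graphite.example.local/render?target=photon.*.cpu.usage&format=json'
```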


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1. From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
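The Pod/Replication Controller relationship can be sketched as a manifest. This is an illustrative example only, not the lab's actual nginx-rc.yaml; the names, labels and image are assumptions:

```shell
# Illustrative replication-controller manifest only -- NOT the lab's file.
# Names, labels and image are assumptions for the sketch.
cat > /tmp/nginx-rc-sketch.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3               # desired number of Pod copies
  selector:
    app: nginx-demo         # pods this controller manages
  template:                 # the Pod definition to replicate
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: 192.168.120.20:5000/nginx
        ports:
        - containerPort: 80
EOF
```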

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
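After the three creates above, you can confirm the objects from the same CLI VM; a quick sketch:

```shell
# Verify what Kubernetes is now managing (lab environment):
kubectl get pods        # the replicated nginx pods
kubectl get rc          # the replication controller
kubectl get services    # the frontend service
```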


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1 Open your Web Browser and enter https192168100175ui If you areprompted for username and password they are admin4HjyqnFZK4tntbUZ Sorry aboutthe randomly generated password You may get an invalid certificate authority errorClick on Advanced and Proceed to the site

nginx-demo is your application

2 Note the port number for the External endpoint We will use it in a couple ofsteps

HOL-1730-USE-2

Page 92HOL-1730-USE-2

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

HOL-1730-USE-2

Page 93HOL-1730-USE-2

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas They each have theirown internal IP and are running on the 2 Nodes 3 Replicas is not particularly usefulgiven that we have only 2 Nodes but the concept is valid Explore the logs if you areinterested

We can connect to the application directly through the Node IP and the port number wesaw earlier

HOL-1730-USE-2

Page 94HOL-1730-USE-2

Connect To Your Application Web Page

Now lets see what our application does We will choose one of the node IP addresseswith the port number shown earlier to see our nginx webserver homepage Its just asimple dump of the application configuration info

1 From your browser Connect to http192168100176portnumber Notethat your port number may be different than the lab manual port number IP will be thesame

HOL-1730-USE-2

Page 95HOL-1730-USE-2

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and Click on PhotonControllerCLI link2 Click on Open

HOL-1730-USE-2

Page 96HOL-1730-USE-2

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environmentBefore that you need to delete the existing container

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
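If you prefer to script this cleanup, the four steps above collapse into a single remote invocation. This is only a sketch, shown as a dry run: the leading echo prints the command instead of executing it, so remove it to run for real. The IP and paths are the ones from the steps above.

```shell
# Dry run of the Rancher host cleanup from the steps above.
# Drop the leading `echo` to actually execute it over SSH.
echo ssh root@192.168.100.201 \
  "rm -rf /var/lib/rancher/state && docker rm -vf rancher-agent rancher-agent-state"
```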

Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
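The port mapping you entered in step 4 corresponds to Docker's -p host:container flag. As an illustration only (Rancher issues the real docker run itself, with its own labels and networking), the equivalent manual command would look roughly like this, shown as a dry run so nothing is actually started:

```shell
# Roughly the docker run behind the UI port mapping above (illustrative;
# dry run - drop the leading `echo` to execute on a Docker host for real).
echo docker run -d -p 2000:80 192.168.120.20:5000/nginx
```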

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

1. Login to the PhotonControllerCLI VM through Putty.

2. From the PhotonControllerCLI, Execute:

ssh root@192.168.110.201 (the password is VMware1)

3 Execute

/etc/init.d/photon-controller-agent restart

4 Execute

exit

5. Repeat steps 2-4 for host 192.168.110.202
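The per-host repetition in steps 2-5 can also be scripted as one loop. This is a sketch, shown as a dry run so nothing executes remotely (remove the leading echo to run it); the host IPs and the restart command are the ones from the steps above:

```shell
# Restart the photon-controller-agent on both lab ESXi hosts.
# Dry run - drop the `echo` to execute over SSH for real.
for host in 192.168.110.201 192.168.110.202; do
  echo ssh root@"$host" /etc/init.d/photon-controller-agent restart
done
```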

It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight section of the lab and come back here if you don't want to wait for the stats to collect.

View Graphite Data Through Grafana

Graphite can also act as a sink for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite.

1. Click on Dashboards.

2. Click on Home.

3. Click on New.

Add A Panel

1. Select the Green tab.

2. Add Panel.

3. Graph.

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here and then click Edit to add metrics.

Add Metrics To Panel

1. Select Select Metrics and select photon.

2. Select Select Metrics again and select one of the esxi hosts. (This is the same hierarchy you saw in Graphite.) Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight Query.

Connect To LogInsight

1. From your browser, select the LogInsight Bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter Field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search Icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every Request through the API gets a RequestID. There could be many ReqIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter Field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage micro service applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1. To see the command syntax, Execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
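After the three creates, a few standard kubectl verbs let you verify and exercise the behaviors this module described (pod placement, scaling, and restart on failure). These are shown as a dry run since kubectl here would target the lab cluster; drop the echoes to run them, and note that the RC name nginx-demo and the pod-name placeholder are assumptions - substitute whatever kubectl get actually reports:

```shell
# Dry-run kubectl follow-ups (drop the `echo`s to execute on the cluster).
echo "kubectl get pods -o wide"                  # pod -> node placement and IPs
echo "kubectl scale rc nginx-demo --replicas=5"  # grow the RC's pod count
echo "kubectl delete pod <some-pod-name>"        # RC starts a replacement pod
```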


1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



View Graphite Data Through Grafana

Graphite can also act as a data source for other visualization tools. In this case we will take the data from Graphite and create a couple of charts in Grafana.

1. From your browser, select the Grafana Bookmark from the toolbar.

Graphite Data Source For Grafana

We have previously set up Graphite as the source for data used by Grafana. To see this setup:

1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint.

Create Grafana Dashboard

Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite.


1. Click on Dashboards
2. Click on Home
3. Click on New


Add A Panel

1. Select the Green tab
2. Add Panel
3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon.


2. Select "Select Metrics" again and select one of the esxi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight Bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics
2. Paste the Task ID into the Filter Field
3. Change the Time Range to Last Hour of Data
4. Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the Log and look for RESERVE_RESOURCE

2. Find the RequestID and paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.
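The two-step narrowing you just performed (filter by Task ID, spot the interesting error, then re-filter by its RequestID) can be reproduced offline with plain grep. This is only an illustrative sketch - the log lines and IDs below are fabricated, not real LogInsight output:

```shell
# Fabricated sample entries standing in for LogInsight results.
cat > /tmp/sample.log <<'EOF'
[request: aaa-111] task 42cb3f: CreateVm started
[request: bbb-222] task 42cb3f: query CloudStore for task state: OK
[request: aaa-111] task 42cb3f: RESERVE_RESOURCE failed: not enough memory
EOF

# Step 1: filter by the Task ID and look for the interesting error line.
grep '42cb3f' /tmp/sample.log | grep 'RESERVE_RESOURCE'

# Step 2: re-filter by that line's RequestID to see every entry for the
# same API request, including ones that succeeded.
grep 'aaa-111' /tmp/sample.log
```

Note that filtering by the "successful" RequestID (bbb-222 here) would only show the CloudStore query, which is why you scroll for a more interesting request first.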


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high churn-transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You can also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), setup the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
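To make the Pod idea concrete, here is a minimal Pod definition in the yaml format Kubernetes uses. This is an illustrative sketch, not the lab's actual file; the names and labels are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # hypothetical Pod name
  labels:
    app: nginx-demo       # label a Service or Replication Controller can select on
spec:
  containers:
  - name: nginx
    image: nginx          # the single container in this Pod
    ports:
    - containerPort: 80   # nginx's listening port inside the container
```

A Service and a Replication Controller would both reference Pods like this one through the app label.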

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
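Of the three files, the replication controller is the one that drives the replica behavior you will see shortly. The sketch below shows roughly what such a file looks like; it is an assumption-laden illustration, not the contents of the lab's actual ~/demo-nginx/nginx-rc.yaml. The key parts are replicas (the desired copy count) and the selector matching the Pod template's label:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo           # hypothetical name
spec:
  replicas: 3                # desired number of Pod copies to keep running
  selector:
    app: nginx-demo          # which Pods this controller manages
  template:                  # Pod template used to create the replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If a Pod matching the selector dies, the controller notices the count dropped below replicas and starts a replacement - this is the restart behavior you will exercise later.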


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To the PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the RancherServer container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing RancherServer container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
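The tag follows Docker's registry-qualified naming convention, host:port/image-name. A quick sketch of how such a reference splits apart (pure shell string handling, nothing lab-specific):

```shell
# Split a registry-qualified image reference into its two parts.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # text before the first "/" -> the registry host:port
repo="${image#*/}"        # text after it             -> the image/repository name
echo "registry=$registry repo=$repo"
# -> registry=192.168.120.20:5000 repo=nginx
```

When no registry prefix is present, Docker defaults to pulling from its public hub, which is why the lab images carry the explicit local-registry prefix.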


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 - The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option, to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap; Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The Index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To LogInsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error, Then Find RequestID
      • Search The RequestID For RESERVE_RESOURCE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To PhotonControllerCLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion
1. Click on Dashboards

2. Click on Home

3. Click on New

Add A Panel

1. Select the Green tab

2. Add Panel

3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says "Click Here" and then click "Edit" to add metrics.

Add Metrics To Panel

1. Select "Select Metrics" and select photon

2. Select "Select Metrics" again and select one of the esxi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this:

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use it in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter Field

3. Change the Time Range to Last Hour of Data

4. Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). We need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE

2. Find the RequestID and paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.
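The Pod / Replication Controller relationship described above can be sketched in a minimal manifest. This is a hypothetical illustration only - the object name, labels and image are assumptions, not the lab's actual configuration:

```yaml
# Hypothetical sketch: a Replication Controller that keeps 3 copies of an
# nginx Pod running. Names, labels and image are illustrative assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3              # desired number of Pod copies
  selector:
    app: nginx-demo        # Pods carrying this label are counted
  template:                # the Pod definition stamped out for each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If a Pod dies, the controller notices the replica count has dropped below 3 and starts a replacement, which is the behavior you will exercise in this module.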

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
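For orientation, a frontend Service of the kind nginx-service.yaml defines might look roughly like this. It is a hypothetical sketch - the name, label and NodePort type are assumptions, not the lab file's actual contents:

```yaml
# Hypothetical sketch of a front-end Service for the nginx Pods.
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort           # also exposes the Service on a port of each Worker node
  selector:
    app: nginx-demo        # traffic is load-balanced across Pods with this label
  ports:
  - port: 80               # Service port inside the cluster
    targetPort: 80         # container port on the nginx Pods
```

A NodePort-style Service is what would produce the "External endpoint" port number you will note in the Kubernetes UI a few steps from now.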

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application you can view it through the Kubernetes UI

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <Container ID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
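We cannot reproduce the exact command stored in history entry 885 here, but a Rancher Server started from a local registry typically boils down to settings like these, shown in docker-compose form purely as an illustration (the image tag and UI port are taken from the surrounding text; everything else is an assumption):

```yaml
# Illustrative only -- the lab runs the real command from shell history (!885).
rancher-server:
  image: 192.168.120.20:5000/rancher/server   # image served by the local registry
  ports:
    - "8080:8080"                             # Rancher UI port used later in the lab
```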

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2. If you get this page, just click Save

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected

2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80; we will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
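The container definition you just built in the UI maps onto a small set of settings. Expressed in docker-compose form as a hypothetical equivalent (the service name is an assumption; the image and ports come from the steps above):

```yaml
# Hypothetical docker-compose-style equivalent of the UI settings.
nginx-web:
  image: 192.168.120.20:5000/nginx   # image pulled from the local registry
  ports:
    - "2000:80"                      # host port 2000 -> container port 80 (nginx default)
```

This is why the webserver will shortly be reachable on the Rancher Host's IP at port 2000.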

Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

Add A Panel

1. Select the Green tab

2. Add Panel

3. Graph

Open Metrics Panel

This is not intuitive, but you must click where it says Click Here, and then click Edit to add metrics.

Add Metrics To Panel

1. Click Select Metrics and select photon

2. Click Select Metrics again and select one of the esxi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.
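The Task ID is a UUID in the CLI output. If you prefer to grab it from a script, a pattern match like the following works; the sample line below is illustrative stand-in text, not actual photon output:

```shell
# Extract a UUID-shaped Task ID from a line of CLI output.
# The sample text is a stand-in; substitute the real `photon vm create` output.
sample='Task 2a47cafb-1f2e-4c11-9e51-0d2a3b4c5d6e failed: NotEnoughMemoryResource'
task_id=$(printf '%s\n' "$sample" | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
echo "$task_id"
```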

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter field

3. Change the Time Range to Last Hour of Data

4. Click the Search icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional "Platform 2" environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
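A minimal sketch of that flow (the repo URL and build target are assumptions based on the standard upstream Kubernetes layout; the clone and build lines are commented out because the lab has no external internet access):

```shell
# Sketch only: clone/build steps commented out (no internet in the lab).
# git clone https://github.com/kubernetes/kubernetes.git
# cd kubernetes && make quick-release
# Point kube-up at the Photon Platform deployment scripts via this variable:
export KUBERNETES_PROVIDER=photon-controller
# ./cluster/kube-up.sh
echo "$KUBERNETES_PROVIDER"
```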

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
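The lab displays these files with cat but does not reproduce them in the manual. Here is an illustrative replication-controller manifest in the same spirit (the names, labels, and image are hypothetical, not the contents of the lab's actual nginx-rc.yaml):

```shell
# Write an illustrative RC manifest declaring 3 replicas of an nginx Pod,
# then confirm the replica count made it into the file.
cat <<'EOF' > /tmp/nginx-rc-sketch.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
grep -c 'replicas: 3' /tmp/nginx-rc-sketch.yaml
```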

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
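Since the three creates differ only in the file name, they can be scripted. This sketch only echoes each command so it is safe to run anywhere; drop the echo on the CLI VM to actually execute them:

```shell
# Print the kubectl create command for each of the application's three manifests.
for f in nginx-pod.yaml nginx-service.yaml nginx-rc.yaml; do
  echo "kubectl create -f ~/demo-nginx/$f"
done
```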

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
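The prefix works because Docker treats everything before the first slash in an image reference as the registry host and port. A quick shell illustration of splitting such a reference:

```shell
# Split a registry-prefixed image reference into its registry and image parts.
image="192.168.120.20:5000/nginx"
registry="${image%%/*}"   # "192.168.120.20:5000" -- the local lab registry
name="${image#*/}"        # "nginx" -- the image name within that registry
echo "$registry $name"
```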

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state

3. Execute docker rm -vf rancher-agent

4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
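Behind the UI, this step corresponds roughly to a docker run with a published port. The command below is only echoed, and it is an approximation (the container name is hypothetical), not the exact command Rancher issues:

```shell
# -p 2000:80 publishes container port 80 (the nginx default) on host port 2000.
cmd="docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx"
echo "$cmd"
```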

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 75: Lab Overview - HOL-1730-USE-2

2. Select "Select Metrics" again and select one of the ESXi hosts (this is the same hierarchy you saw in Graphite). Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.


Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and login as user admin, password VMware1.

Query For The Create Task

Once you login, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter Field

3. Change the Time Range to Last Hour of Data

4. Click the Search Icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE

2. Find the RequestID and paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), setup the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from github, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.
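To make the Pod and Service concepts concrete, here is a minimal sketch of what such definitions look like. This is an illustration only, assuming hypothetical names, labels, and ports; the lab's actual files may differ:

```yaml
# Hedged sketch only - names, labels and ports are assumptions, not the lab's files.
# A Pod groups related containers that are scheduled together onto one Node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo       # the Service below selects Pods by this label
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
---
# A Service load-balances across every Pod carrying the matching label.
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort          # exposes an external port on each Node
  selector:
    app: nginx-demo
  ports:
    - port: 80
```

The label selector is the glue: the Service does not name Pods directly, it balances across whatever Pods currently carry the matching label.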

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
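As a rough sketch of the shape of the third file, a Replication Controller that maintains 3 copies of an nginx Pod might look like this (names and image are assumptions; the lab's actual nginx-rc.yaml may differ):

```yaml
# Hedged sketch - not the lab's actual file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3             # desired number of Pod copies; failed Pods are replaced
  selector:
    app: nginx-demo       # the set of Pods this controller manages
  template:               # Pod template used to create each replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

The replicas count is a desired state: when you kill a webserver instance later in this module, it is the Replication Controller reconciling against this number that restarts a new container for you.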


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, Connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Opensource container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and Click on the PhotonControllerCLI link
2. Click on Open


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command line. Note: You must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: Either Right Click of the mouse or Ctrl-v, and hit Return

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80; we will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
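For reference, the port mapping configured above is the same thing as an ordinary Docker port publish. Expressed as a docker-compose style fragment (a sketch only; the service name is an assumption, while the registry address and ports are the lab's values):

```yaml
# Hedged sketch of the equivalent docker-compose service definition.
nginx-web:
  image: 192.168.120.20:5000/nginx   # the lab's local registry image
  ports:
    - "2000:80"                      # host port 2000 -> container port 80
```

Requests to the Rancher Host VM on port 2000 are forwarded to port 80 inside the container, which is why you will browse to the host IP with port 2000 in a later step.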


Container Information

1. Once your container is running, check out the performance charts

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU HOL-1730-USE-2

Version 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
HOL-1730-USE-2 - Page 76

Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the Create command. We are going to use that in a LogInsight query.


Connect To Loginsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics.

2. Paste the Task ID into the Filter field.

3. Change the Time Range to Last Hour of Data.

4. Click the Search icon.

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.


Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts as well as high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment, where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5, of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster. We have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
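As a rough sketch of that workflow (the repo URL and build step are assumptions, and the provider name is taken from the lab's photon-controller deployment scripts; this is not runnable inside the lab environment):

```shell
# Clone and build Kubernetes (the exact build target may differ by release)
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

# Tell kube-up to use the Photon Platform deployment scripts,
# then bring the cluster up
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

The environment variable is what redirects kube-up from its default cloud provider to the Photon Platform scripts, so all of the cluster configuration remains in your hands.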

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
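As a hedged illustration of the general shape of these three kinds of files (generic examples for the Kubernetes v1 API, not the lab's exact content; the name nginx-demo and the plain nginx image are assumptions):

```yaml
# nginx-pod.yaml (illustrative): one Pod running an nginx container
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx        # the lab likely uses its local registry image
    ports:
    - containerPort: 80
---
# nginx-service.yaml (illustrative): load balances across Pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort        # exposes an external endpoint port on each Node
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
# nginx-rc.yaml (illustrative): keeps 3 replicas of the Pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The label selector is what ties the three objects together: the Replication Controller maintains Pods carrying the label, and the Service routes traffic to whichever of those Pods are currently running.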


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
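The desired replica count can also be changed from the CLI VM; a hedged sketch (the controller name nginx-demo is an assumption based on the UI, and resources in the lab may be too constrained for extra replicas to schedule):

```shell
# Ask the Replication Controller to converge on 4 copies of the Pod;
# Kubernetes starts or stops Pods until the actual count matches
kubectl scale rc nginx-demo --replicas=4

# Watch the Pods being created or terminated
kubectl get pods
```

This is the same reconciliation loop that restarts a Pod when you kill one: the controller only ever acts to close the gap between desired and observed replicas.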

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1 From your browser

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.
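For reference, the pre-formed command generally has this shape (the agent version tag and registration token below are placeholders, not your lab's values; always paste the exact command from your own Rancher UI, since the token is unique to your environment):

```shell
# Illustrative only - do not run this literal command.
# The agent gets control of the host's Docker daemon via the socket mount,
# and the URL token registers it with your Rancher Server.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.0.2 \
  https://192.168.120.20:8080/v1/scripts/<registration-token>
```

Mounting the Docker socket is what lets the agent start and stop application containers on the host on behalf of Rancher Server.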


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on port 80. We will map it to Host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
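The form you just filled in is roughly equivalent to running the following on the Rancher Host yourself (a hedged sketch; the container name nginx-web is an assumption, and in the lab Rancher issues the actual command for you):

```shell
# Run the lab registry's nginx image detached, publishing
# container port 80 as host port 2000
docker run -d --name nginx-web -p 2000:80 192.168.120.20:5000/nginx
```

The -p 2000:80 flag is the same host-port-to-container-port mapping you configured in the UI, which is why the webserver becomes reachable on the host's IP at port 2000.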


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address; this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
Troubleshooting Photon Platform With LogInsight

We will try to create a VM that needs more resource than is available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information.

1. Execute the following command:

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w <UUID of your Network> -i <UUID of your PhotonOS image>

The cluster-master-vm flavor will try to create a VM with 8GB of memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2. Note the Task ID from the create command. We are going to use that in a LogInsight query.

Connect To LogInsight

1. From your browser, select the LogInsight bookmark from the toolbar and log in as user admin, password VMware1.

Query For The Create Task

Once you log in, you will see the Dashboard screen.

1. Click on Interactive Analytics

2. Paste the Task ID into the Filter field

3. Change the Time Range to Last Hour of Data

4. Click the Search icon

You can look through these task results to find an error. More interesting is looking through RequestIDs.

5. In Photon Platform, every request through the API gets a RequestID. There could be many RequestIDs that are relevant to a task, and it takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the task. So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request.

Browse The Logs For Interesting Task Error Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE

2. Find the RequestID and paste it into the Filter field

Your log files will be slightly different, but you should see something similar.

Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root cause the initial task failure. This is especially useful as the scale of your system grows.

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
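The clone-and-build flow just described can be sketched in a few commands. `KUBERNETES_PROVIDER` is the standard way kube-up selects a set of deployment scripts; the value `photon-controller` is an assumption based on the configuration directory named later in this module, so verify it against the repo you clone.

```shell
# Select the Photon Platform deployment scripts for kube-up.
# (KUBERNETES_PROVIDER is kube-up's standard provider selector; the value
# here is an assumption matching this lab's cluster/photon-controller directory.)
export KUBERNETES_PROVIDER=photon-controller
echo "kube-up provider: ${KUBERNETES_PROVIDER}"

# Then, from a clone of the Kubernetes repository (requires outside network
# access, which this lab environment does not have):
#   git clone https://github.com/kubernetes/kubernetes.git
#   cd kubernetes
#   ./cluster/kube-up.sh
```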

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files: one each for the Pod, the Replication Controller, and the Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
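For context while you browse these files, here is a minimal sketch of what a replication controller definition of this shape typically looks like. The field names follow the Kubernetes v1 API, but the metadata names, labels, and image below are illustrative assumptions, not the contents of the lab's actual nginx-rc.yaml.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo            # illustrative; the lab's file may differ
spec:
  replicas: 3                 # desired number of pod copies to maintain
  selector:
    app: nginx-demo           # pods carrying this label count toward replicas
  template:                   # pod template used when creating replacements
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80   # nginx's default listen port
```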

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
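A registry-qualified image name like the one above splits as registry-host:port/repository; Docker uses the host segment to decide where to pull from instead of Docker Hub. A small sketch (the registry address is this lab's; everything else is generic Docker naming):

```shell
# <registry-host>:<port>/<repository> - the host:port prefix routes the pull
# to a private registry rather than Docker Hub.
REGISTRY="192.168.120.20:5000"   # the lab's local registry
REPO="nginx"
IMAGE="${REGISTRY}/${REPO}"
echo "${IMAGE}"                  # -> 192.168.120.20:5000/nginx

# Pulling by hand would look like:
#   docker pull 192.168.120.20:5000/nginx
```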

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers

2. Click on Add Container

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
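The port map in step 4 is the UI equivalent of Docker's publish flag. A hedged sketch of what the mapping amounts to (the docker run line shows how you would start the same container by hand, not necessarily the exact command Rancher issues):

```shell
# Rancher's port map corresponds to docker run's -p <host-port>:<container-port>.
# Starting the same container manually would look roughly like:
#   docker run -d -p 2000:80 192.168.120.20:5000/nginx
# The webserver is then reached at the host's IP plus the host-side port:
HOST_IP="192.168.100.201"   # the Rancher host VM used in this lab
HOST_PORT=2000              # host side of the 2000:80 mapping
echo "http://${HOST_IP}:${HOST_PORT}"   # -> http://192.168.100.201:2000
```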

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 78: Lab Overview - HOL-1730-USE-2

Connect To Loginsight

1 From Your browser select the LogInsight Bookmark from the toolbar and loginas User admin password VMware1

Query For The Create Task

Once you Login you will see the Dashboard screen

1 Click on Interactive Analytics

2 Paste the Task ID into Filter Field

3 Change the Time Range to Last Hour of Data

4 Click the Search Icon

You can look through these task results to find an error More interesting is lookingthrough RequestIDs

5 In Photon Platform every Request through the API gets a requestID Therecould be many ReqIDs that are relevant to a task It takes a little work to see the rightentries to drill into For instance this entry shows an error but the RequestID is relatedto querying the CloudStore for the Task So you see the Create VM task itself was in

HOL-1730-USE-2

Page 78HOL-1730-USE-2

error but the RequestID is for a request that was successful (querying the task info) Sowe need to scroll for a more interesting request

HOL-1730-USE-2

Page 79HOL-1730-USE-2

Browse The Logs For Interesting Task Error Then FindRequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

HOL-1730-USE-2

Page 80HOL-1730-USE-2

Search The RequestID For RESERVE_RESOURECE

Once you click on the Search Icon you will see log hits for that RequestID These areactual requests made by the Photon Controller Agent Running on the ESXi hosts In thiscase the Agent Request Errors were surfaced to the task level so there isnt a lot ofadditional information but that is not always true In many instances the requestID willprovide new data to root cause the initial Task Failure This is especially useful as thescale of your system grows


Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. It is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the Nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
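In outline, that workflow looks like the sketch below. The repository URL and build target are assumptions for illustration; KUBERNETES_PROVIDER is the environment variable kube-up uses to select a deployment provider:

```shell
# Sketch only; exact repo, branch, and build steps may differ in the lab.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release                      # build Kubernetes from source

# Select the Photon Controller deployment scripts, then bring the cluster up.
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```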

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
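As a hedged illustration of how these pieces fit together (the actual manifests in ~/demo-nginx may use different names, labels, and images), a Replication Controller and Service for this app might look roughly like:

```yaml
# nginx-rc.yaml (sketch): keep 3 replicas of the nginx Pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# nginx-service.yaml (sketch): load-balance across the Pods and expose
# an external endpoint port on each Node
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
  - port: 80
```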


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.
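Changing the replica count is a single kubectl command; the RC name nginx-demo here is an assumption taken from the UI, so substitute the name defined in your nginx-rc.yaml:

```shell
# Ask the Replication Controller to maintain 4 copies instead of 3
kubectl scale rc nginx-demo --replicas=4

# Watch the extra pod get scheduled onto one of the Worker nodes
kubectl get pods
```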

We can connect to the application directly through the Node IP and the port number wesaw earlier


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the Node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
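For context, the container definition above is roughly equivalent to the following docker run on the host (this is what Rancher issues for you; the image name and host port are the ones used in this lab):

```shell
# Pull from the lab-local registry and map host port 2000 to container port 80
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```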


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.
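Behind the scenes, each catalog entry is a template built from docker-compose (and rancher-compose) files supplied by the vendor. A minimal, purely illustrative docker-compose side of an entry might look like:

```yaml
# docker-compose.yml (illustrative catalog entry with two services)
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"          # host:container port mapping
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```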


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 79: Lab Overview - HOL-1730-USE-2

error but the RequestID is for a request that was successful (querying the task info) Sowe need to scroll for a more interesting request

HOL-1730-USE-2

Page 79HOL-1730-USE-2

Browse The Logs For Interesting Task Error Then FindRequestID

1 Scroll down in the Log and look for RESERVE_RESOURCE

2 Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different but you should see something similar

HOL-1730-USE-2

Page 80HOL-1730-USE-2

Search The RequestID For RESERVE_RESOURECE

Once you click on the Search Icon you will see log hits for that RequestID These areactual requests made by the Photon Controller Agent Running on the ESXi hosts In thiscase the Agent Request Errors were surfaced to the task level so there isnt a lot ofadditional information but that is not always true In many instances the requestID willprovide new data to root cause the initial Task Failure This is especially useful as thescale of your system grows

HOL-1730-USE-2

Page 81HOL-1730-USE-2

ConclusionThe operational model for Cloud Native infrastructure is dramatically different fromtraditional platform 2 kinds of environments The expectation is that the control planewill be highly scalable supporting both large numbers of physical hosts as well as highchurn-transient work loads The application frameworks handling applicationprovisioning and availability removing that requirement from the infrastructure Theapplications are very dynamic and infrastructure must be consumable throughprogrammatic methods rather than traditional Admin Interfaces In this module youhave been introduced to Photon Platform Multi-tenancy and its associated model formanaging resources at scale You have also seen the API consumed in this instancethrough the Command Line Interface You have also seen how storage persistence inthe infrastructure can add value to Microservice applications that take advantage ofDocker containers Finally you have been exposed to monitoring and troubleshooting ofthis distributed environment

HOL-1730-USE-2

Page 82HOL-1730-USE-2

Module 3 - ContainerOrchestration

Frameworks with PhotonPlatform (45 minutes)

HOL-1730-USE-2

Page 83HOL-1730-USE-2

IntroductionThis module provides an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands (as seen in the previous module) but through the use of higherlevel frameworks that provide orchestration of the entire application Orchestrationcould include application deployment restart on failure as well as updownscaling ofapplications instances In this module you will focus on container frameworks thatmanage micro service applications running on Photon Platform You will build anddeploy a simple web application using Opensource Kubernetes and Docker You willalso see how orchestration at scale can be administered through a tool like Rancher

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform Youwill see the process for deploying Opensource Kubernetes on Photon Platform but Dueto timing and resource constraints in the lab we could not create it as part of the labYou will deploy the Nginx Webserver application (Manually deployed in Module Two) viaKubernetes You will verify that multiple instances have been deployed and see how toscale additional instances You will kill an instance of the webserver and see thatkubernetes detects the failure and restarts a new container for you

2) Container Orchestration with Rancher on Photon Platform

Rancher is another Opensource Container management platform You will see howthe Rancher UI allows you to provision Docker-Machine nodes on Photon platform anddeploy will then deploy an Nginx Webserver onto the Docker hosts Rancher providesthat higher level container orchestration and takes advantage of the resource andtenant isolation provided by the underlying Photon Platform

HOL-1730-USE-2

Page 84HOL-1730-USE-2

Container Orchestration WithKubernetes on Photon PlatformWe have provided a small Kubernetes cluster deployed on Photon Platform You will seethe process for deploying Opensource Kubernetes on Photon Platform but due to timingand resource constraints in the lab we could not create it as part of the lab You willdeploy the NginxRedis application (Manually deployed in Module Two) via KubernetesYou will verify that multiple instances have been deployed and see how to scale

additional instances You will kill an instance of the webserver and see that kubernetesdetects the failure and restarts a new container for you Also troubleshoot the outagevia LogInsight

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters The firstmethod is an opinionated deployment where we have pre-defined all of the elements ofthe deployment We will briefly look at the CLI commands to support this

1) From the Windows Desktop login to the PhotonControllerCLI VM SSH key login hasbeen enabled but if you have a problem the password is vmware

HOL-1730-USE-2

Page 85HOL-1730-USE-2

Photon Cluster Create Command

The CLI supports a Cluster Create command This command allows you to specify thecluster type (Kubernetes Mesos Swarm are currently supported) and size of the clusterYou will also provide additional IP configuration information Photon Platform will

Create the Master and Worker node VMs configure the services (for Kubernetes in thisexample) setup the internal networking and provide a running environment with asingle command We are not going to use this method in the lab If you try to create aCluster you will get an error because there is not enough resource available to createmore VMs

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.
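To make the flags easier to read, here is the same example broken across lines with placeholder values. This is a sketch for orientation only; the values shown are not lab-specific, and you would substitute your own IPs and network UUID:

```shell
# Sketch: photon cluster create, one flag per concern (placeholder values).
# -n: cluster name; -k: cluster type (KUBERNETES | MESOS | SWARM)
# --master-ip / --etcd1: static IPs for the Kubernetes Master and etcd VMs
# --container-network: the private network Flannel uses for containers
# -w: UUID of the network on which to place the VMs; -s: number of Worker nodes
photon cluster create -n Kube5 -k KUBERNETES \
  --dns "<dns-server>" --gateway "<gateway>" --netmask "<netmask>" \
  --master-ip "<kube-master-ip>" --etcd1 "<etcd-static-ip>" \
  --container-network "<container-network-cidr>" \
  -w "<network-uuid>" -s 5
```

Running this in the lab would fail as noted above, since there is not enough resource available to create more VMs.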

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided on the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point, so we have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
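As a sketch, that flow looks roughly like the following. The provider name and script path are assumptions based on the open-source Kubernetes tree of this era, and may differ in your checkout:

```shell
# Clone and build open-source Kubernetes (with Photon Platform support)
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release

# Point kube-up at the Photon Controller deployment scripts
# (the provider name below is an assumption), then bring the cluster up
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

Configuration (node counts, images, networking) is driven by the config files under the provider directory, which you will look at next.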

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command-line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 YAML files: one each for the Pod, the Replication Controller, and the Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
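For orientation, files like these typically look something like the sketch below. This is an illustrative assumption, not the lab's exact content; the names, labels, and image path in your files may differ:

```yaml
# nginx-rc.yaml (sketch): a Replication Controller maintaining 3 nginx replicas
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# nginx-service.yaml (sketch): a Service load-balancing across the replicas,
# exposed on an externally reachable port of each node
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
  - port: 80
```

The Pod file (nginx-pod.yaml) defines a standalone Pod with a container spec similar to the template section above.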

HOL-1730-USE-2

Page 90HOL-1730-USE-2

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the Pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
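Once the objects are created, standard kubectl commands let you verify the deployment, scale it, and watch Kubernetes replace a killed instance, as promised in the module introduction. The object name nginx-demo comes from the UI section below; the pod name is a placeholder you would read from the get pods output:

```shell
# Verify what was deployed
kubectl get pods
kubectl get rc
kubectl get svc

# Scale the Replication Controller to more replicas
kubectl scale rc nginx-demo --replicas=4

# Kill one instance; the Replication Controller detects the failure
# and starts a replacement to restore the desired count
kubectl delete pod <one-of-the-nginx-pod-names>
kubectl get pods
```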


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ (sorry about the randomly generated password). You may get an invalid certificate authority error; click Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port-number>. Note that your port number may be different from the port number in the lab manual; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.
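For reference, the command the UI generates generally has the shape below. The agent version, server address, and registration token are placeholders specific to your environment, so use the UI's exact command rather than this sketch:

```shell
# General shape of a Rancher 1.x agent registration command (placeholders only).
# The agent needs the Docker socket and Rancher state directory mounted,
# plus the per-host registration URL issued by your Rancher Server.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> \
  https://<rancher-server>:8080/v1/scripts/<registration-token>
```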

1. Paste with either a right-click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
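What the UI does for this single-container service is roughly equivalent to running the container by hand on the host. A sketch, assuming the lab's local registry image and the port mapping chosen above (the container name here is arbitrary):

```shell
# Roughly what Rancher schedules for this service:
# image from the lab's local registry, host port 2000 -> container port 80
docker run -d --name <your-container-name> \
  -p 2000:80 \
  192.168.120.20:5000/nginx
```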


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



Browse The Logs For Interesting Task Error, Then Find RequestID

1. Scroll down in the log and look for RESERVE_RESOURCE.

2. Find the RequestID and paste it into the Filter field.

Your log files will be slightly different, but you should see something similar.


Search The RequestID For RESERVE_RESOURCE

Once you click on the Search icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent request errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances the RequestID will provide new data to root-cause the initial task failure. This is especially useful as the scale of your system grows.


Conclusion

The operational model for cloud native infrastructure is dramatically different from traditional Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance through the command-line interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands (as seen in the previous module), but through the use of higher-level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes, verify that multiple instances have been deployed, and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

HOL-1730-USE-2

Page 84HOL-1730-USE-2

Container Orchestration WithKubernetes on Photon PlatformWe have provided a small Kubernetes cluster deployed on Photon Platform You will seethe process for deploying Opensource Kubernetes on Photon Platform but due to timingand resource constraints in the lab we could not create it as part of the lab You willdeploy the NginxRedis application (Manually deployed in Module Two) via KubernetesYou will verify that multiple instances have been deployed and see how to scale

additional instances You will kill an instance of the webserver and see that kubernetesdetects the failure and restarts a new container for you Also troubleshoot the outagevia LogInsight

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes Clusters The firstmethod is an opinionated deployment where we have pre-defined all of the elements ofthe deployment We will briefly look at the CLI commands to support this

1) From the Windows Desktop login to the PhotonControllerCLI VM SSH key login hasbeen enabled but if you have a problem the password is vmware

HOL-1730-USE-2

Page 85HOL-1730-USE-2

Photon Cluster Create Command

The CLI supports a Cluster Create command This command allows you to specify thecluster type (Kubernetes Mesos Swarm are currently supported) and size of the clusterYou will also provide additional IP configuration information Photon Platform will

Create the Master and Worker node VMs configure the services (for Kubernetes in thisexample) setup the internal networking and provide a running environment with asingle command We are not going to use this method in the lab If you try to create aCluster you will get an error because there is not enough resource available to createmore VMs

Example photon cluster create -n Kube5 -k KUBERNETES --dns ldquodns-Serverrdquo --gatewayldquoGatewayrdquo --netmask ldquoNetmaskrdquo --master-ip ldquoKubermasterIPrdquo --container-networkldquoKubernetesContainerNetworkrdquo --etcd1 ldquoStaticIPrdquo -w ldquouuid demo networkrdquo -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes We arespecifying the networking configuration for the Kuberetes Master VM and a separateetcd VM (etcd is a backing datastore that holds networking information used by Flannelinternal to Kubernetes) The Worker node VMs will receive IPs from DHCP You willspecify the network on which to place these VMs through the -w option and -s is thenumber of Worker nodes in the cluster The Kubernetes container network is a privatenetwork that is used by Flannel to connect Containers within the Cluster

1 To see the command syntax Execute

photon cluster create -h

HOL-1730-USE-2

Page 86HOL-1730-USE-2

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command This is an easy way to get a clusterup and running very quickly and also provides capability to scale it up as neededAwesome for a large number of use cases but you probably noticed that there is no

way to customize it beyond the parameters provided in the command line What if youwant a different version of Kubernetes or Docker within the VMs How about replacingFlannel with NSX for networking or using a different Operating System in the NodesThese are not easily done with Cluster Create at this point We have provided a

second option for creating the cluster We have modified Open Source Kubernetesdirectly to support Photon Platform

Your process for deploying the cluster is to clone the Kubernetes Repo from github buildit and run the kube-up command while passing in the environment variable that tells itto use our deployment scripts This allows you complete freedom to configure thecluster however you want

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes You arewelcome to take a look at the configuration files in ~kubernetesclusterphoton-controller You can look through the config-default and config-common files to see howsome of the configuration is done

1 Lets take a look at the VMs that make up our cluster Execute

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster For details on tenantsand projects return to module 1

2 To set our kube project Execute

photon project set kube-project

3 To see our VMs Execute

photon vm list

HOL-1730-USE-2

Page 87HOL-1730-USE-2

You can see that our cluster consists of one Master VM and 2 Worker VMs Kuberneteswill create Pods that are deployed as Docker containers within the Worker VMs

HOL-1730-USE-2

Page 88HOL-1730-USE-2

Basic Introduction To Kubernetes Application Components

Before we deploy the app let get a little familiarity with Kubernetes concepts This is notmeant to be a Kubernetes tutorial but to get you familiar with the pieces of ourapplication A node represents the Worker nodes in our Kubernetes Cluster

Kubernetes has a basic unit of work called a Pod A Pod is a group of related containersthat will be deployed to a single Node you can generally think of a Pod as the set ofcontainers that make up an application You can also define a Service that acts as aLoad Balancer across a set of containers Lastly Replication Controllers facilitatereplicated pods and are responsible for maintaining the desired number of copies of aparticular Pod In our application you will deploy 3 replicated copies of the NginxWebserver with a frontend Service The command line utility for managing Kubernetesis called kubectl Lets start by looking at the nodes

1 From the CLI VM Execute

kubectl get nodes

You will see the two worker nodes associated with our cluster This is slightly differentfrom seeing the VMs that the nodes run on as you did previously

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files One for each of the Pod ReplicationController and Service These files provide the configuration Kubernetes uses to deployand maintain the application

To look at these configuration files

1 Execute

HOL-1730-USE-2

Page 89HOL-1730-USE-2

cat ~demo-nginxnginx-podyaml

2 Execute

cat ~demo-nginxnginx-serviceyaml

3 Execute

cat ~demo-nginxnginx-rcyaml

HOL-1730-USE-2

Page 90HOL-1730-USE-2

Kubectl To Deploy The App

We are now going to deploy the application From the CLI VM

1 To deploy the pod Execute

kubectl create -f ~demo-nginxnginx-podyaml

2 To deploy the service Execute

kubectl create -f ~demo-nginxnginx-serviceyaml

3 To deploy the Replication Controller Execute

kubectl create -f ~demo-nginxnginx-rcyaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:&lt;port number&gt;. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To PhotonControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill &lt;ContainerID&gt;. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
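For comparison, the same container could be described in docker-compose format; this is only a hypothetical sketch using the image and port mapping configured above, with a made-up service name:

```yaml
# Hypothetical docker-compose-style sketch of the container configured above.
# The service name "nginx-web" is illustrative; image and port mapping
# mirror the lab values.
nginx-web:
  image: 192.168.120.20:5000/nginx
  ports:
    - "2000:80"   # host port 2000 -> container port 80
```

The port mapping line expresses the same host-port-2000 to container-port-80 relationship you just configured in the Rancher UI.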


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606





HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 82: Lab Overview - HOL-1730-USE-2

Conclusion

The operational model for Cloud Native infrastructure is dramatically different from traditional, Platform 2 kinds of environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional admin interfaces. In this module you have been introduced to Photon Platform multi-tenancy and its associated model for managing resources at scale. You have also seen the API consumed, in this instance, through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.

HOL-1730-USE-2

Page 82HOL-1730-USE-2

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)


Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module) but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and you will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands to support this.

1) From the Windows desktop, log in to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.


Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube-project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
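As a rough idea of what such manifests contain, here is a generic sketch of a Replication Controller and Service for a 3-replica Nginx deployment. This is illustrative only, not the lab's actual files; the names, labels and ports are assumptions:

```yaml
# Illustrative sketch of an nginx-rc.yaml: a Replication Controller
# that keeps 3 copies of an Nginx Pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3               # desired number of Pod copies
  selector:
    app: nginx-demo         # Pods this controller manages
  template:                 # Pod definition to replicate
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# Illustrative sketch of an nginx-service.yaml: load balances
# across the Pods selected by the label above
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort            # exposes an external endpoint port on each Node
  selector:
    app: nginx-demo
  ports:
  - port: 80
```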


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the Pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the Service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 Nodes. 3 replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses, with the port number shown earlier, to see our Nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute: docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute: docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute: !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
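The exact command lives in the VM's shell history, but for orientation, launching Rancher Server 1.x generally takes the form below (per Rancher's documentation). The registry prefix comes from the lab's local registry noted above; the rest is a generic sketch, not the lab's literal history entry:

```shell
# Generic Rancher 1.x server launch (illustrative; the lab's actual
# command is the one stored at history entry 885)
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```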


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute: ssh root@192.168.100.201. The password is vmware.
2. Execute: rm -rf /var/lib/rancher/state
3. Execute: docker rm -vf rancher-agent
4. Execute: docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20; you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Cut the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.
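For orientation only, the pre-formed command from the Rancher UI generally has the shape below (based on Rancher 1.x documentation). The angle-bracket values are placeholders; do not type this, paste your own copied command, since the registration token is unique to your setup:

```shell
# General shape of the Rancher agent registration command (Rancher 1.x);
# <version>, <server-ip> and <token> are placeholders, not lab values
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<version> http://<server-ip>:8080/v1/scripts/<token>
```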

1. Execute: either right click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute: docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Index.html
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 83: Lab Overview - HOL-1730-USE-2

Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

HOL-1730-USE-2

Introduction

This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using open-source Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher.

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open-source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module 2) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem the password is vmware.

Photon Cluster Create Command

The CLI supports a cluster create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon cluster create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with cluster create at this point. We have provided a second option for creating the cluster: we have modified open-source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
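That workflow can be sketched roughly as follows (a sketch only; the repo URL, provider name, and script path follow the general kube-up convention and may differ in this lab):

```shell
# Rough sketch of the kube-up workflow -- not the lab's exact steps.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

# Environment variable selecting the Photon Platform deployment scripts
# (provider name assumed here)
export KUBERNETES_PROVIDER=photon-controller

# Build and bring up the cluster using the provider's scripts
./cluster/kube-up.sh
```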

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one for each of the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
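For orientation only (these are not the lab's actual files; names and images are illustrative), a replication controller manifest of this era typically looks like:

```yaml
# Illustrative sketch only -- the lab's nginx-rc.yaml may differ.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # desired number of Pod copies
  selector:
    app: nginx-demo      # which Pods this controller manages
  template:              # Pod template used to create the replicas
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```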

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
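After the three create calls, you can sanity-check what Kubernetes built (a sketch; actual output depends on your cluster):

```shell
kubectl get pods   # the nginx Pods and their status
kubectl get rc     # the Replication Controller and its replica count
kubectl get svc    # the frontend Service and its exposed port
```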

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:portnumber. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker registry that is used to serve our lab's images.
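For reference, the replayed history entry is typically a docker run of the Rancher Server image from that registry, something like the following (a hedged sketch; the exact tag and options come from the lab's saved history, not from this manual):

```shell
# Hypothetical recreation of the history entry -- options are illustrative
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```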

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image; the registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
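If you prefer the command line, the same check can be done from the CLI VM with curl (assuming the host IP and mapped port above):

```shell
# Fetch the default Nginx page through the mapped host port
curl http://192.168.100.201:2000
```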

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 84: Lab Overview - HOL-1730-USE-2

IntroductionThis module provides an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands (as seen in the previous module) but through the use of higherlevel frameworks that provide orchestration of the entire application Orchestrationcould include application deployment restart on failure as well as updownscaling ofapplications instances In this module you will focus on container frameworks thatmanage micro service applications running on Photon Platform You will build anddeploy a simple web application using Opensource Kubernetes and Docker You willalso see how orchestration at scale can be administered through a tool like Rancher

1) Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2) Container Orchestration with Rancher on Photon Platform

Rancher is another open source container management platform. You will see how the Rancher UI allows you to provision Docker Machine nodes on Photon Platform, and will then deploy an Nginx webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

HOL-1730-USE-2

Page 84

Container Orchestration With Kubernetes on Photon Platform

We have provided a small Kubernetes cluster deployed on Photon Platform. You will see the process for deploying open source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab we could not create it as part of the lab. You will deploy the Nginx webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.

Kubernetes Deployment On Photon Platform

Photon Platform provides two methods for deploying Kubernetes clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.

1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware.

Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, Execute:

photon cluster create -h

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. Awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different operating system in the nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
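A rough sketch of that workflow, assuming the upstream repo layout from this era of Kubernetes (the exact branch, build target, and provider variable are worth verifying against the lab environment):

```shell
# Clone and build open source Kubernetes (the branch/tag may differ in the lab)
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release

# Point kube-up at the Photon Controller deployment scripts,
# then bring the cluster up with full control over its configuration
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh
```

The provider's settings live in the config-default and config-common files mentioned below, which is where customization such as the Kubernetes version, node OS image, and networking happens.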

Our Lab Kubernetes Cluster Details

We have created a Kubernetes cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube-tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, Execute:

photon project set kube-project

3. To see our VMs, Execute:

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a load balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, Execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
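The lab's actual files may differ, but a minimal nginx-rc.yaml for 3 replicas would look roughly like this (all names and labels here are illustrative):

```yaml
# Illustrative sketch only; the lab's nginx-rc.yaml may differ
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # desired number of Pod copies
  selector:
    app: nginx-demo      # Pods matching this label are managed
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The selector ties the controller to its Pods: if a Pod carrying the label dies, the controller starts a replacement to get back to the desired replica count.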

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, Execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, Execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, Execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
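Before moving to the UI, you can sanity-check the deployment from the same CLI VM; these commands assume the replication controller defined above is named nginx-demo:

```shell
# List the Pods and the node each one landed on
kubectl get pods -o wide

# Confirm the replication controller is maintaining 3 replicas
kubectl get rc

# Scaling later is a one-liner (e.g., from 3 to 5 replicas)
kubectl scale rc nginx-demo --replicas=5
```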

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
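The exact history entry is specific to this lab, but launching Rancher Server from the local registry generally looks something like this (the restart policy and image tag here are assumptions):

```shell
# Illustrative only; the lab's saved command may use different options
docker run -d --restart=always -p 8080:8080 \
    192.168.120.20:5000/rancher/server
```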

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right click of the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
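The Name, Image, and Port Map fields you just filled in correspond to what you did by hand in Module Two; on plain Docker the equivalent would be roughly:

```shell
# Host port 2000 -> container port 80, pulling from the lab's local registry
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```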

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


HOL-1730-USE-2

Page 98HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1 From your browser

Connect to https192168120208080 and then click Add Host

2 If you get this page just click Save

HOL-1730-USE-2

Page 99HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
  • Module 1 - What is Photon Platform (15 minutes)
    • Introduction
    • What is Photon Platform - How Is It Different From vSphere?
      • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap, Not all are implemented in the Pre-GA Release)
    • Cloud Administration - Multi-Tenancy and Resource Management
      • Connect To Photon Platform Management UI
      • Photon Controller Management UI
      • The Control Plane Resources
      • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
      • Control Plane Services
      • Cloud Resources
      • Tenants
      • Our Kubernetes Tenant
      • Kube-Tenant Detail
      • Kube-Project Detail
      • Kube Tenant Resource-Ticket
      • Create Resource-Ticket
    • Cloud Administration - Images and Flavors
      • Images
      • Kube-Image
      • Flavors
      • Kube-Flavor
      • Ephemeral Disk Flavors
      • Persistent Disk Flavors
    • Conclusion
      • You've finished Module 1
      • How to End Lab
  • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    • Introduction
    • Multi-Tenancy and Resource Management in Photon Platform
      • Login To CLI VM
      • Verify Photon CLI Target
      • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
      • Photon CLI Overview
      • Photon CLI Context Help
      • Create Tenant
      • Create Resource Ticket
      • Create Project
    • Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
      • View Images
      • View Flavors
      • Create New Flavors
      • Create Networks
      • Create VM
      • Create a Second VM
      • Start VM
      • Show VM details
      • Stop VM
      • Persistent Disks
      • Attach Persistent Disk To VM
      • Show VM Details
    • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
      • Deploy Nginx Web Server
      • Connect to lab-vm1
      • Setup filesystem
      • Create The Nginx Container With Docker Volume
      • Verify Webserver Is Running
      • Modify Nginx Home Page
      • Edit The Index.html
      • Detach The Persistent Disk
      • Attach The Persistent Disk To New VM
      • Start and Connect to lab-vm2
      • Setup Filesystem
      • Create The New Nginx Container
      • Verify That Our New Webserver Reflects Our Changes
      • Clean Up VMs
    • Monitor and Troubleshoot Photon Platform
      • Enabling Statistics and Log Collection
      • Monitoring Photon Platform With Graphite Server
      • Expand To View Available Metrics
      • No Performance Data in Graphite
      • View Graphite Data Through Grafana
      • Graphite Data Source For Grafana
      • Create Grafana Dashboard
      • Add A Panel
      • Open Metrics Panel
      • Add Metrics To Panel
      • Troubleshooting Photon Platform With LogInsight
      • Connect To Loginsight
      • Query For The Create Task
      • Browse The Logs For Interesting Task Error Then Find RequestID
      • Search The RequestID For RESERVE_RESOURECE
    • Conclusion
  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    • Introduction
    • Container Orchestration With Kubernetes on Photon Platform
      • Kubernetes Deployment On Photon Platform
      • Photon Cluster Create Command
      • Kube-Up On Photon Platform
      • Our Lab Kubernetes Cluster Details
      • Basic Introduction To Kubernetes Application Components
      • Deploying An Application On Kubernetes Cluster
      • Kubectl To Deploy The App
      • Kubernetes UI Shows Our Running Application
      • Application Details
      • Your Running Pods
      • Connect To Your Application Web Page
    • Container Orchestration With Docker Machine Using Rancher on Photon Platform
      • Login To Photon Controller CLI VM
      • Deploy Rancher Server
      • Clean Up Rancher Host
      • Connect To Rancher UI
      • Add Rancher Host
      • Paste In The Docker Run Command To Start Rancher Agent
      • View the Agent Container
      • Verify New Host Has Been Added
      • Deploy Nginx Webserver
      • Configure Container Info
      • Container Information
      • Open Your Webserver
      • Rancher Catalogs
    • Conclusion
  • Conclusion
Photon Cluster Create Command

The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (for Kubernetes in this example), set up the internal networking, and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a cluster, you will get an error because there is not enough resource available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubermasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You will specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1. To see the command syntax, execute:

photon cluster create -h


Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale it up as needed. That is awesome for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided on the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application you will deploy 3 replicated copies of the Nginx webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller, and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
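If you want a preview before running the cat commands, the three files follow the standard Kubernetes v1 manifest formats. The following is only a sketch of their likely shape; the names, labels, and image path are assumptions, not copied from the lab files:

```yaml
# nginx-pod.yaml (sketch) - a single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
# nginx-service.yaml (sketch) - load balances across Pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort        # exposes an external port on each Node
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
# nginx-rc.yaml (sketch) - keeps 3 replicas of the Pod template running
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The Service's selector is what ties it to the Pods: any Pod carrying the matching label is added to the load-balanced set, whether it was created directly or by the Replication Controller.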


Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
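Before moving to the UI, you can confirm from the CLI that all three objects were created. These are standard kubectl subcommands; the exact object names shown will depend on what the lab's yaml files define:

```shell
# List the Pods created directly and by the Replication Controller
kubectl get pods

# Check that the Replication Controller reports the desired number of replicas
kubectl get rc

# Show the Service and the port it exposed (the "External endpoint" port
# you will see in the Kubernetes UI in the next step)
kubectl get services
```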


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:[port number]. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill [ContainerID]. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must cut/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, then hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
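For reference, the port mapping you just configured in the Rancher UI corresponds to what you would get from a plain Docker run on the host. This is only a sketch of the equivalent command, not what Rancher literally executes (the agent adds its own labels and networking flags):

```shell
# Publish host port 2000 to the container's port 80 and run nginx
# from the lab's local registry, detached in the background
docker run -d -p 2000:80 192.168.120.20:5000/nginx
```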


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Index.html
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To LogInsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon Controller CLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 87: Lab Overview - HOL-1730-USE-2

Kube-Up On Photon Platform

You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster as needed. That is great for a large number of use cases, but you probably noticed that there is no way to customize the cluster beyond the parameters provided in the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point, so we have provided a second option for creating the cluster: we have modified Open Source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want.
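At a high level, that workflow looks something like the sketch below. The repository URL and the `KUBERNETES_PROVIDER` variable are the standard upstream conventions for kube-up; the exact build step and any lab-specific settings are assumptions, not taken from this manual.

```shell
# Sketch only -- illustrates the clone/build/kube-up flow, not the lab's exact commands.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

# Tell kube-up to use the Photon Controller deployment scripts
# (the scripts that live under cluster/photon-controller).
export KUBERNETES_PROVIDER=photon-controller

make quick-release        # assumed build step
./cluster/kube-up.sh      # brings the cluster up using the provider's config files
```

Before running kube-up you would edit the provider's configuration files to change things like the Kubernetes or Docker version, networking, or node OS.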

Our Lab Kubernetes Cluster Details

We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller. You can look through the config-default and config-common files to see how some of the configuration is done.

1. Let's take a look at the VMs that make up our cluster. Execute:

photon tenant set kube-tenant

This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.

2. To set our kube project, execute:

photon project set kube-project

3. To see our VMs, execute:

photon vm list


You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.


Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents the Worker nodes in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:


cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
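For reference, a pod definition of this kind typically looks something like the following. This is a generic sketch, not the lab's actual nginx-pod.yaml; the name, labels, and image reference are illustrative.

```yaml
# Illustrative sketch of a minimal pod manifest (not the lab's actual file).
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # hypothetical name
  labels:
    app: nginx-demo         # the service and replication controller select on labels like this
spec:
  containers:
  - name: nginx
    image: nginx            # the lab likely pulls from its local registry instead
    ports:
    - containerPort: 80     # the port nginx listens on inside the container
```

The service and replication controller files follow the same pattern, differing mainly in `kind` and in the selector/replica settings.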


Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
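If you want to confirm from the CLI what the UI will show in the next step, kubectl can report on each object you just created. These are standard kubectl commands; the exact names in the output depend on the lab's yaml files.

```shell
# Optional CLI check after the three create commands above.
kubectl get pods        # the pod plus the replication controller's replicas
kubectl get rc          # desired vs. current replica counts
kubectl get services    # the frontend service and its exposed port
```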


Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see that the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:[port number]. Note that your port number may be different than the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another Open Source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <Container ID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
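The history entry referenced above is specific to the lab VM, but a Rancher Server container of this era was typically started with a command of roughly the following shape. The registry prefix matches the lab's local registry; the port mapping, flags, and image tag are assumptions, not the lab's exact command.

```shell
# Hypothetical sketch of the kind of command stored in history (not the lab's exact command).
# Pulls the image from the lab's local registry and exposes the Rancher UI on port 8080.
docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server
```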


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers will be deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser:

Connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is the IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
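For context, the port mapping you just configured in the UI is equivalent to what you would express with a plain docker run; Rancher issues something like this on the host for you. The container name here is illustrative.

```shell
# Illustrative equivalent of the UI configuration above (Rancher runs this for you).
# Maps host port 2000 to the container's port 80, using the lab's local registry image.
docker run -d --name nginx-demo -p 2000:80 192.168.120.20:5000/nginx
```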


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher managed network that the containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606



1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

Basic Introduction To Kubernetes Application Components

Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A node represents a Worker node in our Kubernetes Cluster.

Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node; you can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.

1. From the CLI VM, execute:

kubectl get nodes

You will see the two worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.

Deploying An Application On Kubernetes Cluster

Our application is defined through 3 yaml files, one each for the Pod, the Replication Controller, and the Service. These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1. Execute:

cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml
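For orientation before you look at the real files, a minimal replication-controller manifest for this generation of Kubernetes looks roughly like the sketch below. This is illustrative only; every field value here is an assumption, and the lab's actual nginx-rc.yaml may differ.

```yaml
# Illustrative sketch, not the lab's actual file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3            # the 3 copies discussed above
  selector:
    app: nginx-demo      # pods matching this label are managed
  template:              # the Pod template the controller stamps out
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The selector is what ties the controller to its Pods: if a matching Pod dies, the controller creates a replacement from the template until the replica count is met again.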

Kubectl To Deploy The App

We are now going to deploy the application. From the CLI VM:

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your Web Browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port number>. Note that your port number may be different than the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a Micro-Service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>, using the Container ID you found in step 1. This will stop the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history; it will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
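Steps 1 and 2 above can also be combined into one pipeline: let awk pick out the container ID rather than reading it off the screen. The docker ps line below is only a stand-in so the pipeline can be demonstrated by itself; its column layout is illustrative, not captured lab output.

```shell
# Pull the first column (the container ID) out of the docker ps line that
# matches "rancherserver". The sample line is a stand-in, not real output.
sample='f3a9c1b2d4e5   rancher/server   "/usr/bin/entry"   Up 2 hours   rancherserver'
cid=$(printf '%s\n' "$sample" | awk '/rancherserver/ {print $1}')
echo "$cid"
# On the lab VM the same idea is:
#   docker kill "$(docker ps | awk '/rancherserver/ {print $1}')"
```

This avoids copy/paste mistakes with long container IDs.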

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
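If you prefer to review before running destructive commands, the cleanup steps above can be wrapped in a small dry-run script. This is a sketch, not part of the lab: with DRY_RUN=1 each command is only printed; setting DRY_RUN=0 on the Rancher Host VM would actually perform the cleanup.

```shell
#!/bin/sh
# Dry-run wrapper for the Rancher host cleanup steps above.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
run rm -rf /var/lib/rancher/state
run docker rm -vf rancher-agent
run docker rm -vf rancher-agent-state
```

Reviewing the printed commands first is a cheap safety net when a step includes rm -rf.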

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; there is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or use Ctrl-V, then hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy:

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name has the form IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80; we will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
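The image reference you entered in step 2 follows the registry-host:port/image-name convention that Docker uses for private registries. Splitting it apart with shell parameter expansion makes the two halves explicit (illustrative only; this is not a lab step):

```shell
# Split a private-registry image reference into registry address and
# image name using POSIX shell parameter expansion.
img='192.168.120.20:5000/nginx'
registry="${img%%/*}"   # drop everything from the first "/" onward
name="${img#*/}"        # drop everything up to and including the first "/"
echo "$registry $name"  # → 192.168.120.20:5000 nginx
```

Without the registry prefix, Docker would try to pull "nginx" from the public Docker Hub, which this lab cannot reach.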

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
cat ~/demo-nginx/nginx-pod.yaml

2. Execute:

cat ~/demo-nginx/nginx-service.yaml

3. Execute:

cat ~/demo-nginx/nginx-rc.yaml

Kubectl To Deploy The App

We are now going to deploy the application from the CLI VM.

1. To deploy the pod, execute:

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2. To deploy the service, execute:

kubectl create -f ~/demo-nginx/nginx-service.yaml

3. To deploy the Replication Controller, execute:

kubectl create -f ~/demo-nginx/nginx-rc.yaml
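The three create steps above can also be scripted as a single loop. This is a sketch, not part of the lab steps: the `echo` makes it a dry run that only prints each command, so drop the `echo` to actually deploy from the CLI VM.

```shell
# Dry-run sketch of the three deployment steps above.
# Remove the `echo` to execute the commands for real in the CLI VM.
for manifest in nginx-pod.yaml nginx-service.yaml nginx-rc.yaml; do
  echo kubectl create -f "$HOME/demo-nginx/$manifest"
done
```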

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and Proceed to the site.

nginx-demo is your application.

2. Note the port number for the External endpoint. We will use it in a couple of steps.

Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.

Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.
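If you prefer to inspect the replicas from the CLI VM rather than the UI, a few read-only kubectl commands cover the same ground. This is a reference sheet printed by the script below, not a lab step; POD_NAME is a placeholder for one of your pod names.

```shell
# Reference sheet for inspecting the replicas described above.
# POD_NAME is a placeholder for a pod name from `kubectl get pods`.
cat <<'EOF'
kubectl get pods -o wide   # internal IP and node for each replica
kubectl get rc             # replication controller and replica count
kubectl logs POD_NAME      # logs from a single replica
EOF
```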

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:<port-number>. Note that your port number may be different than the lab manual port number; the IP will be the same.
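The same check can be made from the CLI VM with curl instead of the browser. A sketch: PORT_FROM_UI is a placeholder for the external endpoint port you noted earlier, and the command is only printed here as a dry run.

```shell
# Sketch: hit the nginx home page from the command line instead of the browser.
# NODE_PORT is a placeholder; substitute the external endpoint port you noted.
NODE_PORT="${NODE_PORT:-PORT_FROM_UI}"
echo "curl http://192.168.100.176:${NODE_PORT}"
```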

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill <ContainerID>. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
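Steps 1 and 2 above can be combined by extracting the container ID from the `docker ps` output with awk. A sketch: the sample line below stands in for live `docker ps | grep rancherserver` output, so the kill command is only printed, and the ID shown is illustrative.

```shell
# Sketch: pull the container ID out of a `docker ps | grep rancherserver`
# line so steps 1 and 2 above can be scripted.
# The sample line (and its ID) is illustrative, not from the lab.
sample='f3a1b2c4d5e6   rancherserver:latest   "/usr/bin/entry"   2 days ago   Up 2 days'
cid=$(echo "$sample" | awk '{print $1}')   # container ID is the first column
echo "docker kill $cid"
```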

Clean Up Rancher Host

The VM that we will use as a Rancher host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
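The cleanup in steps 2 through 4 can be kept as one small script to run on the Rancher host after you ssh in. A sketch: the commands are printed as a script body here rather than executed, since they only make sense on the lab host itself.

```shell
# Sketch of the cleanup run on the Rancher host VM (after the ssh in step 1):
# remove stale Rancher state and any previous agent containers.
# Printed as a script body here because the commands need the lab host itself.
cat <<'EOF'
rm -rf /var/lib/rancher/state
docker rm -vf rancher-agent
docker rm -vf rancher-agent-state
EOF
```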

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This is where application containers will be deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080, and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
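Under the covers, the form you just filled in amounts to a docker run with a port mapping. A rough equivalent, printed here as a dry run: the container name my-nginx is an example, not a lab requirement, while the image and port mapping come from the steps above.

```shell
# Rough docker-run equivalent of the Rancher container form above:
# local-registry image, host port 2000 mapped to container port 80.
# Printed as a dry run; `my-nginx` is an example name.
echo docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx
```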

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 92: Lab Overview - HOL-1730-USE-2

Kubernetes UI Shows Our Running Application

After you have deployed your application, you can view it through the Kubernetes UI.

1. Open your web browser and enter https://192.168.100.175/ui. If you are prompted for a username and password, they are admin / 4HjyqnFZK4tntbUZ. Sorry about the randomly generated password. You may get an invalid certificate authority error. Click on Advanced and Proceed to the site.

nginx-demo is your application

2. Note the port number for the External endpoint. We will use it in a couple of steps.
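The same port can also be read from the command line instead of the UI (a sketch, assuming kubectl is configured for the lab cluster and the service is named nginx-demo): `kubectl get service nginx-demo` prints a PORT(S) column such as `80:30281/TCP`, where the number after the colon is the external NodePort. Extracting it is a one-line text operation, simulated below on a sample value since the real command needs the lab cluster:

```shell
# Sample PORT(S) value as printed by 'kubectl get service nginx-demo';
# 30281 is a made-up NodePort for illustration.
ports="80:30281/TCP"
# Split on ':' and '/' and take the second field - the external port.
nodeport=$(echo "$ports" | awk -F'[:/]' '{print $2}')
echo "$nodeport"   # prints 30281
```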


Application Details

1. Click on the 3 dots and select View Details to see what you have deployed.


Your Running Pods

You can see the Replication Controller is maintaining 3 replicas. They each have their own internal IP and are running on the 2 nodes. 3 replicas is not particularly useful given that we have only 2 nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the node IP and the port number we saw earlier.
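The same per-pod detail is available from the CLI: `kubectl get pods -o wide` adds IP and NODE columns showing each replica's internal IP and the node it landed on (assuming kubectl access to the lab cluster; pod names below are illustrative, not from the lab). Counting running replicas from that output is a simple grep, simulated here:

```shell
# Simulated 'kubectl get pods' output - three replicas in the Running state.
pods="nginx-demo-1a2b3  1/1  Running  0  5m
nginx-demo-4c5d6  1/1  Running  0  5m
nginx-demo-7e8f9  1/1  Running  0  5m"
# Count lines in the Running state - should match the RC's replica count.
echo "$pods" | grep -c Running   # prints 3
```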


Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:[port number]. Note that your port number may be different from the lab manual port number; the IP will be the same.


Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill [ContainerID]. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
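The kill step above can be collapsed into one line once you know the grep pattern (a sketch, assuming the server image matches rancher/server; run it only inside the lab VM): `docker kill $(docker ps | grep rancher/server | awk '{print $1}')`. The ID extraction itself is plain text processing, shown here against a simulated docker ps line:

```shell
# Simulated 'docker ps' output line (container ID and image are illustrative).
ps_line="f2a3b4c5d6e7  rancher/server:latest  \"/usr/bin/entry\"  Up 2 weeks"
# The container ID is the first whitespace-separated field of the matching line.
cid=$(echo "$ps_line" | grep rancher/server | awk '{print $1}')
echo "$cid"   # prints f2a3b4c5d6e7
```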


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080.

Rancher hosts are VMs running Docker. This is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.
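For reference, the command you paste follows the general shape of Rancher's agent registration commands (a sketch only; the image tag and the registration token are unique to your installation, so always use the exact command from the UI rather than this one). We only assemble the string here, since running it requires the lab's Docker host:

```shell
# <registration-token> is a placeholder the Rancher UI fills in for you.
token="<registration-token>"
# The agent needs the host's Docker socket mounted so it can start containers.
cmd="sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent https://192.168.120.20:8080/v1/scripts/$token"
echo "$cmd" | grep -c "docker.sock"   # prints 1
```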

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
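The Name, Image, and port-map fields correspond directly to Docker CLI flags, so the equivalent manual command would look like `docker run -d --name my-nginx -p 2000:80 192.168.120.20:5000/nginx` (a sketch; the container name is arbitrary). The `-p` value takes the host port first and the container port second, the same order as the UI fields:

```shell
# The UI's port map, expressed as Docker's -p <host>:<container> value.
mapping="2000:80"
host_port=${mapping%%:*}        # text before the colon -> host side
container_port=${mapping##*:}   # text after the colon -> container side
echo "host=$host_port container=$container_port"   # prints host=2000 container=80
```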


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


Page 93: Lab Overview - HOL-1730-USE-2

Application Details

1 Click on the 3 dots and select View Details to see what you have deployed

HOL-1730-USE-2

Page 93HOL-1730-USE-2

Your Running Pods

You can see the Replication Controller is maintaining 3 Replicas They each have theirown internal IP and are running on the 2 Nodes 3 Replicas is not particularly usefulgiven that we have only 2 Nodes but the concept is valid Explore the logs if you areinterested

We can connect to the application directly through the Node IP and the port number wesaw earlier

HOL-1730-USE-2

Page 94HOL-1730-USE-2

Connect To Your Application Web Page

Now lets see what our application does We will choose one of the node IP addresseswith the port number shown earlier to see our nginx webserver homepage Its just asimple dump of the application configuration info

1 From your browser Connect to http192168100176portnumber Notethat your port number may be different than the lab manual port number IP will be thesame

HOL-1730-USE-2

Page 95HOL-1730-USE-2

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and Click on PhotonControllerCLI link2 Click on Open

HOL-1730-USE-2

Page 96HOL-1730-USE-2

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environmentBefore that you need to delete the existing container

1 Execute docker ps | grep rancherserver to see the running container Find theContainer ID for the RancherServer container That is the one we want toremove

2 Execute docker kill ContainerID This will remove the existing RancherServer container

3 Execute 885 This will execute command number 885 stored in Linux historyIt will create a new Docker container

Note that your new container is tagged with 192168120205000 This is the localDocker Registry that is used to serve our labs images

HOL-1730-USE-2

Page 97HOL-1730-USE-2

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs have a fewfiles removed prior to deploying the Rancher Agent

1 Execute ssh root192168100201 The password is vmware2 Execute rm -rf varlibrancherstate3 Execute docker rm -vf rancher-agent4 Execute docker rm -vf rancher-agent-state

HOL-1730-USE-2

Page 98HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1 From your browser

Connect to https192168120208080 and then click Add Host

2 If you get this page just click Save

HOL-1730-USE-2

Page 99HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon Controller CLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 94: Lab Overview - HOL-1730-USE-2

Your Running Pods

You can see that the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested.

We can connect to the application directly through the Node IP and the port number we saw earlier.
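If you prefer the command line, the same check can be sketched in the shell. The kubectl commands below are shown as comments since they need the lab cluster; the NodePort value here is a hypothetical placeholder - use the port shown earlier.

```shell
# In the lab you would list the replicas and the exposed port like this:
#   kubectl get pods -o wide   # shows the 3 replicas, their IPs and nodes
#   kubectl get svc            # shows the service's exposed port
# Build the URL from a node IP and that port (192.168.100.176 is from the
# lab; 30080 is a hypothetical NodePort, not the real lab value):
NODE_IP="192.168.100.176"
NODE_PORT="30080"
echo "http://${NODE_IP}:${NODE_PORT}"
```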

Connect To Your Application Web Page

Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver home page. It's just a simple dump of the application configuration info.

1. From your browser, connect to http://192.168.100.176:[port-number]. Note that your port number may be different from the lab manual port number; the IP will be the same.

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon Controller CLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.

2. Click on Open.

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancherserver to see the running container. Find the Container ID for the Rancher Server container. That is the one we want to remove.

2. Execute docker kill ContainerID. This will remove the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in Linux history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
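The three steps above can be sketched as one short shell sequence. The docker and history commands only work on the lab CLI VM, so they appear as comments here; the ID-extraction pipeline is exercised against a made-up sample line (the ID and image name are placeholders, not real lab values).

```shell
# Sample line in `docker ps` output format (CONTAINER ID is the first column).
sample='f3a2b1c9d0e8   rancherserver   "/usr/bin/entry"   Up 2 hours'
cid=$(printf '%s\n' "$sample" | grep rancherserver | awk '{print $1}')
echo "$cid"
# On the lab CLI VM you would then run:
#   docker kill "$cid"   # stop and discard the old Rancher Server container
#   !885                 # replay the stored `docker run` from shell history
```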

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.

2. Execute rm -rf /var/lib/rancher/state.

3. Execute docker rm -vf rancher-agent.

4. Execute docker rm -vf rancher-agent-state.
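Collected as a script, the cleanup looks like the sketch below. The rm -rf step is demonstrated against a scratch directory, since the real path and the docker rm commands exist only on the lab's Rancher Host.

```shell
# Scratch stand-in for /var/lib/rancher/state so the rm -rf can run anywhere.
STATE_DIR="$(mktemp -d)/state"
mkdir -p "$STATE_DIR"
rm -rf "$STATE_DIR"
[ -e "$STATE_DIR" ] && echo "still present" || echo "removed"
# On the Rancher Host itself (after `ssh root@192.168.100.201`):
#   rm -rf /var/lib/rancher/state
#   docker rm -vf rancher-agent
#   docker rm -vf rancher-agent-state
```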

Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20. You can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker. This will be where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.

Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and doing a Ctrl-C, or click the Copy to Clipboard icon at the right of the box.

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.

1. Execute: either right-click the mouse or Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps.

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.

Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is in the form IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
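For reference, the UI steps above correspond roughly to a single docker run with a port mapping. This is a sketch only - Rancher's actual invocation may differ, and the container name here is arbitrary.

```shell
# Roughly equivalent docker run (shown as a comment; needs the lab host):
#   docker run -d --name web -p 2000:80 192.168.120.20:5000/nginx
# The -p flag maps host port 2000 to the container's listening port 80:
HOST_PORT=2000
CONTAINER_PORT=80
printf '%s\n' "-p ${HOST_PORT}:${CONTAINER_PORT}"
```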

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx web page.

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 95: Lab Overview - HOL-1730-USE-2

Connect To Your Application Web Page

Now lets see what our application does We will choose one of the node IP addresseswith the port number shown earlier to see our nginx webserver homepage Its just asimple dump of the application configuration info

1 From your browser Connect to http192168100176portnumber Notethat your port number may be different than the lab manual port number IP will be thesame

HOL-1730-USE-2

Page 95HOL-1730-USE-2

Container Orchestration With DockerMachine Using Rancher on PhotonPlatformRancher is another Opensource Container management platform You will use theRancher UI to provision Docker-Machine nodes on Photon platform and deploy a Micro-Service application onto the newly created Docker hosts Rancher provides that higherlevel container orchestration and takes advantage of the resource and tenant isolationprovided by the underlying Photon Platform

Login To Photon ControllerCLI VM

1 Open Putty from the desktop and Click on PhotonControllerCLI link2 Click on Open

HOL-1730-USE-2

Page 96HOL-1730-USE-2

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environmentBefore that you need to delete the existing container

1 Execute docker ps | grep rancherserver to see the running container Find theContainer ID for the RancherServer container That is the one we want toremove

2 Execute docker kill ContainerID This will remove the existing RancherServer container

3 Execute 885 This will execute command number 885 stored in Linux historyIt will create a new Docker container

Note that your new container is tagged with 192168120205000 This is the localDocker Registry that is used to serve our labs images

HOL-1730-USE-2

Page 97HOL-1730-USE-2

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs have a fewfiles removed prior to deploying the Rancher Agent

1 Execute ssh root192168100201 The password is vmware2 Execute rm -rf varlibrancherstate3 Execute docker rm -vf rancher-agent4 Execute docker rm -vf rancher-agent-state

HOL-1730-USE-2

Page 98HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1 From your browser

Connect to https192168120208080 and then click Add Host

2 If you get this page just click Save

HOL-1730-USE-2

Page 99HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 96: Lab Overview - HOL-1730-USE-2

Container Orchestration With Docker Machine Using Rancher on Photon Platform

Rancher is another open-source container management platform. You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a micro-service application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.

Login To Photon ControllerCLI VM

1. Open Putty from the desktop and click on the PhotonControllerCLI link.
2. Click on Open.


Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environment. Before that, you need to delete the existing container.

1. Execute docker ps | grep rancher/server to see the running container. Find the Container ID for the Rancher Server container; that is the one we want to remove.

2. Execute docker kill <ContainerID>. This will stop the existing Rancher Server container.

3. Execute !885. This will execute command number 885 stored in the Linux shell history. It will create a new Docker container.

Note that your new container is tagged with 192.168.120.20:5000. This is the local Docker Registry that is used to serve our lab's images.
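Taken together, the steps above amount to the following shell session (a sketch only: the history entry number 885 and the registry address are specific to this lab environment, and <ContainerID> is a placeholder for the ID you found):

```shell
# List running containers and find the Rancher Server's Container ID
docker ps | grep rancher/server

# Stop the existing Rancher Server container (substitute the real ID)
docker kill <ContainerID>

# Re-run history entry 885, which in this lab starts a new Rancher
# Server container from the local registry at 192.168.120.20:5000
!885
```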


Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201. The password is vmware.
2. Execute rm -rf /var/lib/rancher/state.
3. Execute docker rm -vf rancher-agent.
4. Execute docker rm -vf rancher-agent-state.
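As one sequence on the host itself (the IP address and password are lab-specific; the state path is the standard Rancher agent state location):

```shell
# SSH to the Rancher Host VM (lab credentials: root / vmware)
ssh root@192.168.100.201

# On the host: clear leftover agent registration state
rm -rf /var/lib/rancher/state

# Force-remove any old agent containers together with their volumes
docker rm -vf rancher-agent
docker rm -vf rancher-agent-state
```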


Connect To Rancher UI

Now we can add a Rancher host. The Rancher server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with the Rancher Server.

1. Note that the Custom icon is selected.
2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy and paste the command from the Rancher UI, and not use the command shown in the image, because the registration values are specific to your host.

1. Paste with either a right-click of the mouse or Ctrl-V, then press Return.

View the Agent Container

To view your running container:

1. Execute docker ps.


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and then Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name takes the form IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
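For reference, the container you just defined in the UI is roughly what the following docker run would create on the host (a hedged equivalent only: Rancher adds its own labels and networking, and the container name here is made up):

```shell
# -p 2000:80 maps host port 2000 to the container's port 80, where
# Nginx listens by default; no pull is needed in the lab because the
# image is already cached locally
docker run -d --name hol-nginx -p 2000:80 192.168.120.20:5000/nginx
```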


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This address is on a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your Internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
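If you prefer the command line, the same check can be done from any lab VM that can reach the host (assuming the lab address and port above):

```shell
# Fetch the default page through the mapped host port; the Nginx
# welcome page in the response body confirms the port mapping works
curl -i http://192.168.100.201:2000/
```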


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications from catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 97: Lab Overview - HOL-1730-USE-2

Deploy Rancher Server

You will first deploy a new version of the Rancher Server container into our environmentBefore that you need to delete the existing container

1 Execute docker ps | grep rancherserver to see the running container Find theContainer ID for the RancherServer container That is the one we want toremove

2 Execute docker kill ContainerID This will remove the existing RancherServer container

3 Execute 885 This will execute command number 885 stored in Linux historyIt will create a new Docker container

Note that your new container is tagged with 192168120205000 This is the localDocker Registry that is used to serve our labs images

HOL-1730-USE-2

Page 97HOL-1730-USE-2

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs have a fewfiles removed prior to deploying the Rancher Agent

1 Execute ssh root192168100201 The password is vmware2 Execute rm -rf varlibrancherstate3 Execute docker rm -vf rancher-agent4 Execute docker rm -vf rancher-agent-state

HOL-1730-USE-2

Page 98HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1 From your browser

Connect to https192168120208080 and then click Add Host

2 If you get this page just click Save

HOL-1730-USE-2

Page 99HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
Page 98: Lab Overview - HOL-1730-USE-2

Clean Up Rancher Host

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

1. Execute ssh root@192.168.100.201 (the password is vmware)
2. Execute rm -rf /var/lib/rancher/state
3. Execute docker rm -vf rancher-agent
4. Execute docker rm -vf rancher-agent-state
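The steps above can be sketched as a short script run on the Rancher Host VM after you ssh in. This is a sketch, not part of the lab itself; STATE_DIR is a variable introduced here for illustration, defaulting to the lab's path.

```shell
# Cleanup as run on the Rancher Host VM (after ssh root@192.168.100.201).
# STATE_DIR is illustrative; on the lab host it is /var/lib/rancher/state.
STATE_DIR="${STATE_DIR:-/var/lib/rancher/state}"
rm -rf "$STATE_DIR"

# Force-remove (-f) any stale agent containers and their volumes (-v),
# ignoring "no such container" errors on an already-clean host:
for c in rancher-agent rancher-agent-state; do
  docker rm -vf "$c" 2>/dev/null || true
done
```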


Connect To Rancher UI

Now we can add a Rancher host. Rancher Server is running in a container on 192.168.120.20, and you can connect from your browser at https://192.168.120.20:8080. Rancher hosts are VMs running Docker, and this is where application containers will be deployed - much like the Kubernetes worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.

1. From your browser, connect to https://192.168.120.20:8080 and then click Add Host.

2. If you get this page, just click Save.


Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins - a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the PuTTY session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: you must copy/paste the command from the Rancher UI and not use the command in the image - the registration numbers are specific to your host.

1. Execute: either right-click the mouse or press Ctrl-V, and hit Return.

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.
2. Click on Infrastructure and Hosts.
3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name takes the form IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80; we will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
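For orientation, the UI settings above correspond to a plain docker run invocation. The sketch below only composes and echoes that command as a dry run - the variable names are illustrative, and in this lab the container is actually created through the Rancher UI.

```shell
# Compose the equivalent docker run command from the lab's values
# (echoed as a dry run; the Rancher UI performs this step in the lab).
REGISTRY="192.168.120.20:5000"   # local registry (the lab has no external internet)
IMAGE="$REGISTRY/nginx"
HOST_PORT=2000                   # host port chosen in step 4
CONTAINER_PORT=80                # Nginx's default listen port
echo "docker run -d -p ${HOST_PORT}:${CONTAINER_PORT} ${IMAGE}"
```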


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them, because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 99: Lab Overview - HOL-1730-USE-2

Connect To Rancher UI

Now we can add a Rancher host Rancher server is running in a container on19216812020 You can connect from your browser at https192168120208080Rancher hosts are VMs running Docker This will be where application containers are

deployed Much like Kubernetes Worker nodes you saw in the previous section We willfirst add a Rancher host The host is a VM that we previously created for you

1 From your browser

Connect to https192168120208080 and then click Add Host

2 If you get this page just click Save

HOL-1730-USE-2

Page 99HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 100: Lab Overview - HOL-1730-USE-2

HOL-1730-USE-2

Page 100HOL-1730-USE-2

Add Rancher Host

Rancher has several options for adding hosts There are a couple of direct drivers forcloud platforms as well as machine drivers supported through Docker Machine pluginsThere is a Docker Machine Plugin for Photon Controller available In this lab we areusing the Custom option to show you how to manually install the Rancher Agent on yourHost VM and see it register with Rancher Server

1 Note that the Custom icon is selected2 Cut the pre-formed Docker run command by dragging the mouse over the

command and doing a Ctrl-C or click the Copy to Clipboard icon at the right ofthe box

HOL-1730-USE-2

Page 101HOL-1730-USE-2

Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session You should still be connected to your Rancher Host VMYou will now paste in the Docker Run command you captured from the Rancher UI

Either use Ctrl-v or Right Click the mouse to paste the clipboard onto the command lineNote You must cutpaste the command from the Rancher UI and not use the command

in the image The registration numbers are specific to your host

1 Execute Either Right Click of the mouse or Ctrl-v and hit Return

View the Agent Container

To view your running container

1 Execute docker ps

HOL-1730-USE-2

Page 102HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host return to the Rancher UI in your browser

1 Click the Close button2 Click on Infrastructure and Hosts3 This is your host

HOL-1730-USE-2

Page 103HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers



Add Rancher Host

Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins; a Docker Machine plugin for Photon Controller is available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

1. Note that the Custom icon is selected.

2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the Copy to Clipboard icon at the right of the box.


Paste In The Docker Run Command To Start Rancher Agent

Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker run command you captured from the Rancher UI.

Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must copy and paste the command from the Rancher UI and not use the command in the image. The registration values are specific to your host.

1. Execute the command: either right-click the mouse or press Ctrl-V, then hit Return.
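The pasted command will look broadly like the sketch below. This is illustrative only - the agent version, server address, and registration token are placeholders, and you must use the exact command generated by your own Rancher UI.

```shell
# Illustrative form of a Rancher 1.x agent registration command.
# <agent-version>, <rancher-server-ip>, and <registration-token> are
# placeholders - the real values come from your Rancher UI, and the
# token is unique to your host.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<agent-version> \
  http://<rancher-server-ip>:8080/v1/scripts/<registration-token>
```

Mounting the Docker socket is what lets the agent container manage other containers on the host on Rancher's behalf.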

View the Agent Container

To view your running container:

1. Execute docker ps


Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a Name for your container.

2. Specify the Docker Image that you will run. This image is in a local Registry, so the name takes the form IP:port/image-name. Enter 192.168.120.20:5000/nginx

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.


4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Port Map sign to see these fields.

5. Click on the Create Button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
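Under the covers, the mapping you just configured is roughly what Docker's -p flag does when running a container by hand. A sketch, assuming the local registry image name used earlier in this lab:

```shell
# Roughly equivalent to what Rancher sets up for you: run the image from
# the local registry, publishing container port 80 (Nginx's default
# listen port) on host port 2000. The container name is arbitrary.
docker run -d --name nginx-web -p 2000:80 192.168.120.20:5000/nginx
```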


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.

1. From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
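If you prefer the command line, the same check can be done with curl from the CLI VM. A sketch; the IP and port follow the mapping used in this lab:

```shell
# Fetch the page served through the mapped host port. If the container is
# up, this should return the HTML of the default Nginx welcome page.
curl -s http://192.168.100.201:2000
```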


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606




                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 103: Lab Overview - HOL-1730-USE-2

Verify New Host Has Been Added

To view your new host, return to the Rancher UI in your browser.

1. Click the Close button.

2. Click on Infrastructure and Hosts.

3. This is your host.


Deploy Nginx Webserver

To deploy our application, we are going to create an Nginx container service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.

1. Click on Containers.

2. Click on Add Container.

Configure Container Info

We need to define the container we want to deploy.

1. Enter a name for your container.

2. Specify the Docker image that you will run. This image is in a local registry, so the name is of the form IP:port/image-name. Enter 192.168.120.20:5000/nginx.

3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image.

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default listens on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
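Under the covers, the Rancher UI form is assembling an ordinary Docker run for the agent to execute. A minimal sketch of the equivalent command, using the registry address and port mapping from the steps above (the container name nginx-web is a hypothetical choice, not a lab value):

```shell
# Values taken from the lab steps above; adjust for your environment.
REGISTRY="192.168.120.20:5000"   # local registry, addressed as IP:port
IMAGE="$REGISTRY/nginx"          # image name is IP:port/image-name
HOST_PORT=2000                   # host port mapped to the container's port 80

# Print the equivalent command rather than running it, for review:
echo "docker run -d --name nginx-web -p ${HOST_PORT}:80 ${IMAGE}"
```

Running the printed command directly on the Rancher host VM would start the same single-container service outside of Rancher; what Rancher adds on top is scheduling, monitoring, and its managed container network.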


Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address. This is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
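If the page does not render, the mapping can also be checked from the console VM's command line. A small sketch, assuming the Rancher host IP and port from the step above (the curl check is shown as a comment because the lab network is only reachable from inside the lab):

```shell
# Build the webserver URL from the Rancher host VM IP and the mapped port.
RANCHER_HOST="192.168.100.201"   # Rancher host VM IP from the lab
HOST_PORT=2000                   # host port mapped to Nginx's port 80
URL="http://${RANCHER_HOST}:${HOST_PORT}/"
echo "$URL"

# From the console VM, this should return the default Nginx page:
#   curl -s "$URL" | grep -i "welcome to nginx"
```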


Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications through catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.


Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606


  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 104: Lab Overview - HOL-1730-USE-2

HOL-1730-USE-2

Page 104HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Indexhtml
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                              • Search The RequestID For RESERVE_RESOURECE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 105: Lab Overview - HOL-1730-USE-2

Deploy Nginx Webserver

To deploy our application we are going to create an Nginx Container Service Servicesin Rancher can be a group of containers but in this case we will be deploying a singlecontainer application

1 Click on Containers

2 Click on Add Container

Configure Container Info

We need to define the container we want to deploy

1 Enter a Name for your container

2 Specify the Docker Image that you will run This image is in a local Registry sothe name is the IPportimage-name Enter 192168120205000nginx

3 This image is already cached locally on this VM so uncheck the box to Pull thelatest image

HOL-1730-USE-2

Page 105HOL-1730-USE-2

4 We now want to map the container port to the host port that will be used toaccess the Webserver Nginx by default is listening on Port 80 We will map it to Hostport 2000 Note that you might have to click on the + Portmap sign to see these fields

5 Click on Create Button

It may take a minute or so for the container to come up Its possible the screen will notupdate so try holding Shift-Key while clicking Reload on the browser page

HOL-1730-USE-2

Page 106HOL-1730-USE-2

Container Information

1 Once your container is running Check out the performance charts

2 Note that the you can see the container status Its internal IP address - this is aRancher managed network that containers communication on

Open Your Webserver

From you Browser Enter the IP address of the Rancher Host VM and the Port youmapped

1 From your Internet Browser enter 1921681002012000 to view the defaultNginx webpage

HOL-1730-USE-2

Page 107HOL-1730-USE-2

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogsthat are provided directly by the application vendors Browse through some of theavailable applications You will not be able to deploy them because the lab does nothave an external internet connection

HOL-1730-USE-2

Page 108HOL-1730-USE-2

ConclusionThis module provided an introduction to the operational model for developers of cloudnative applications Deploying containers at scale will not be done through individualDocker run commands but through the use of higher level frameworks that provideorchestration of the entire application

You have seen two examples of application frameworks that can be used to deploy andmanage containers at scale You have also seen that Photon Platform provides ascalable underpinning to these frameworks

HOL-1730-USE-2

Page 109HOL-1730-USE-2

ConclusionThank you for participating in the VMware Hands-on Labs Be sure to visithttpholvmwarecom to continue your lab experience online

Lab SKU HOL-1730-USE-2

Version 20161024-114606

HOL-1730-USE-2

Page 110HOL-1730-USE-2

  • Table of Contents
  • Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform
    • Lab Guidance
      • Location of the Main Console
      • Activation Prompt or Watermark
      • Alternate Methods of Keyboard Data Entry
      • Click and Drag Lab Manual Content Into Console Active Window
      • Accessing the Online International Keyboard
      • Click once in active console window
      • Click on the key
      • Look at the lower right portion of the screen
          • Module 1 - What is Photon Platform (15 minutes)
            • Introduction
            • What is Photon Platform - How Is It Different From vSphere
              • Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap Not all are implemented in the Pre-GA Release)
                • Cloud Administration - Multi-Tenancy and Resource Management
                  • Connect To Photon Platform Management UI
                  • Photon Controller Management UI
                  • The Control Plane Resources
                  • Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen
                  • Control Plane Services
                  • Cloud Resources
                  • Tenants
                  • Our Kubernetes Tenant
                  • Kube-Tenant Detail
                  • Kube-Project Detail
                  • Kube Tenant Resource-Ticket
                  • Create Resource-Ticket
                    • Cloud Administration - Images and Flavors
                      • Images
                      • Kube-Image
                      • Flavors
                      • Kube-Flavor
                      • Ephemeral Disk Flavors
                      • Persistent Disk Flavors
                        • Conclusion
                          • Youve finished Module 1
                          • How to End Lab
                              • Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
                                • Introduction
                                • Multi-Tenancy and Resource Management in Photon Platform
                                  • Login To CLI VM
                                  • Verify Photon CLI Target
                                  • Execute This Step Only If You Had photon HTTP Errors In The Previous Step
                                  • Photon CLI Overview
                                  • Photon CLI Context Help
                                  • Create Tenant
                                  • Create Resource Ticket
                                  • Create Project
                                    • Set Up Cloud VM Operational Elements Through Definition of Base Images Flavors Networks and Persistent Disks
                                      • View Images
                                      • View Flavors
                                      • Create New Flavors
                                      • Create Networks
                                      • Create VM
                                      • Create a Second VM
                                      • Start VM
                                      • Show VM details
                                      • Stop VM
                                      • Persistent Disks
                                      • Attach Persistent Disk To VM
                                      • Show VM Details
                                        • Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
                                          • Deploy Nginx Web Server
                                          • Connect to lab-vm1
                                          • Setup filesystem
                                          • Create The Nginx Container With Docker Volume
                                          • Verify Webserver Is Running
                                          • Modify Nginx Home Page
                                          • Edit The Index.html
                                          • Detach The Persistent Disk
                                          • Attach The Persistent Disk To New VM
                                          • Start and Connect to lab-vm2
                                          • Setup Filesystem
                                          • Create The New Nginx Container
                                          • Verify That Our New Webserver Reflects Our Changes
                                          • Clean Up VMs
                                            • Monitor and Troubleshoot Photon Platform
                                              • Enabling Statistics and Log Collection
                                              • Monitoring Photon Platform With Graphite Server
                                              • Expand To View Available Metrics
                                              • No Performance Data in Graphite
                                              • View Graphite Data Through Grafana
                                              • Graphite Data Source For Grafana
                                              • Create Grafana Dashboard
                                              • Add A Panel
                                              • Open Metrics Panel
                                              • Add Metrics To Panel
                                              • Troubleshooting Photon Platform With LogInsight
                                              • Connect To Loginsight
                                              • Query For The Create Task
                                              • Browse The Logs For Interesting Task Error Then Find RequestID
                                               • Search The RequestID For RESERVE_RESOURCE
                                                • Conclusion
                                                  • Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
                                                    • Introduction
                                                    • Container Orchestration With Kubernetes on Photon Platform
                                                      • Kubernetes Deployment On Photon Platform
                                                      • Photon Cluster Create Command
                                                      • Kube-Up On Photon Platform
                                                      • Our Lab Kubernetes Cluster Details
                                                      • Basic Introduction To Kubernetes Application Components
                                                      • Deploying An Application On Kubernetes Cluster
                                                      • Kubectl To Deploy The App
                                                      • Kubernetes UI Shows Our Running Application
                                                      • Application Details
                                                      • Your Running Pods
                                                      • Connect To Your Application Web Page
                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon Controller CLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 106: Lab Overview - HOL-1730-USE-2

4. We now want to map the container port to the host port that will be used to access the webserver. Nginx by default is listening on port 80. We will map it to host port 2000. Note that you might have to click on the + Portmap sign to see these fields.

5. Click on the Create button.

It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
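For reference, the port map you just entered in the Rancher UI corresponds to Docker's `-p host:container` publish flag. A minimal sketch of the relationship (the `docker run` line is illustrative and only runnable inside the lab VMs, where Docker is installed; the container name `web` is an assumption, not a lab value):

```shell
# The Rancher UI port map corresponds to Docker's -p publish flag.
HOST_PORT=2000        # the port you will browse to on the Rancher host
CONTAINER_PORT=80     # the port Nginx listens on inside the container
# Inside the lab, Rancher effectively runs something like:
#   docker run -d --name web -p ${HOST_PORT}:${CONTAINER_PORT} nginx
echo "-p ${HOST_PORT}:${CONTAINER_PORT}"
```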

Container Information

1. Once your container is running, check out the performance charts.

2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.

Open Your Webserver

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1. From your internet browser, enter 192.168.100.201:2000 to view the default Nginx webpage.
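If you prefer the command line, the same check can be sketched with curl. This is a hedged example: the IP address and port below are this lab's values and will differ in any other environment, and the curl call itself only works from a machine inside the lab network.

```shell
# Build the URL for the published Nginx port on the Rancher host VM.
RANCHER_HOST_IP=192.168.100.201   # lab value; differs in other environments
HOST_PORT=2000                    # host port mapped to the container's port 80
URL="http://${RANCHER_HOST_IP}:${HOST_PORT}"
echo "${URL}"
# Inside the lab: curl -s "${URL}" | head -n 5   # shows the Nginx welcome page
```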

Rancher Catalogs

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection.

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606

                                                        • Container Orchestration With Docker Machine Using Rancher on Photon Platform
                                                          • Login To Photon ControllerCLI VM
                                                          • Deploy Rancher Server
                                                          • Clean Up Rancher Host
                                                          • Connect To Rancher UI
                                                          • Add Rancher Host
                                                          • Paste In The Docker Run Command To Start Rancher Agent
                                                          • View the Agent Container
                                                          • Verify New Host Has Been Added
                                                          • Deploy Nginx Webserver
                                                          • Configure Container Info
                                                          • Container Information
                                                          • Open Your Webserver
                                                          • Rancher Catalogs
                                                            • Conclusion
                                                            • Conclusion
Page 109: Lab Overview - HOL-1730-USE-2

Conclusion

This module provided an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher-level frameworks that provide orchestration of the entire application.

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale. You have also seen that Photon Platform provides a scalable underpinning to these frameworks.
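As a concrete illustration of the contrast above, the kind of nginx container you started by hand with docker run in Module 2 can instead be declared once to an orchestrator such as Kubernetes. The sketch below is illustrative only (the names and replica count are not from the lab, and it uses the current apps/v1 API rather than the pre-GA versions current when this lab was written):

```yaml
# nginx-demo.yaml -- declarative equivalent of repeating "docker run nginx".
# The orchestrator, not the operator, starts the replicas, restarts them on
# failure, and spreads them across the cluster's hosts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo          # illustrative name
spec:
  replicas: 3               # desired scale, maintained automatically
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Submitted with kubectl, the cluster converges on three running nginx pods with no per-container commands; scaling up is a one-line change to replicas rather than more docker run invocations.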


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114606
