

Cloud Container Instance

User Guide

Date 2020-04-14


Contents

1 Permissions Management
1.1 Creating a User and Granting CCI Permissions
1.2 CCI Custom Policies

2 Environment Configuration

3 Namespace

4 Workload
4.1 Pod
4.2 Deployment
4.3 Job
4.4 Cron Job
4.5 Viewing Resource Usage
4.6 Setting Container Startup Command
4.7 Container Lifecycle Hook
4.8 Health Check
4.9 Web-Terminal
4.10 Upgrading a Workload
4.11 Scaling a Workload

5 Workload Network Access
5.1 Network Access Overview
5.2 Private Network Access
5.3 Public Network Access
5.4 Accessing Public Networks from a Container

6 Storage Management
6.1 Overview
6.2 EVS Volumes
6.3 OBS Volumes
6.4 SFS Volumes
6.5 SFS Turbo Volumes

7 Configuration Management
7.1 ConfigMaps
7.2 Secrets


7.3 SSL Certificates

8 Log Management

9 Add-on Management

10 Auditing
10.1 CCI Operations Supported by CTS
10.2 Viewing Logs in CTS

11 Security Vulnerability Responses
11.1 Notice on Fixing Linux Kernel SACK Vulnerabilities


1 Permissions Management

1.1 Creating a User and Granting CCI Permissions

This chapter describes how to use IAM to implement fine-grained permissions control for your Cloud Container Instance (CCI) resources. With IAM, you can:

● Create IAM users for employees based on your enterprise's organizational structure. Each IAM user will have their own security credentials for accessing CCI resources.

● Grant only the permissions required for users to perform a specific task.

● Entrust a HUAWEI CLOUD account or cloud service to perform efficient O&M on your CCI resources.

If your HUAWEI CLOUD account does not require individual IAM users, skip this chapter.

This section describes the procedure for granting permissions (see Figure 1-1).

Prerequisites

Learn about the permissions (see Permissions Management) supported by CCI and choose policies or roles according to your requirements. For the system-defined policies of other services, see Permissions Policies.


Process Flow

Figure 1-1 Process of granting CCI permissions

1. Create a user group and assign permissions to it.
Create a user group on the IAM console, and attach the CCI ReadOnlyAccess policy to the group.

2. Create an IAM user.
Create a user on the IAM console and add the user to the group created in 1.

3. Log in and verify permissions.
Log in to the CCI console by using the user created in 2, and verify that the user only has read permissions for CCI.
– Choose Service List > Cloud Container Instance. In the navigation pane on the left, choose Workloads > Deployments. On the page displayed, click Create Deployment. If a message appears indicating that you have insufficient permissions to perform the operation, the CCI ReadOnlyAccess policy has already taken effect.

– Choose any other service in Service List. If a message appears indicating that you have insufficient permissions to access the service, the CCI ReadOnlyAccess policy has already taken effect.

1.2 CCI Custom Policies

Custom policies can be created as a supplement to the system-defined policies of CCI. For the actions that can be added to custom policies, see Permissions Policies and Supported Actions.

You can create custom policies in either of the following two ways:

● Visual editor: Select cloud services, actions, resources, and request conditions without the need to know policy syntax.


● JSON: Edit JSON policies from scratch or based on an existing policy.

For details, see Creating a Custom Policy. The following section contains examples of common CCI custom policies.

Example Custom Policies

● Example 1: Updating a namespace

{
    "Version": "1.1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cci:namespace:update"
            ]
        }
    ]
}

● Example 2: Denying namespace deletion
A policy with only "Deny" permissions must be used in conjunction with other policies to take effect. If the permissions assigned to a user contain both "Allow" and "Deny", the "Deny" permissions take precedence over the "Allow" permissions.
The following method can be used if you need to assign permissions of the CCI FullAccess policy to a user but you want to prevent the user from deleting namespaces (cci:namespace:delete). Create a custom policy for denying namespace deletion, and attach both policies to the group to which the user belongs. Then, the user can perform all operations on CCI except deleting namespaces. The following is an example of a deny policy:

{
    "Version": "1.1",
    "Statement": [
        {
            "Action": [
                "cci:namespace:delete"
            ],
            "Effect": "Deny"
        }
    ]
}

● Example 3: Defining permissions for multiple services in a policy
A custom policy can contain the actions of multiple services that are of the global or project-level type. The following is an example policy containing actions of multiple services:

{
    "Version": "1.1",
    "Statement": [
        {
            "Action": [
                "ecs:cloudServers:resize",
                "ecs:cloudServers:delete",
                "ims:images:list",
                "ims:serverImages:create"
            ],
            "Effect": "Allow"
        }
    ]
}


2 Environment Configuration

Logging In to the CCI Console

Log in to the CCI console and grant CCI the permission to access other cloud services.

Step 1 Log in to the management console.

Step 2 Click in the upper left corner and select a region.

CCI is now available only in regions CN North-Beijing1, CN North-Beijing4, and CN East-Shanghai1.

Step 3 In the All Services area, choose Computing > Cloud Container Instance.

The CCI console is displayed.

Step 4 If this is the first time you log in to the CCI console, click Agree to grant CCI the permission to access other cloud services.

When the permission is successfully granted, an agency named cci_admin_trust is created. You can view the agency on the IAM console.

----End

(Optional) Uploading Images

HUAWEI CLOUD provides the Software Repository for Container (SWR) service for you to upload Docker images to the image repository. You can easily import these images when creating workloads on CCI. For details about how to upload images, see Uploading an Image Through a Docker Client.


NOTICE

After Enterprise Management is enabled, if an IAM user needs to use private images under the account, log in to the CCI console using the account, choose Image Repository, and grant the required permission to the user on the SWR console.

You can use either of the following methods to grant permission to an IAM user:

● On the details page of an image, click the Permission Management tab, click Add Permission, and then grant the read, write, or manage permission to the user. For details, see Granting Permissions of a Specific Image.

● On the details page of an organization, click the Users tab, click Add Permission, and then grant the read, write, or manage permission to the user. For details, see Granting Permissions of an Organization.

(Optional) Creating a Load Balancer

A load balancer allows your workloads to be accessed from external networks. For details about how to create a load balancer, choose Help Center > Elastic Load Balance > Getting Started.

Step 1 Log in to the management console.

Step 2 Choose Service List > Network > Elastic Load Balance.

Step 3 On the Network console, choose Elastic Load Balance > Load Balancers, and click Buy Enhanced Load Balancer.

Specify the required parameters to create a load balancer.

Load balancers can be classified as public and private network load balancers according to their network type. Specify the Network Type parameter as Public network or Private network.

----End

(Optional) Preparing SSL Certificates

CCI allows workloads to be accessed through HTTPS. You can specify your own SSL certificate when creating a workload.

SSL certificates are divided into authoritative and self-signed certificates. Authoritative certificates are issued by certificate authorities (CAs). You can purchase authoritative certificates from third-party agents. Websites that use authoritative certificates are generally trusted. Self-signed certificates are issued by users themselves, usually using OpenSSL, and are generally untrusted. The browser displays a warning when you access a website that uses a self-signed certificate, but you can ignore the warning and continue the access.

For details about SSL certificates, see 7.3 SSL Certificates.


3 Namespace

Namespaces are a way to divide cluster resources among multiple users. Namespaces are suited for scenarios where many users are spread across multiple teams or projects.

Currently, CCI provides general-computing and GPU-accelerated resources. Select a resource type when creating a namespace so that workload containers will run on resources of that type.

● General-computing: Container instances (pods) with CPU resources can be created, which are ideal for general computing scenarios.

● GPU-accelerated: Container instances (pods) with GPU resources can be created, which are ideal for scenarios such as deep learning, scientific computing, and video processing.

Currently, GPU-accelerated resources are available only in regions CN North-Beijing1 and CN North-Beijing4.

Relationship Between Namespaces and Networks

A namespace corresponds to a subnet in a VPC, as shown in Figure 3-1. When a namespace is created, it will be associated with an existing VPC or a newly created VPC, and a subnet will be created under the VPC. Containers and other resources created under this namespace will be in the corresponding VPC and subnet.

If you want to run resources of multiple services in the same VPC, you need to consider the network planning, such as subnet CIDR block division and IP address planning.


Figure 3-1 Relationship between namespaces and VPC subnets

Application Scenarios

Namespaces can implement partial environment isolation. If you have a large number of projects and personnel, you can create different namespaces based on project attributes, such as production, test, and development.

Creating a Namespace

Step 1 Log in to the CCI console. In the navigation pane, choose Namespaces.

Step 2 On the page displayed on the right, click Create for the target namespace type.

Step 3 Enter a name for the namespace.

The namespace name must be globally unique in CCI.

Step 4 Select an enterprise project. In CCI, each namespace belongs to one enterprise project, but an enterprise project can have multiple namespaces.

● Skip this step if the Enterprise Management service is not enabled. To enable the service, see Enabling the Enterprise Project Function or Enterprise Multi-Account Function. For the precautions for IAM users, see (Optional) Uploading Images.

● After you specify an enterprise project, both the namespace and the network and storage resources automatically created for the namespace belong to the enterprise project. These resources should be migrated together with the namespace. For example, when migrating a namespace from project 1 to project 2, also migrate the network and storage resources associated with the namespace to project 2. Otherwise, the workloads in this namespace may not run properly.

Step 5 Set on-demand scaling.

If on-demand scaling is enabled, CCI on-demand instances will be automatically created when dedicated resources are exhausted.

Currently, on-demand scaling is available only in region CN North-Beijing4.

Step 6 Configure a VPC.


You can use an existing VPC or create a VPC. If you create a VPC, it is recommended that the VPC CIDR block be set to 10.0.0.0/8–24, 172.16.0.0/12–24, or 192.168.0.0/16–24.

NOTICE

The VPC CIDR block and subnet CIDR block cannot be set to 10.247.0.0/16, because this CIDR block is reserved by CCI for containerized workloads. If you use this CIDR block, IP address conflicts may occur, which may result in workload creation failures or service unavailability. If you do not need to access pods through workloads, you can allocate this CIDR block to a VPC.

After the namespace is created, you can view VPC and subnet information by choosing Network Management > Networks.

Step 7 Configure a subnet CIDR block.

Ensure that there are sufficient available IP addresses. If IP addresses are insufficient, workloads will fail to be created.

Figure 3-2 Configuring a subnet

Step 8 Configure an InfiniBand (IB) network.

InfiniBand is a computer network communication standard used for high-performance computing. It provides high throughput and low latency. IB networks can effectively improve the access speed between containers.

IB and VPC networks are independent of each other. The IB network is a high-speed access channel between containers, and the VPC network is used for other purposes, including external access.

When creating an IB network, you can enable IP over IB (IPoIB) and set a CIDR block for the IB network.


● Only GPU-accelerated namespaces support IB network settings.

● The CIDR block of the IB network cannot conflict with the VPC CIDR block.

Figure 3-3 Configuring an IB network

Step 9 Configure advanced settings.

Currently, advanced settings are available only in region CN North-Beijing4.

Each namespace provides an IP resource pool. You can customize the pool size to reduce the time required to apply for IP addresses and improve workload creation efficiency.

For example, assume that 200 pods are running every day. During peak traffic hours, the IP resource pool instantly scales out to provide 500 IP addresses. After a specified interval (for example, 23 hours), the IP addresses beyond the pool size (that is, 500 – 200 = 300 IP addresses) will be reclaimed.

● Warmed-up IP Pool for Namespace: specifies the size of the IP pool warmed up for a namespace. The IP pool can accelerate workload creation.

● Warmed-up IP Reclaim Interval (h): specifies the interval at which idle IP addresses in the IP resource pool are reclaimed.

● Warmed-up IP Pool for Node: specifies the size of the IP pool warmed up for a node that runs the dedicated container instance. The IP pool can accelerate workload creation.

This parameter is displayed only to VIP users.

● Container Network: When the container starts, network connection may be unavailable. Enable this option if the container needs to connect to the network immediately after it starts.

Step 10 Click Create.


After the creation is complete, you can view the VPC and subnet information on the namespace details page.

----End

Deleting a Namespace

NOTICE

Deleting a namespace will remove all data resources related to the namespace, including workloads, ConfigMaps, secrets, and SSL certificates.

Step 1 Log in to the CCI console. In the navigation pane, choose Namespaces. On the page displayed on the right, click the namespace to be deleted.

Step 2 In the upper right corner, click Delete. In the dialog box that is displayed, enter DELETE and click Yes.

To delete a VPC or subnet, go to the VPC console.

----End

Creating a Namespace Using kubectl

For details, see Namespace and Network.
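For orientation, the sketch below shows roughly what a namespace manifest created with kubectl could look like. It uses standard Kubernetes syntax; the name is hypothetical, and the flavor annotation used to select the resource type is an assumption for illustration only; confirm the exact annotations and the associated network object in the Namespace and Network reference.

apiVersion: v1
kind: Namespace
metadata:
  name: namespace-test                                   # hypothetical name
  annotations:
    namespace.kubernetes.io/flavor: general-computing    # assumed annotation; verify against the linked reference
spec:
  finalizers:
  - kubernetes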


4 Workload

4.1 Pod

What Is a Pod?

A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage resources, a unique network IP address, and options that govern how the container(s) should run.

Pods can be used in either of the following ways:

● One container runs in one pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.

● Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in Figure 4-1. For example, the main container is a web server that provides file services from a fixed directory, and the sidecar container periodically downloads files to the directory.


Figure 4-1 Pod

In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods.

Viewing Pods

Sometimes you may create pods by calling the API or running the kubectl command. As these pods are not created under a workload or job, they cannot be conveniently managed on the console. To solve this problem, CCI provides pod management, which allows you to filter pods by source.

Figure 4-2 Selecting a pod source

You can view details about all pods, including basic information, container composition, monitoring data, and events. You can use the web-terminal to access pods. In addition, you can delete pods and view pod logs.


Figure 4-3 Pod details

Creating a Pod Using kubectl

For details, see Pod.
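As a rough illustration only (not the exact manifest from the linked reference), a minimal pod definition for kubectl might look like the following. The names and image are hypothetical, and the CPU and memory values are chosen to satisfy the pod specification rules described in 4.2 Deployment; CCI may require additional fields such as image pull secrets for private SWR images.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod               # hypothetical name
  namespace: namespace-test     # hypothetical CCI namespace
spec:
  containers:
  - name: container-0
    image: nginx:latest         # hypothetical image
    resources:                  # requests and limits; keep the CPU:memory ratio within 1:2 to 1:8
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 500m
        memory: 1Gi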

4.2 Deployment

A Deployment is a service-oriented encapsulation of pods. A Deployment may contain one or more pods. Each pod has the same role; therefore, the system automatically distributes requests to the pods in the Deployment. All pods in a Deployment share the same volume.

As described in 4.1 Pod, a pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and disappears when a cluster node fails. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment.

A Deployment can contain one or more pod replicas. Each pod replica has the same role. Therefore, the system automatically distributes requests to multiple pod replicas of a Deployment.

A Deployment integrates a lot of functions, including online deployment, rolling upgrade, replica creation, and restoration of online jobs. To some extent, Deployments can be used to realize unattended rollout, which greatly reduces communication difficulties and operation risks in the rollout process.


Figure 4-4 Deployment

Creating a Deployment

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click Create Deployment.

Step 2 Configure basic information.

● Workload Name

Enter 1 to 63 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Consecutive periods are not allowed, and a period cannot follow or be followed by a hyphen.

● Namespace

Select a namespace. If no namespaces are available, create one by following the procedure provided in 3 Namespace.

● Description

Enter a description, which cannot exceed 250 characters.

● Pods

Specify the number of pods. A workload can have one or more pods. Each pod consists of one or more containers with the same specifications. Configuring multiple pods for a workload ensures high reliability. If one pod is faulty, the workload can still run properly.

● Pod Specifications

You can select GPU-accelerated and allocate GPUs to the workload only if the namespace is of the GPU-accelerated type.

Currently, three types of pods are provided, including general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces).

GPU-accelerated pods support the following GPUs: NVIDIA Tesla V100 32GB, NVIDIA Tesla V100 16GB, and NVIDIA Tesla P4 8GB.

– Specifications of NVIDIA Tesla V100 32GB are as follows:

▪ NVIDIA Tesla V100 32GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 32GB x 2, 8 CPU cores, 64 GB memory


▪ NVIDIA Tesla V100 32GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 32GB x 8, 32 CPU cores, 256 GB memory

– Specifications of NVIDIA Tesla V100 16GB are as follows:

▪ NVIDIA Tesla V100 16GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 16GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla V100 16GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 16GB x 8, 32 CPU cores, 256 GB memory

– Specifications of NVIDIA Tesla P4 8GB are as follows:

▪ NVIDIA Tesla P4 8GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla P4 8GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla P4 8GB x 3, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla P4 8GB x 4, 32 CPU cores, 256 GB memory

CCI supports NVIDIA GPU drivers 396.26 and 410.104. The CUDA toolkit used in your application must meet the requirements listed in Table 4-1. For details about the compatibility between CUDA toolkits and drivers, see CUDA Compatibility at https://www.nvidia.com.

– The NVIDIA System Management Interface (nvidia-smi) is a command line utility. For more details, see NVIDIA System Management Interface.

– nvidia-smi is not provided by CCI. You can package nvidia-smi into an image and use this utility to monitor the GPU usage. Before using nvidia-smi, set the LD_LIBRARY_PATH field. For details, see Why an Error Is Reported When a GPU-Related Operation Is Performed on the Container Entered by Using exec?.

– Region CN North-Beijing4 supports only NVIDIA Tesla V100 32 GB GPUs.

Table 4-1 Compatibility between NVIDIA GPU drivers and CUDA toolkits

NVIDIA GPU Driver Version    CUDA Toolkit Version
396.26                       CUDA 9.2 (9.2.88) or earlier
410.104                      CUDA 10.0 (10.0.130) or earlier

If the pod type is not GPU-accelerated, the container specifications you select must meet the following requirements:

– The total number of CPU cores in a pod can be a value in the range of 0.25–32, 48, or 64. The total number of CPU cores in a container is an integer multiple of 0.25.


– The total memory size (in GB) of a pod is an integer from 1 to 512.
– The ratio of CPU cores to memory size in a pod ranges from 1:2 to 1:8.
– A pod can have a maximum of five containers. The minimum configuration of a container is 0.25 cores and 0.2 GB. The maximum configuration of a container is the same as that of a pod.

● Configure Container

A pod generally contains only one container, but it can also contain multiple containers created from different images. If your application needs to run on multiple containers in a pod, click Add Container and then select an image.

NOTICE

If different containers in a pod listen to the same port, a port conflict will occur and the pod may fail to start. For example, if an Nginx container (which listens to port 80) has been added to a pod, a port conflict will occur when another HTTP container in the pod tries to listen to port 80.

– My Images: images you have uploaded to SWR

If you are an IAM user, you need to set permissions by following the procedure provided in (Optional) Uploading Images before using the private images of the account.

– Official Docker Images: public images on Docker Hub
– Shared Images: images shared by others through SWR

After the image is selected, select the image version and set the container name and CPU and memory specifications (the minimum configuration of a single container is 0.25 cores and 0.2 GB). You can also choose to enable the collection of standard output files. If file collection is enabled, Application Operations Management (AOM) bills you for the log storage space that you use.

AOM provides each account with 500 MB of log storage space for free each month. AOM bills extra space on a pay-per-use basis. For details, see Pricing Details.

In a GPU-accelerated pod (available only in GPU-accelerated namespaces), only one container can use GPUs. If there are multiple containers in your pod, you can specify the container to use GPUs by enabling the GPU option. Similarly, only one container in a GPU-accelerated pod can use IB networks. If there are multiple containers in your pod, you can specify the container to use IB networks by enabling the IB network option. If no IB network is available in your GPU-accelerated namespace, click Create IB Network to create one. For details about the IB network, see Step 8.

You can also configure the following advanced settings for a container:

– Storage: You can mount persistent volumes into containers to persist data files. Currently, Elastic Volume Service (EVS), Scalable File Service (SFS), and SFS Turbo volumes are supported. Click the EVS Volumes, SFS Volumes, or SFS Turbo Volumes tab, and set the volume name, capacity, container path, and disk type. After the workload is created, you can manage storage volumes. For details, see 6.2 EVS Volumes, 6.4 SFS Volumes, or 6.5 SFS Turbo Volumes.

– Log Collection: Application logs will be collected to the path you set. You need to configure policies to prevent logs from being over-sized. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of log file size. After the workload is created, you can view logs on the AOM console. For details, see 8 Log Management.

– Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. The environment variables for which you have assigned values during container creation will take effect when the container is running. This saves you the trouble of rebuilding the container image.
To manually set variables, enter the variable name and value.
To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see 7.2 Secrets. A manifest-level sketch of these references is shown after this list.

– Health Check: Container health can be checked regularly during container running. For details about how to configure health checks, see 4.8 Health Check.

– Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see 4.7 Container Lifecycle Hook.

– Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to Docker's ENTRYPOINT startup instructions. For details, see 4.6 Setting Container Startup Command.

– Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see 7.1 ConfigMaps and 7.2 Secrets.
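For reference, the sketch below shows how the PodIP, PodName, and Secret references described above could map to the env section of a container spec if the workload were written as a kubectl manifest. It uses standard Kubernetes syntax; the variable and secret names are hypothetical.

env:
- name: MY_POD_IP               # references PodIP (pod IP address)
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_POD_NAME             # references PodName (pod name)
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: DB_PASSWORD             # references a key in a secret (see 7.2 Secrets)
  valueFrom:
    secretKeyRef:
      name: my-secret           # hypothetical secret name
      key: password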

Step 3 Click Next to configure access information.

Three options are available:

● Do not use: No entry is provided to allow access from other workloads. This mode is suited for scenarios where custom service discovery is used or where access entry is not required.

● Intranet access: A domain name or internal domain name/virtual IP address is configured for the current workload so that this workload can provide services for other workloads in an internal network. Two internal network access modes are available: Service and ELB. For details about the internal network access, see 5.2 Private Network Access.

● Internet access: An entry is provided to allow access from the Internet. HTTP, HTTPS, TCP, and UDP are supported. For details about the public network access, see 5.3 Public Network Access.

Step 4 Click Next and configure advanced settings.

● Upgrade Policy: Rolling upgrade and In-place upgrade are available.


– Rolling upgrade: Gradually replaces an old pod with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.
Maximum Number of Unavailable Pods: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods.

– In-place upgrade: Deletes an old pod and then creates a new one. Services will be interrupted during the upgrade.

● APM Settings: Application Performance Management (APM) helps you quickly locate workload faults and analyze performance bottlenecks. Currently, APM provides tracing and topology display for Java workloads. If you want to monitor the status of a Java workload, select Java probe and enter a monitoring group name.

The probe provides tracing, topology display, SQL analysis, and stack tracing for Java workloads. However, running the probe will consume a small amount of CPU and memory resources.

a. Enter a monitoring group name, for example, testapp. If one or more monitoring groups are available, you can select one from the drop-down list.

b. Select a probe version. The default version is latest. For details about probe versions, click Version Description.

Figure 4-5 Configuring APM settings

c. Select a probe upgrade policy. By default, Automatic upgrade upon restart is selected.
The probe upgrade policy determines how the probe image is obtained. Two options are available: Automatic upgrade upon restart and Manual upgrade.

▪ Automatic upgrade upon restart: The system downloads the probe image each time the pod is restarted.

▪ Manual upgrade: A local probe image is used if available. The system downloads the probe image only when a local image is unavailable.


Step 5 Click Next. After you confirm the configuration, click Submit. Then click Back to Deployment List.

In the workload list, if the workload status is Running, the workload is created successfully. You can click the workload name to view workload details and press F5 to view the real-time workload status.

If you want to access the workload, click the Access Settings tab to obtain the access address.

----End

Deleting a Pod

After the workload is created, you can manually delete pods. As pods are controlled by a controller, a new pod will be created immediately after you delete an old pod. Manual pod deletion is useful when an upgrade fails halfway or when service processes need to be restarted.

In the pod list, click Delete for the target pod, as shown in Figure 4-6.

Figure 4-6 Deleting a pod

A new pod is created immediately after you delete the old pod, as shown in Figure 4-7.

Figure 4-7 Result of deleting a pod

Creating a Deployment Using kubectl

For details, see Deployment.
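As a rough sketch only (standard Kubernetes syntax rather than the exact manifest from the linked reference), a Deployment with two pods and a rolling upgrade policy might look like this. All names and the image are hypothetical; maxUnavailable corresponds to Maximum Number of Unavailable Pods described in Step 4.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # hypothetical name
  namespace: namespace-test       # hypothetical CCI namespace
spec:
  replicas: 2                     # number of pods
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # Maximum Number of Unavailable Pods
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:latest       # hypothetical image
        resources:                # keep the CPU:memory ratio within 1:2 to 1:8
          requests:
            cpu: "1"
            memory: 2Gi
          limits:
            cpu: "1"
            memory: 2Gi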

Troubleshooting a Failure to Pull the Image

If the workload details page shows an event indicating that the image fails to be pulled, locate the fault by following the procedure provided in What Can I Do If an Event Indicating That the Image Failed to Be Pulled Occurs.

Troubleshooting a Failure to Restart the Container

If the workload details page shows an event indicating that the container fails to be restarted, locate the fault by following the procedure provided in What Can I Do If an Event Indicating That the Container Failed to Be Restarted Occurs.


4.3 Job

A job is responsible for batch processing of short-lived, one-off tasks, that is, tasks that are executed only once. It ensures that one or more pods are successfully completed.

A job is a resource object that Kubernetes uses to control batch tasks. Batch jobs are different from long-running services (such as Deployments). The former can be started and terminated at a specific time, while the latter run unceasingly unless they are terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations.

This run-to-completion feature of jobs is especially suitable for one-off tasks, such as continuous integration (CI). It works with the per-second billing of CCI to deliver true pay-per-use.

Creating a Job

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Jobs. On the page displayed on the right, click Create Job.

Step 2 Configure basic information.

● Job Name

Enter 1 to 63 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Consecutive periods are not allowed, and a period cannot follow or be followed by a hyphen.

● Namespace

Select a namespace. If no namespaces are available, create one by following the procedure provided in 3 Namespace.

● Description

Enter a description, which cannot exceed 250 characters.

● Pod Specifications

You can select GPU-accelerated and allocate GPUs to the workload only if the namespace is of the GPU-accelerated type.

Currently, three types of pods are provided, including general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces).

GPU-accelerated pods support the following GPUs: NVIDIA Tesla V100 32GB, NVIDIA Tesla V100 16GB, and NVIDIA Tesla P4 8GB.

– Specifications of NVIDIA Tesla V100 32GB are as follows:

▪ NVIDIA Tesla V100 32GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 32GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla V100 32GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 32GB x 8, 32 CPU cores, 256 GB memory


– Specifications of NVIDIA Tesla V100 16GB are as follows:

▪ NVIDIA Tesla V100 16GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 16GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla V100 16GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 16GB x 8, 32 CPU cores, 256 GB memory

– Specifications of NVIDIA Tesla P4 8GB are as follows:

▪ NVIDIA Tesla P4 8GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla P4 8GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla P4 8GB x 3, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla P4 8GB x 4, 32 CPU cores, 256 GB memory

Region CN North-Beijing4 supports only NVIDIA Tesla V100 32 GB GPUs.

CCI supports NVIDIA GPU drivers 396.26 and 410.104. The CUDA toolkit used in your application must meet the requirements listed in Table 4-2. For details about the compatibility between CUDA toolkits and drivers, see CUDA Compatibility at https://www.nvidia.com.

Table 4-2 Compatibility between NVIDIA GPU drivers and CUDA toolkits

NVIDIA GPU Driver Version    CUDA Toolkit Version
396.26                       CUDA 9.2 (9.2.88) or earlier
410.104                      CUDA 10.0 (10.0.130) or earlier

If the pod type is not GPU-accelerated, the container specifications you select must meet the following requirements:

– The total number of CPU cores in a pod can be a value in the range of 0.25–32, 48, or 64. The total number of CPU cores in a container is an integer multiple of 0.25.
– The total memory size (in GB) of a pod is an integer from 1 to 512.
– The ratio of CPU cores to memory size in a pod ranges from 1:2 to 1:8.
– A pod can have a maximum of five containers. The minimum configuration of a container is 0.25 cores and 0.2 GB. The maximum configuration of a container is the same as that of a pod.

● Configure Container

A pod generally contains only one container, but it can also contain multiple containers created from different images. If your application needs to run on multiple containers in a pod, click Add Container and then select an image.


NOTICE

If different containers in a pod listen to the same port, a port conflict will occur and the pod may fail to start. For example, if an Nginx container (which listens to port 80) has been added to a pod, a port conflict will occur when another HTTP container in the pod tries to listen to port 80.

– My Images: images you have uploaded to SWR
– Official Docker Images: public images on Docker Hub
– Shared Images: images shared by others through SWR

After the image is selected, select the image version and set the container name and CPU and memory specifications (the minimum configuration of a single container is 0.25 cores and 0.2 GB). You can also choose to enable the collection of standard output files. If file collection is enabled, Application Operations Management (AOM) bills you for the log storage space that you use.

AOM provides each account with 500 MB of log storage space for free each month. AOM bills extra space on a pay-per-use basis. For details, see Pricing Details.

In a GPU-accelerated pod (available only in GPU-accelerated namespaces), only one container can use GPUs. If there are multiple containers in your pod, you can specify the container to use GPUs by enabling the GPU option. Similarly, only one container in a GPU-accelerated pod can use IB networks. If there are multiple containers in your pod, you can specify the container to use IB networks by enabling the IB network option. If no IB network is available in your GPU-accelerated namespace, click Create IB Network to create one. For details about the IB network, see Step 8.

You can also configure the following advanced settings for a container:

– Storage: You can mount persistent volumes into containers to persist data files. Currently, EVS, Object Storage Service (OBS), SFS, and SFS Turbo volumes are supported. Click the EVS Volumes, OBS Volumes, SFS Volumes, or SFS Turbo Volumes tab, and set the volume name, capacity, container path, and disk type. After the job is created, you can manage storage volumes. For details, see 6.2 EVS Volumes, 6.3 OBS Volumes, 6.4 SFS Volumes, or 6.5 SFS Turbo Volumes.

– Log Collection: Application logs will be collected to the path you set. You need to configure policies to prevent logs from being over-sized. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of log file size. After the workload is created, you can view logs on the AOM console. For details, see 8 Log Management.

– Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. The environment variables for which you have assigned values during container creation will take effect when the container is running. This saves you the trouble of rebuilding the container image.
To manually set variables, enter the variable name and value.
To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see 7.2 Secrets.

– Liveness Probe: You can configure a liveness probe for customized health checking of the container. If the container fails the check, CCI will stop the container and determine whether to restart the container based on the restart policy. For details about how to configure a liveness probe, see 4.8 Health Check.

– Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see 4.7 Container Lifecycle Hook.

– Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to Docker's ENTRYPOINT startup instructions. For details, see 4.6 Setting Container Startup Command.

– Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see 7.1 ConfigMaps and 7.2 Secrets.

Step 3 Click Next and configure advanced settings.

Jobs can be classified into one-off jobs and custom jobs.

● One-off job: A one-off job creates one pod each time. The job is completed when the pod is successfully executed.

● Custom job: You can set the number of executions and the number of concurrent executions for a custom job. Completions specifies the number of pods that need to be successfully executed until the job is completed. Parallelism specifies the maximum number of pods that can run concurrently during the execution of the job. The value of Parallelism should be less than that of Completions.

You can set the timeout period for the job. When the job execution duration exceeds the timeout period, the job will be identified as failed, and all pods under this job will be deleted. If this parameter is left blank, the job will never time out.

Step 4 Click Next. After you confirm the configuration, click Submit. Then click Back to Job List.

If the job status is Running, the job is created successfully. You can click the job name to view job details and press F5 to view the real-time job status.

----End

Creating a Job Using kubectl

For details, see Creating a Job.
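As a rough sketch only (standard Kubernetes syntax; names and image are hypothetical), a custom job with Completions, Parallelism, and a timeout might look like the following:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                     # hypothetical name
  namespace: namespace-test        # hypothetical CCI namespace
spec:
  completions: 4                   # Completions: pods that must finish successfully
  parallelism: 2                   # Parallelism: pods that may run concurrently
  activeDeadlineSeconds: 3600      # optional timeout; the job is marked failed if it runs longer
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: pi
        image: perl                # hypothetical image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 500m
            memory: 1Gi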

4.4 Cron Job

A cron job runs a job on a specified schedule. A cron job object is similar to a line of a crontab file in Linux.
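For illustration, the schedule field of a cron job manifest uses the same five-field format as a crontab line. The sketch below is standard Kubernetes syntax (batch/v1beta1 at the time of writing; batch/v1 on newer clusters) with hypothetical names and image; it is not taken from the CCI kubectl reference.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron                 # hypothetical name
spec:
  schedule: "*/10 * * * *"         # crontab format: every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox         # hypothetical image
            command: ["echo", "hello from a CCI cron job"]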


Creating a Cron Job

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Cron Jobs. On the page displayed on the right, click Create Cron Job.

Step 2 Configure basic information.

● Job Name

Enter 1 to 63 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Consecutive periods are not allowed, and a period cannot follow or be followed by a hyphen.

● Namespace

Select a namespace. If no namespaces are available, create one by following the procedure provided in 3 Namespace.

● Description

Enter a description, which cannot exceed 250 characters.

● Pod Specifications

You can select GPU-accelerated and allocate GPUs to the workload only if the namespace is of the GPU-accelerated type.

Currently, three types of pods are provided, including general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces).

GPU-accelerated pods support the following GPUs: NVIDIA Tesla V100 32GB, NVIDIA Tesla V100 16GB, and NVIDIA Tesla P4 8GB.

– Specifications of NVIDIA Tesla V100 32GB are as follows:

▪ NVIDIA Tesla V100 32GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 32GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla V100 32GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 32GB x 8, 32 CPU cores, 256 GB memory

– Specifications of NVIDIA Tesla V100 16GB are as follows:

▪ NVIDIA Tesla V100 16GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla V100 16GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla V100 16GB x 4, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla V100 16GB x 8, 32 CPU cores, 256 GB memory

– Specifications of NVIDIA Tesla P4 8GB are as follows:

▪ NVIDIA Tesla P4 8GB x 1, 4 CPU cores, 32 GB memory

▪ NVIDIA Tesla P4 8GB x 2, 8 CPU cores, 64 GB memory

▪ NVIDIA Tesla P4 8GB x 3, 16 CPU cores, 128 GB memory

▪ NVIDIA Tesla P4 8GB x 4, 32 CPU cores, 256 GB memory


Region CN North-Beijing4 supports only NVIDIA Tesla V100 32 GB GPUs.

CCI supports NVIDIA GPU drivers 396.26 and 410.104. The CUDA toolkit used in your application must meet the requirements listed in Table 4-3. For details about the compatibility between CUDA toolkits and drivers, see CUDA Compatibility at https://www.nvidia.com.

Table 4-3 Compatibility between NVIDIA GPU drivers and CUDA toolkits

NVIDIA GPU Driver Version    CUDA Toolkit Version
396.26                       CUDA 9.2 (9.2.88) or earlier
410.104                      CUDA 10.0 (10.0.130) or earlier

If the pod type is not GPU-accelerated, the container specifications you select must meet the following requirements:

– The total number of CPU cores in a pod can be a value in the range of 0.25–32, 48, or 64. The total number of CPU cores in a container is an integer multiple of 0.25.
– The total memory size (in GB) of a pod is an integer from 1 to 512.
– The ratio of CPU cores to memory size in a pod ranges from 1:2 to 1:8.
– A pod can have a maximum of five containers. The minimum configuration of a container is 0.25 cores and 0.2 GB. The maximum configuration of a container is the same as that of a pod.

● Configure Container

A pod generally contains only one container, but it can also contain multiple containers created from different images. If your application needs to run on multiple containers in a pod, click Add Container and then select an image.

NOTICE

If different containers in a pod listen to the same port, a port conflict will occur and the pod may fail to start. For example, if an Nginx container (which listens to port 80) has been added to a pod, a port conflict will occur when another HTTP container in the pod tries to listen to port 80.

– My Images: images you have uploaded to SWR
– Official Docker Images: public images on Docker Hub
– Shared Images: images shared by others through SWR

After the image is selected, select the image version and set the container name and CPU and memory specifications (the minimum configuration of a single container is 0.25 cores and 0.2 GB). You can also choose to enable the collection of standard output files. If file collection is enabled, Application Operations Management (AOM) bills you for the log storage space that you use.

AOM provides each account 500 MB log storage space for free each month. AOM billsextra space on a pay-per-use basis. For details, see Pricing Details.

In a GPU-accelerated pod (available only in GPU-accelerated namespaces), only one container can use GPUs. If there are multiple containers in your pod, you can specify the container to use GPUs by enabling the GPU option. Similarly, only one container in a GPU-accelerated pod can use IB networks. If there are multiple containers in your pod, you can specify the container to use IB networks by enabling the IB network option. If no IB network is available in your GPU-accelerated namespace, click Create IB Network to create one. For details about the IB network, see Step 8.

You can also configure the following advanced settings for a container:

– Storage: You can mount persistent volumes into containers to persist data files. Currently, SFS volumes are supported. Click the Add SFS Volume tab, and set the volume name, capacity, container path, and disk type. After the cron job is created, you can manage storage volumes. For details, see 6.4 SFS Volumes.

– Log Collection: Application logs will be collected to the path you set. You need to configure policies to prevent logs from being over-sized. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of the log file size. After the workload is created, you can view logs on the AOM console. For details, see 8 Log Management.

– Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. The environment variables for which you have assigned values during container creation will take effect when the container is running. This saves you the trouble of rebuilding the container image. To manually set variables, enter the variable name and value. To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see 7.2 Secrets. (A YAML sketch of these reference types follows this list.)

– Liveness Probe: You can configure a liveness probe for customized health checking of the container. If the container fails the check, CCI will stop the container and determine whether to restart it based on the restart policy. For details about how to configure a liveness probe, see 4.8 Health Check.

– Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see 4.7 Container Lifecycle Hook.

– Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to Docker's ENTRYPOINT startup instructions. For details, see 4.6 Setting Container Startup Command.


– Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see 7.1 ConfigMaps and 7.2 Secrets.
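The environment variable reference types above map onto standard Kubernetes container fields. The following container section is a minimal sketch, assuming kubectl access to a CCI namespace; the container name, secret name, and key are placeholders.

containers:
- name: container-0           # placeholder container name
  image: nginx:latest         # placeholder image
  env:
  - name: MY_STATIC_VAR       # manually set variable
    value: "hello"
  - name: POD_IP              # PodIP reference
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: POD_NAME            # PodName reference
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: DB_PASSWORD         # Secret reference
    valueFrom:
      secretKeyRef:
        name: my-secret       # placeholder secret name
        key: password         # placeholder key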

Step 3 Click Next and configure advanced settings.

● Concurrency Policy

– Forbid: A new job cannot be created before the previous job is completed.

– Allow: New jobs can be created continuously.

– Replace: A new job replaces the previous job when it is time to create a job but the previous job is not completed.

● Timing Rule: Set the schedule based on which the job is executed.

● Job Record: Set the number of records to be retained for successful jobs and failed jobs.

Step 4 Click Next. After you confirm the configuration, click Submit. Then click Back to Cron Job List.

If the job status is Running, the cron job is created successfully. You can click the job name to view job details and press F5 to view the real-time job status.

----End

Creating a Cron Job Using kubectl

For details, see Creating a Cron Job.
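As a reference, the console settings above (timing rule, concurrency policy, and job records) correspond to fields of a standard Kubernetes CronJob object. The following is a minimal sketch, assuming kubectl is configured for your CCI namespace; the name, image, and schedule are placeholders, and the apiVersion may differ depending on the Kubernetes API version exposed by CCI.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cronjob            # placeholder name
spec:
  schedule: "*/5 * * * *"        # timing rule: every 5 minutes
  concurrencyPolicy: Forbid      # Forbid | Allow | Replace
  successfulJobsHistoryLimit: 3  # records kept for successful jobs
  failedJobsHistoryLimit: 1      # records kept for failed jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox       # placeholder image
            command: ["echo", "hello from the cron job"]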

4.5 Viewing Resource Usage

After you have created a workload, you may want to know the resource usage rates of each pod.

CCI allows you to monitor the CPU or GPU usage and memory usage of each pod. Go to the details page of a Deployment, job, or cron job, and click Expand for a pod in the pod list. On the Monitoring tab page, view the resource usage, as shown in Figure 4-8. You can also view the resource usage of each pod by choosing Workloads > Pods in the navigation pane.

Figure 4-8 Viewing monitoring data


4.6 Setting Container Startup Command

Starting the container is to start the main process. However, some preparations must be made before the main process is started. For example, you may configure or initialize MySQL databases before running MySQL servers. You can set ENTRYPOINT or CMD in the Dockerfile when creating an image. As shown in the following, the ENTRYPOINT ["top", "-b"] command is set in the Dockerfile. This command will be executed during container startup.

FROM ubuntu
ENTRYPOINT ["top", "-b"]

NOTICE
The startup command must be supported by the container image. Otherwise, the container fails to be started.

In CCI, you can also set the container startup command. For example, to add the preceding command in the Dockerfile, you can click Add and enter the top command, and then click Add again and enter -b in the Advanced Settings area when creating a workload, as shown in the following figure.

Figure 4-9 Startup command

When Docker runs, only one ENTRYPOINT command is supported. The startup command set in CCI will overwrite the ENTRYPOINT and CMD commands set in the Dockerfile during image creation. The following table lists the rules.

Image ENTRYPOINT | Image CMD    | Command for Running a Container | Parameter for Running a Container | Command Executed
[touch]          | [/root/test] | Not set                         | Not set                           | [touch /root/test]
[touch]          | [/root/test] | [mkdir]                         | Not set                           | [mkdir]
[touch]          | [/root/test] | Not set                         | [/opt/test]                       | [touch /opt/test]
[touch]          | [/root/test] | [mkdir]                         | [/opt/test]                       | [mkdir /opt/test]
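If you create the workload with kubectl instead of the console, the startup command and its parameters map onto the command and args fields of the container spec, which override the image ENTRYPOINT and CMD as shown in the table above. The following is a minimal sketch; the container name and image are placeholders.

containers:
- name: container-0        # placeholder container name
  image: ubuntu            # placeholder image
  command: ["top"]         # overrides the image ENTRYPOINT
  args: ["-b"]             # overrides the image CMD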

4.7 Container Lifecycle Hook

Setting a Container Lifecycle Hook

Based on Kubernetes, CCI provides containers with lifecycle hooks. The hooks enable containers to run code triggered by events during their management lifecycle. For example, if you want a container to perform a certain operation before it is stopped, you can register a hook. The following lifecycle hooks are provided:

● Post-Start Processing: triggered immediately after the container is started

● Pre-Stop Processing: triggered immediately before the container is stopped

Currently, CCI supports only hook handlers of the Exec type, which execute a specific command.

During workload creation, expand the Advanced Settings area, and click the Post-Start Processing or Pre-Stop Processing tab in the Lifecycle area.

For example, if you want to run the /postStart.sh all command in the container, configure data on the page as shown in the following figure. The first row indicates the script name and the second row indicates a parameter setting.

Figure 4-10 Command settings


Setting a Container Lifecycle Hook Through kubectl

For details, see Lifecycle Management.
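For reference, the same post-start hook configured on the console can be expressed in a container spec as an Exec handler. The following is a minimal sketch assuming the /postStart.sh script exists in the image; the container name, image, and commands are placeholders.

containers:
- name: container-0                      # placeholder container name
  image: nginx:latest                    # placeholder image
  lifecycle:
    postStart:                           # Post-Start Processing
      exec:
        command: ["/postStart.sh", "all"]
    preStop:                             # Pre-Stop Processing
      exec:
        command: ["/bin/sh", "-c", "echo stopping"]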

4.8 Health Check

Container health can be checked regularly during container running.

CCI provides two health check methods based on Kubernetes:

● Liveness probe: checks whether a containerized application is alive. The liveness probe is similar to the ps command for checking whether a process is running. If the containerized application fails the check, the container will be restarted. If the containerized application passes the check, no operation will be performed.

● Readiness probe: checks whether a containerized application is ready to handle requests. An application may take a long time to start up and provide services, for example, because it needs to load disk data or wait for the startup of an external module. In this case, application processes are running, but the application is not ready to provide services. This is where the readiness probe comes in.

Health Check Modes

● HTTP Request Mode
  The probe sends an HTTP GET request to the container. If the probe receives a 2xx or 3xx status code, the container is healthy.
● Command Line Script
  The probe runs a command in the container and checks the exit status code. If the exit status code is 0, the container is healthy. For example, if you want to run the cat /tmp/healthy command to check whether the /tmp/healthy file exists, configure data on the page as shown in the following figure.

Figure 4-11 Command setting


Common Parameter Description

Table 4-4 Health check parameters

Parameter          | Description
Time Window (s)    | Delay time (unit: second). For example, if this parameter is set to 10, the probe starts 10 seconds after the container is started.
Timeout Period (s) | Timeout period (unit: second). For example, if this parameter is set to 10, the container must return a response within 10 seconds. Otherwise, the probe is counted as failed. If this parameter is set to 0 or left unspecified, the default value (1 second) is used.

Setting a Health Check Using kubectl

● For details about how to set the liveness probe, see Liveness Probe.
● For details about how to set the readiness probe, see Readiness Probe.
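For reference, the two check modes and the common parameters in Table 4-4 correspond to standard Kubernetes probe fields (Time Window maps to initialDelaySeconds and Timeout Period to timeoutSeconds). The following container spec is a minimal sketch; the container name, image, port, and paths are placeholders.

containers:
- name: container-0              # placeholder container name
  image: nginx:latest            # placeholder image
  livenessProbe:                 # HTTP request mode
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 10      # Time Window (s)
    timeoutSeconds: 10           # Timeout Period (s)
  readinessProbe:                # command line script mode
    exec:
      command: ["cat", "/tmp/healthy"]
    initialDelaySeconds: 5
    timeoutSeconds: 1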

4.9 Web-Terminal

The web-terminal provides the container connection function to help you quickly debug the container.

Constraints and Restrictions

● The web-terminal logs in to the container by using the sh shell by default. Therefore, the container must support the sh shell.
● Only running containers can be logged in to by using the web-terminal.
● You need to enter exit in the web-terminal before exiting; otherwise, the sh process will remain.

Connecting to the Container by Using the Web-terminal

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click the workload to be accessed.

Step 2 In the Pod List area of the workload details page, click the arrow icon at the left of the pod and then click the CLI tab.

When # is displayed, you have logged in to the container.


Figure 4-12 Container CLI

----End

4.10 Upgrading a Workload

A workload can be updated and upgraded after being created. Currently, rolling upgrade and in-place upgrade are supported.

● Rolling upgrade: Gradually replaces an old pod with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.
● In-place upgrade: Deletes an old pod and then creates a new one. Services will be interrupted during the upgrade.

Upgrading a Workload

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click the name of the workload to be upgraded. Then, click Upgrade in the upper right corner of the workload details page.

Step 2 Modify pod specifications.

You can select GPU-accelerated and allocate GPUs to the workload only if the namespace is of the GPU-accelerated type.

Currently, three types of pods are provided: general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces).

GPU-accelerated pods support the following GPUs: NVIDIA Tesla V100 32GB, NVIDIA Tesla V100 16GB, and NVIDIA Tesla P4 8GB.

● Specifications of NVIDIA Tesla V100 32GB are as follows:
  – NVIDIA Tesla V100 32GB x 1, 4 CPU cores, 32 GB memory
  – NVIDIA Tesla V100 32GB x 2, 8 CPU cores, 64 GB memory
  – NVIDIA Tesla V100 32GB x 4, 16 CPU cores, 128 GB memory
  – NVIDIA Tesla V100 32GB x 8, 32 CPU cores, 256 GB memory
● Specifications of NVIDIA Tesla V100 16GB are as follows:
  – NVIDIA Tesla V100 16GB x 1, 4 CPU cores, 32 GB memory
  – NVIDIA Tesla V100 16GB x 2, 8 CPU cores, 64 GB memory
  – NVIDIA Tesla V100 16GB x 4, 16 CPU cores, 128 GB memory
  – NVIDIA Tesla V100 16GB x 8, 32 CPU cores, 256 GB memory
● Specifications of NVIDIA Tesla P4 8GB are as follows:
  – NVIDIA Tesla P4 8GB x 1, 4 CPU cores, 32 GB memory
  – NVIDIA Tesla P4 8GB x 2, 8 CPU cores, 64 GB memory
  – NVIDIA Tesla P4 8GB x 3, 16 CPU cores, 128 GB memory
  – NVIDIA Tesla P4 8GB x 4, 32 CPU cores, 256 GB memory

If the pod type is not GPU-accelerated, the container specifications you select must meet the following requirements:

● The total number of CPU cores in a pod can be a value in the range of 0.25-32, 48, or 64. The total number of CPU cores in a container is an integer multiple of 0.25.
● The total memory size (in GB) of a pod is an integer from 1 to 512.
● The ratio of CPU cores to memory size in a pod ranges from 1:2 to 1:8.
● A pod can have a maximum of five containers. The minimum configuration of a container is 0.25 cores and 0.2 GB. The maximum configuration of a container is the same as that of a pod.

Step 3 Modify container settings.

1. Click Change Image to select a new image.

Figure 4-13 Changing the image

– My Images: images you have uploaded to SWR
– Official Docker Images: public images on Docker Hub
– Shared Images: images shared by others through SWR

2. After the image is selected, select the image version and set the container name and the CPU and memory specifications (the minimum configuration of a single container is 0.25 cores and 0.2 GB). You can also choose to enable the collection of standard output files. If file collection is enabled, AOM bills you for the log storage space that you use.

AOM provides each account 500 MB of log storage space for free each month. AOM bills extra space on a pay-per-use basis. For details, see Pricing Details.


Only one container in a pod can use GPUs. If your pod has multiple containers, you can specify the container that can use GPUs by enabling the GPU option.

You can also configure the following advanced settings for a container:

– Storage: You can mount persistent volumes into containers to persist data files. Currently, EVS, SFS, and SFS Turbo volumes are supported. Click the EVS Volumes, SFS Volumes, or SFS Turbo Volumes tab, and set the volume name, capacity, container path, and disk type. After the workload is created, you can manage storage volumes. For details, see 6.2 EVS Volumes, 6.4 SFS Volumes, or 6.5 SFS Turbo Volumes.

Currently, SFS Turbo volumes are unavailable in region CN East-Shanghai1.

– Log Collection: Application logs will be collected to the path you set. You need to configure policies to prevent logs from being over-sized. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of the log file size. After the workload is created, you can view logs on the AOM console. For details, see 8 Log Management.

– Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. The environment variables for which you have assigned values during container creation will take effect when the container is running. This saves you the trouble of rebuilding the container image. To manually set variables, enter the variable name and value. To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see 7.2 Secrets.

– Health Check: Container health can be checked regularly during container running. For details about how to configure health checks, see 4.8 Health Check.

– Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see 4.7 Container Lifecycle Hook.

– Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to Docker's ENTRYPOINT startup instructions. For details, see 4.6 Setting Container Startup Command.

– Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see 7.1 ConfigMaps and 7.2 Secrets.

Step 4 Click Next and select an upgrade policy.

Two options are available: Rolling upgrade and In-place upgrade.

● Rolling upgrade: Gradually replaces an old pod with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.
  Maximum Number of Unavailable Pods: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods

● In-place upgrade: Deletes an old pod and then creates a new one. Services will be interrupted during the upgrade.

Step 5 Click Next and then Submit.

----End

Upgrading a Workload Using kubectl

For details about how to use kubectl to upgrade a workload, see Upgrading a Deployment in Deployment.
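As a reference, an upgrade performed with kubectl amounts to applying an updated Deployment manifest; the strategy fields control the rolling upgrade behavior described in Step 4. The following is a minimal sketch with placeholder names and images; apply it with kubectl apply -f against your CCI namespace.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                      # placeholder Deployment name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate            # rolling upgrade
    rollingUpdate:
      maxUnavailable: 1            # Maximum Number of Unavailable Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:1.16          # changing the image triggers the upgrade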

4.11 Scaling a Workload

This section describes two workload scaling methods: auto scaling and manual scaling. You can select a scaling method as required.

● Auto scaling: Supports metric-based, scheduled, and periodic policies. When the configuration is complete, pods can be automatically added or deleted based on resource changes or a specified schedule.
● Manual scaling: Increases or decreases the number of pods immediately after the configuration is complete.

NOTICE
If a pod mounted with an EVS volume is deleted, the EVS disk will not be deleted. If a new pod with the same name is created, the new pod cannot be mounted with any EVS volume.

Auto Scaling

Currently, auto scaling is supported only for Deployments.

A properly configured auto scaling policy eliminates the need to manually adjust resources in response to service changes and traffic peaks, helping you reduce manpower and resource consumption. Currently, CCI supports the following types of auto scaling policies:

Metric-based: Scales the workload based on CPU/memory usage. You can specify a CPU/memory usage threshold. If the usage is higher or lower than the threshold, instances can be automatically added or deleted.

Scheduled: Scales the workload at a specified time. A scheduled scaling policy is suited for scenarios such as flash sales and anniversary promotions.

Periodic: Scales the workload on a daily, weekly, or monthly basis. A periodic scaling policy is suited for applications that have periodic traffic changes.

● Configure a metric-based auto scaling policy.


a. Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click the name of the target Deployment.

b. In the Scaling area of the Deployment details page, click Auto Scaling and then click Add Scaling Policy.

Figure 4-14 Adding a metric-based auto scaling policy

Table 4-5 Parameters of a metric-based auto scaling policy

Parameter         | Description
Policy Name       | Name of a policy.
Policy Type       | Select Metric-based policy.
Trigger Condition | Select CPU usage or Memory usage. If you set the trigger condition to average memory usage > 70%, the scaling policy will be triggered when the average memory usage exceeds 70%.
Duration          | Statistical period. Select a value from the drop-down list. If the value is set to 60, metric statistics are collected every 60 seconds.
Consecutive Times | If this parameter is set to 3, the configured action will be triggered when the threshold is reached for 3 consecutive statistical periods.
Policy Action     | Action to be executed when the policy is triggered. The action can be to add or reduce the number of instances.

c. Click Confirm.

The policy is added to the policy list, and its status is Enabled.

Figure 4-15 Policy enabled

When the trigger condition is met, the auto scaling policy will be executed.

● Configure a scheduled auto scaling policy.

a. In the Scaling area, click Auto Scaling and then click Add Scaling Policy.

Figure 4-16 Adding a scheduled auto scaling policy

Table 4-6 Parameters of a scheduled auto scaling policy

Parameter     | Description
Policy Name   | Name of a policy.
Policy Type   | Select Scheduled Policy.
Triggered     | Time when the policy is triggered.
Policy Action | Action to be executed when the policy is triggered. The action can be to add or reduce the number of instances.

b. Click Confirm.

The policy is added to the policy list, and its status is Enabled.

● Configure a periodic auto scaling policy.

a. In the Scaling area, click Auto Scaling and then click Add Scaling Policy.

Figure 4-17 Adding a periodic auto scaling policy

Table 4-7 Parameters of a periodic auto scaling policy

Parameter     | Description
Policy Name   | Name of a policy.
Policy Type   | Select Periodic Policy.
Select Time   | Time range and specific time when the policy is triggered.
Policy Action | Action to be executed when the policy is triggered.

b. Click Confirm.

The policy is added to the policy list, and its status is Enabled.
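CCI manages these policies through the console as described above. For comparison only, a metric-based policy is conceptually similar to a standard Kubernetes HorizontalPodAutoscaler; the following sketch is not necessarily how CCI implements auto scaling, and the name, target, and threshold are placeholders.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                    # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                      # Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70 # trigger condition: CPU usage > 70%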

Manual Scaling

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click the name of the target Deployment.

Step 2 Under Manual Scaling in the Scaling area, modify the number of instances (for example, change the value to 3), and then click Save. The scaling takes effect immediately.

CCI provides a time window for running pre-stop processing commands before an application is deleted. If a command process is still running when the time window expires, the application will be forcibly deleted.

Figure 4-18 Changing the number of instances

Step 3 In the pod list, you can see that new pods are being created. When the status of all added pods becomes Running, the scaling is completed successfully.

Figure 4-19 Pod list after a manual scaling

----End


5 Workload Network Access

5.1 Network Access Overview

Workload access scenarios can be categorized as follows:

● 5.2 Private Network Access: Access to HUAWEI CLOUD resources.
  – Service: This mode is used when workloads in the same namespace need to access each other.
  – ELB (private network load balancer): This mode is used when a workload and other HUAWEI CLOUD resources (such as ECSs) in the same VPC as the workload need to access each other. In addition, this mode can be used when workloads in the same VPC but different namespaces need to access each other. In this mode, the current workload can be accessed using Internal domain name or Load balancer's IP address:Port. The HTTP/HTTPS and TCP/UDP protocols are supported. If other HUAWEI CLOUD resources are in a VPC different from the current workload, you can also create a VPC peering connection to enable communication between VPCs.
● 5.3 Public Network Access: A workload can be accessed from public networks through a load balancer. The load balancer must be in the same VPC as the workload.
● 5.4 Accessing Public Networks from a Container: Containers can access public networks by using SNAT rules, which are configured on the NAT Gateway.


Figure 5-1 Network access diagram

5.2 Private Network Access

The following two modes are available for private network access:

● Workload Access Through a Service: This mode is used when workloads in the same namespace need to access each other.
● Workload Access Through a Private Network Load Balancer: This mode is used when a workload and other HUAWEI CLOUD resources (such as ECSs) in the same VPC as the workload need to access each other. In addition, this mode can be used when workloads in the same VPC but different namespaces need to access each other. In this mode, the current workload can be accessed using Internal domain name or Load balancer's IP address:Port. The HTTP/HTTPS and TCP/UDP protocols are supported. If other HUAWEI CLOUD resources are in a VPC different from the current workload, you can also create a VPC peering connection to enable communication between VPCs.

A pod is the smallest resource unit in a workload. Accessing a workload is to access the pods in the workload. Pods in a workload can be dynamically created and destroyed, for example, during capacity scaling or rolling upgrade. In this case, the pod addresses will change, which makes it inconvenient to access pods.

To solve this problem, CCI provides the coredns add-on (used for internal domain name resolution). Pod changes are managed by workloads and are not perceived externally.

A workload can be accessed using Service name:Workload access port, where the access port is mapped to the container port. As shown in the following figure, if the pod in the frontend needs to access the pods in the backend, the former only needs to access nginx:8080.

Figure 5-2 Workload access through the service

Setting Service-based Workload Access When Creating a Workload

To enable a workload to be accessed through Service name:Workload access port, configure the following parameters when creating the workload:

● Service Name: name of a service, which is an object for managing pod access. For more details, see Service.
● coredns: Specifies whether to install the coredns add-on. The coredns add-on resolves internal domain names of workloads. If this add-on is not installed, the workload cannot be accessed through Service name:Workload access port.

● Workload Port Settings:


– Protocol: Specifies the protocol used to access the workload. Select TCP or UDP.
– Workload Access Port: Specifies the port for accessing the workload.
– Container Port: Specifies the port on which the container listens. The workload access port will be mapped to the container port.

Figure 5-3 Configuring service-based access parameters

Setting Service-based Workload Access After a Workload Is Created

You can configure service-based access settings after a workload is created. The settings have no impact on the workload status and take effect immediately. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Services. On the page displayed on the right, click Create Service.

Step 2 On the Create Service page, select ClusterIP for Access Type.

Step 3 Set intra-cluster access parameters.

● Service Name: Specifies the name of a service, which is an object for managing pod access.
● Namespace: Specifies the namespace to which the workload belongs.
● Workload: Select a workload for which you want to add the service.
● Port Settings:
  – Protocol: Specifies the protocol used to access the workload. Select TCP or UDP.
  – Access Port: Specifies the port for accessing the workload.
  – Container Port: Specifies the port on which the container listens. The workload access port will be mapped to the container port.

Step 4 Click Submit. The intra-cluster access service will be added for the workload.

----End

Creating a Service Using kubectl

For details, see Service.
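For reference, the console parameters above map onto a standard Kubernetes Service of type ClusterIP. The following is a minimal sketch with placeholder names and ports; the selector must match the labels of the target workload's pods.

apiVersion: v1
kind: Service
metadata:
  name: nginx                # Service Name (placeholder)
  namespace: my-namespace    # placeholder namespace
spec:
  type: ClusterIP
  selector:
    app: nginx               # must match the workload's pod labels
  ports:
  - protocol: TCP            # Protocol
    port: 8080               # Access Port (workload access port)
    targetPort: 80           # Container Port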

Workload Access Through a Private Network Load Balancer

If a workload needs to be accessed by other HUAWEI CLOUD resources or CCI workloads in other namespaces, bind an enhanced load balancer of the private network type to the workload during creation. In this way, the workload can be accessed by using the virtual IP address of the load balancer.

When configuring access settings, select a private network load balancer and follow the description in 5.3 Public Network Access to configure the other parameters.

Figure 5-4 Setting ELB-based workload access when creating a workload

Figure 5-5 Setting ELB-based workload access after a workload is created

Setting Ingress-based Workload Access

You can configure ingress-based access settings after a workload is created. The settings have no impact on the workload status and take effect immediately. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Ingresses. On the page displayed on the right, click Create Ingress.

Step 2 Set ingress parameters.

● Ingress Name: Enter a custom ingress name.
● Namespace: Select the namespace to which the ingress is to be added.
● Enhanced Load Balancer: A load balancer automatically distributes Internet access traffic to multiple nodes running the workload.
● External Port: Port number that is open to the ELB service address. The port number can be specified randomly.


● Front-End Protocol: HTTP and HTTPS are available. If you select HTTPS, also choose a key certificate. For details about the certificate format, see Certificate Format.
  – The key certificate ingress-test-secret.yaml is required only when HTTPS is selected. For details on how to create a key, see 7.2 Secrets.
  – If there is already an HTTPS ingress for the chosen port on the load balancer, the certificate of the new HTTPS ingress must be the same as the certificate of the existing ingress. This means that a listener has only one certificate. If two certificates, each with a different ingress, are added to the same listener of the same load balancer, only the earliest certificate takes effect on the load balancer.
● Domain Name: optional. It indicates the actual domain name used for access. You are expected to buy the domain name and complete ICP filing for it. Ensure that the domain name can be resolved into the service address of the selected load balancer. If a domain name rule is configured, the domain name must always be used for access.
● Ingress Rule
  – Rule Matching: Currently, only Prefix match is supported.
    Prefix match: If the mapping URL is /healthz, any URL that matches the prefix can be accessed, for example, /healthz/v1 and /healthz/v2.
  – URL: Access path to be registered.
  – Service Name: Select the service whose ingress is to be added.
  – Service Port: Port on which the container in the container image listens.

Step 3 Click Submit.

After the ingress is created, it is displayed in the ingress list.

----End

Updating a Service

After adding a service, you can update the port configuration of the service. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Services. On the Services page, select the corresponding namespace, and click Update in the row where the service to be updated resides.

Step 2 On the Update Service page, select ClusterIP for Access Type.

Step 3 Update intra-cluster access parameters.

● Cluster Name: Name of the cluster where the workload runs. The value is inherited from the workload creation page and cannot be changed.
● Namespace: Namespace where the workload is located. The value is inherited from the workload creation page and cannot be changed.
● Workload: Workload for which you want to add a service.
● Port Settings:
  – Protocol: Select a protocol used by the service.
  – Container Port: Port on which the workload listens. For example, an Nginx workload listens on port 80.
  – Access Port: Port mapped to the container port at the cluster-internal IP address. The workload can be accessed at <cluster-internal IP address>:<access port>. The port number range is 1–65535.

Step 4 Click Update. The service will be updated for the workload.

----End

Updating an Ingress

After adding an ingress, you can update its port, domain name, and route configuration. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Ingresses, select the corresponding namespace, and click Update in the row where the ingress to be updated resides.

Step 2 On the Update Ingress page, set the following parameters:

● External Port: Port number that is open to the ELB service address. The port number can be specified randomly.
● Domain Name: optional. It indicates the actual domain name to be accessed. You are expected to buy the domain name and complete ICP filing for it. Ensure that the domain name can be resolved into the service address of the selected load balancer. If a domain name rule is configured, the domain name must always be used for access.
● Ingress Rule: You can click Add Ingress Rule to add a rule.
  – Rule Matching: Currently, only Prefix match is supported.
    Prefix match: If the mapping URL is /healthz, any URL that matches the prefix can be accessed, for example, /healthz/v1 and /healthz/v2.
  – URL: Access path to be registered, for example, /healthz.
  – Service Name: Select the service whose ingress is to be updated.
  – Service Port: Port on which the container in the container image listens.

Step 3 Click Update. The ingress will be updated for the workload.

----End

5.3 Public Network Access

CCI allows workloads to be accessed from public networks. To implement this access, you need to bind an enhanced load balancer to a workload. The load balancer must be in the same VPC as the workload. Currently, both layer-4 and layer-7 public network access are supported.

● TCP and UDP are supported for layer-4 public network access. After configuration is complete, the workload can be accessed using Public network IP address of the load balancer:Load balancer port.
● HTTP and HTTPS are supported for layer-7 public network access. After configuration is complete, the workload can be accessed using http://Public network domain name or public network IP address of the load balancer:Load balancer port/Mapping path.


Services forward requests using the layer-4 TCP and UDP protocols. Ingresses forward requests using the layer-7 HTTP and HTTPS protocols. Domain names and paths can be used to achieve finer granularities, as shown in the following figure.

Figure 5-6 Ingress-Service

The following figure shows an example of accessing a workload using the HTTP protocol.


Figure 5-7 Public network access

Setting Public Network Access When Creating a Workload

During workload creation, select Internet access for Access Type and configure the following parameters:

● Service Name: Specifies the name of a service, which is an object for managing pod access. For more details, see Service.
● coredns: Specifies whether to install the coredns add-on. The coredns add-on resolves internal domain names of workloads. If this add-on is not installed, the workload cannot be accessed through Service name:Workload access port.
● Load Balancer: Select an enhanced load balancer. If no enhanced load balancer is available, click Create an enhanced load balancer to create one.


NOTICE
The enhanced load balancer to be created must be in the same VPC as the workload.

● ELB Protocol: Communication protocol for public network access, which can be HTTP/HTTPS or TCP/UDP.
● Ingress Name: Specifies the name of the ingress, which is an object for managing layer-7 access. If this parameter is not set, the workload name will be used as the ingress name by default. For more details, see Ingress.
● Public Domain Name (configurable when the HTTP/HTTPS protocol is used): To access the workload using a domain name, you need to purchase a public domain name and point the resolved domain name to the EIP of the selected load balancer.
● Certificate (mandatory when the HTTPS protocol is selected): For details about how to import an SSL certificate, see 7.3 SSL Certificates.
● ELB Port: Select the protocol and port for accessing the workload using the load balancer.
● Workload Port Protocol: Communication protocol for accessing the workload, which can be TCP or UDP. If the ELB protocol is set to HTTP/HTTPS, the workload port protocol will be TCP.
● Workload Port Settings:
  – Workload Access Port: Specifies the port for accessing the workload.
  – Container Port: Specifies the port on which the container listens. The workload access port will be mapped to the container port.
● HTTP Route Settings:
  – Mapping Path: Path to be accessed. It must start with a slash (/). For example, /api/web. It can also be the root path /.
  – Workload Access Port: Previously configured workload access port.

As shown in Figure 5-8, if the IP address of the load balancer is 10.10.10.10, you can access the workload by visiting http://10.10.10.10:6071/.


Figure 5-8 Configuring public network access parameters

Setting Public Network Access After a Workload Is Created

You can configure service-based access settings after a workload is created. The settings have no impact on the workload status and take effect immediately. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Services. On the page displayed on the right, click Create Service.

Step 2 On the Create Service page, select LoadBalancer for Access Type.

Step 3 Set ELB-based access parameters.

● Service Name: Specifies the name of a service, which is an object for managing pod access.
● Namespace: Specifies the namespace to which the workload belongs.
● Workload: Select a workload for which you want to add the service.
● Enhanced Load Balancer: Select a public network load balancer. If no load balancer is available, click Create Load Balancer to create one.

NOTICE
The enhanced load balancer to be created must be in the same VPC as the workload.

● Port Settings:


– Protocol: Specifies the protocol used to access the workload. Select TCP or UDP.
– Access Port: Specifies the port for accessing the workload.
– Container Port: Specifies the port on which the container listens. The workload access port will be mapped to the container port.

Figure 5-9 Setting public network access after a workload is created

Step 4 Click Submit. The LoadBalancer service will be added for the workload.

----End

Setting Ingress-based Workload Access

You can configure ingress-based access settings after a workload is created. The settings have no impact on the workload status and take effect immediately. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Ingresses. On the page displayed on the right, click Create Ingress.

Step 2 Set ingress parameters.

● Ingress Name: Enter a custom ingress name.
● Namespace: Select the namespace to which the ingress is to be added.
● Enhanced Load Balancer: A load balancer automatically distributes Internet access traffic to multiple nodes running the workload.
● External Port: Port number that is open to the ELB service address. The port number can be specified randomly.
● Front-End Protocol: HTTP and HTTPS are available. If you select HTTPS, also choose a key certificate. For details about the certificate format, see Certificate Format.


– The key certificate ingress-test-secret.yaml is required only when HTTPS is selected. For details on how to create a key, see 7.2 Secrets.
– If there is already an HTTPS ingress for the chosen port on the load balancer, the certificate of the new HTTPS ingress must be the same as the certificate of the existing ingress. This means that a listener has only one certificate. If two certificates, each with a different ingress, are added to the same listener of the same load balancer, only the earliest certificate takes effect on the load balancer.

● Domain Name: optional. It indicates the actual domain name to be accessed. You are expected to buy the domain name and complete ICP filing for it. Ensure that the domain name can be resolved into the service address of the selected load balancer. If a domain name rule is configured, the domain name must always be used for access.
● Ingress Rule
  – Rule Matching: Currently, only Prefix match is supported.
    Prefix match: If the mapping URL is /healthz, any URL that matches the prefix can be accessed, for example, /healthz/v1 and /healthz/v2.
  – URL: Access path to be registered.
  – Service Name: Select the service whose ingress is to be added.
  – Service Port: Port on which the container in the container image listens.

Step 3 Click Submit.

After the ingress is created, it is displayed in the ingress list.

----End

Troubleshooting the Failure to Access a Workload from the Public Network

1. A workload can be accessed from the public network only when it is in the running state. If your workload is abnormal or not ready, it cannot be accessed from the public network.

2. It may take 1 to 3 minutes from the time when the workload is created to the time when the workload is ready for public network access. During this duration, the network route has not been configured. As a result, the workload cannot be accessed from the public network.

3. If the workload cannot be accessed 3 minutes after being created, click the workload. On the details page that is displayed, choose Access Settings to check whether any alarm events are reported. The following are two common events:
   – Listener port is repeated: This event occurs when you delete a workload for which a load balancer port is configured, and immediately after that, create a workload using the same load balancer port. It takes some time for a load balancer port to be deleted. You are advised to delete the workload and create it again, or wait for 5–10 minutes until the Internet access can be used normally.
   – Create listener failed: This event usually occurs because the listener quota is exceeded. Select another load balancer with a sufficient quota.

4. The workload is inaccessible 3 minutes after it is created, and there is no alarm event. The possible reason is that no corresponding process is actually listening on the user-configured container port. Currently, CCI cannot detect this type of exception. You need to check whether the image is listening on this container port. If the container port is properly listened on, the access failure may lie in the load balancer. In this case, check the status of the load balancer.

Enabling Public Network Access Using kubectl

To enable access to a workload from the public network, two Kubernetes objects (that is, a Service and an Ingress) are required. For details, see Service and Ingress.
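The following is a minimal sketch of the two objects, assuming kubectl access to your CCI namespace. All names, ports, and paths are placeholders, the selector must match your workload's pod labels, and the Ingress apiVersion and any CCI- or ELB-specific annotations (not shown here) depend on the Kubernetes API version and configuration CCI exposes; consult the Service and Ingress references above for the exact fields.

apiVersion: v1
kind: Service
metadata:
  name: nginx                       # placeholder Service name
spec:
  type: LoadBalancer                # bound to an enhanced load balancer
  selector:
    app: nginx                      # must match the workload's pod labels
  ports:
  - protocol: TCP
    port: 8080                      # workload access port
    targetPort: 80                  # container port
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress               # placeholder Ingress name
spec:
  rules:
  - http:
      paths:
      - path: /                     # mapping path (prefix match)
        backend:
          serviceName: nginx        # Service defined above
          servicePort: 8080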

Updating a Service

After adding a service, you can update the port configuration of the service. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Services. On the Services page, select the corresponding namespace, and click Update in the row where the service to be updated resides.

Step 2 On the Update Service page, select LoadBalancer for Access Type.

Step 3 Update load balancing parameters.

● Cluster Name: Name of the cluster where the workload runs. The value is inherited from the workload creation page and cannot be changed.
● Namespace: Namespace where the workload is located. The value is inherited from the workload creation page and cannot be changed.
● Workload: Workload for which you want to update the service.
● Load Balancer: The value is inherited from the workload creation page and cannot be changed.
● Port Settings:
  – Protocol: Specifies the protocol used to access the workload. Select TCP or UDP.
  – Access Port: Specifies the port for accessing the workload.
  – Container Port: Specifies the port on which the container listens. The workload access port will be mapped to the container port.

Step 4 Click Submit. The service will be updated for the workload.

----End

Updating an Ingress

After adding an ingress, you can update its port, domain name, and route configuration. The procedure is as follows:

Step 1 Log in to the CCI console. In the navigation pane, choose Network Management > Ingresses, select the corresponding namespace, and click Update in the row where the ingress to be updated resides.

Step 2 On the Update Ingress page, set the following parameters:


● External Port: Port number that is open to the ELB service address. The port number can be specified randomly.
● Domain Name: optional. It indicates the actual domain name to be accessed. You are expected to buy the domain name and complete ICP filing for it. Ensure that the domain name can be resolved into the service address of the selected load balancer. If a domain name rule is configured, the domain name must always be used for access.
● Ingress Rule: You can click Add Ingress Rule to add a rule.
  – Rule Matching: Currently, only Prefix match is supported.
    Prefix match: If the mapping URL is /healthz, any URL that matches the prefix can be accessed, for example, /healthz/v1 and /healthz/v2.
  – URL: Access path to be registered, for example, /healthz.
  – Service Name: Select the service whose ingress is to be updated.
  – Service Port: Port on which the container in the container image listens.

Step 3 Click Update. The ingress will be updated for the workload.

----End

5.4 Accessing Public Networks from a Container

You can use the NAT Gateway service to enable container instances in a VPC to access public networks. The NAT Gateway service provides source network address translation (SNAT), which translates private IP addresses to a public IP address by binding an elastic IP address (EIP) to the gateway, providing secure and efficient access to the Internet. Figure 5-10 shows the SNAT architecture. The SNAT function allows the container instances in a VPC to access the Internet without being bound to an EIP. SNAT supports a large number of concurrent connections, which makes it suitable for applications involving a large number of requests and connections.


Figure 5-10 SNAT

To enable container instances to access the Internet, perform the following steps:

Step 1 Buy an EIP.

1. Log in to the management console.

2. Click in the upper left corner to select the desired region and project.
3. Choose Service List > Network > Virtual Private Cloud.
4. In the navigation pane, choose Elastic IP and Bandwidth > EIPs.
5. On the EIPs page, click Buy EIP.
6. Set parameters as required.

Set Region to the region where container instances are located.


Figure 5-11 Buying an EIP

Step 2 Buy a NAT gateway. For details, see Buying a NAT Gateway.

1. Log in to the management console.

2. Click in the upper left corner to select the desired region and project.

3. Choose Service List > Network > NAT Gateway.

4. On the NAT Gateway page, click Buy NAT Gateway.

5. Set parameters as required.

Select the VPC and subnet that you have configured for the namespace of container instances.

Figure 5-12 Buying a NAT gateway


Step 3 Configure an SNAT rule and bind the EIP to the subnet. For details, see Adding an SNAT Rule.

1. Log in to the management console.

2. Click in the upper left corner to select the desired region and project.
3. Choose Service List > Network > NAT Gateway.
4. On the displayed page, click the name of the NAT gateway for which you want to add the SNAT rule.
5. On the SNAT Rules tab page, click Add SNAT Rule.
6. Set parameters as required.

Select the subnet that you have configured for the namespace of container instances.

Figure 5-13 Adding an SNAT rule

After the SNAT rule is configured, public networks can be accessed from the container. As shown in the following figure, public networks can be pinged from the container.


Figure 5-14 Accessing public networks from a container

----End


6 Storage Management

6.1 Overview

CCI supports multiple types of persistent storage to meet your requirements in different scenarios. You can use the following types of storage volumes when creating a workload:

● Elastic Volume Service (EVS) volumes

You can mount an EVS volume into a container path. When the container is migrated, the mounted EVS volume is migrated together. EVS volumes are suited for persistent data storage. For details, see 6.2 EVS Volumes.

When using EVS volumes to store data, pay attention to the following points. Otherwise, pods cannot run properly.

● An EVS volume cannot be mounted into multiple pods.

● An EVS volume with multiple partitions cannot be mounted to a pod.

● Scalable File Service (SFS) volumes

You can create SFS volumes and mount them to specific container paths. The volumes created by the underlying SFS service can also be used. SFS volumes are suited for workload scenarios where data needs to be persisted and read and written by multiple nodes, such as media processing, content management, big data analysis, and workload analysis. For details, see 6.4 SFS Volumes.

● SFS Turbo volumes

You can create SFS Turbo volumes and mount them to specific container paths. SFS Turbo volumes are fast, on-demand, and scalable. They are suitable for DevOps, containerized microservices, and enterprise office applications. For details, see 6.5 SFS Turbo Volumes.

Currently, SFS Turbo volumes are unavailable in region CN East-Shanghai1.

● Object Storage Service (OBS) volumes


You can mount OBS volumes to specific container paths. OBS is a cloud storage service that provides massive, secure, highly reliable, and low-cost data storage capabilities. For details, see 6.3 OBS Volumes.

PersistentVolumeClaim (PVC)

CCI uses PVCs to apply for and manage persistent storage. With PVCs, you only need to specify the type and capacity of the storage resources you want, without worrying about how to create and release the underlying storage resources.

In practice, you can bind a PVC to the volume in a pod and use persistent storage through the PVC, as shown in Figure 6-1.

Figure 6-1 Using persistent storage

On the CCI console, you can import existing EVS disks, SFS file systems, and SFS Turbo file systems. When importing such a storage resource, CCI creates a PVC for the resource.

You can also purchase EVS disks and SFS file systems on the CCI console. After these storage resources are purchased, CCI will create PVCs for them and import them.

6.2 EVS Volumes

To meet data persistency requirements, CCI allows EVS disks to be mounted to containers. By using EVS disks, you can mount the remote file directory of a storage system into a container so that data in the volume is permanently preserved. Even if the container is deleted, only the mounted volume is deleted. Data in the volume is still stored in the storage system.

EVS supports three specifications: common I/O, high I/O, and ultra-high I/O.

● Common I/O: The back-end storage is provided by SATA storage media. It is suitable for high-capacity application scenarios with low read/write rate requirements and less transaction processing, such as development, testing, and enterprise office applications.

● High I/O: The back-end storage is provided by SAS storage media. It is suitable for application scenarios with relatively high performance, high read/write rate requirements, and real-time data storage requirements, such as creating file systems and distributed file sharing.

● Ultra-high I/O: The back-end storage is provided by SSD storage media. It is suitable for application scenarios with high performance, high read/write rate requirements, and data-intensive workloads, such as NoSQL, relational databases, and data warehouses (such as Oracle RAC and SAP HANA).


Constraints

● The following EVS disks cannot be imported: disks not located in the current AZ, unavailable disks, system disks, CCE-associated disks, non-SCSI disks, non-shared disks, dedicated disks, frozen disks, and HANA server dedicated disks (high I/O performance optimization/ultra-high I/O latency optimization).

● An EVS volume can be used only as a new disk. The content in an EVS volume that has never been mounted to CCI is invisible to the container.

● If an imported EVS disk is deleted on the EVS console, CCI cannot detect the deletion. You are advised to delete an EVS disk only after confirming that it is not used by any workload.

● An EVS volume can be mounted to only one pod. Otherwise, data may be lost.

Adding EVS Disks

Step 1 Log in to the CCI console. In the navigation pane, choose Storage > EVS.

● If you have purchased EVS disks on the EVS console, go to Step 2.
● If you have not purchased any EVS volume, go to Step 3.

Step 2 Click Import. On the Import EVS Disk page, select one or more EVS disks that you want to import and click Import.

An EVS disk can be imported into only one namespace. If an EVS disk has been imported into a namespace, it is invisible in other namespaces and cannot be imported again. If you want to import an EVS disk that has been formatted with a file system (ext4), ensure that no partition has been created for the disk. Otherwise, data may be lost.

After the EVS disk is imported, you can see the corresponding volume.

Figure 6-2 Import result

Step 3 Click Buy EVS Volume. On the Buy EVS Volume page, set parameters, click Next, confirm the specifications, and click Submit.


● PVC Name: name of a PVC.
● Namespace: namespace to which the PVC belongs.
● Type: disk type, which can be common I/O, high I/O, or ultra-high I/O.
● Capacity: disk capacity, which ranges from 10 to 1,000 GB.
● AZ: availability zone to which the disk belongs.
● Encryption: KMS Encryption is deselected by default. If KMS Encryption is selected, set the following parameters:

Currently, the encryption function is unavailable in region CN East-Shanghai1.

– Agency Name: Agencies can be used to assign permissions to trusted accounts or cloud services for a specific period of time. If no agency is created, click Create Agency. The agency name EVSAccessKMS indicates that EVS is granted the permission to access KMS and can obtain KMS keys to encrypt and decrypt EVS disks.

– Key Name: name of the key used to encrypt the EVS disk. For details on how to create a key, see Creating a CMK.

– Key ID: generated by default.

----End

Using EVS Volumes

After selecting a container in Creating a Deployment, expand Advanced Settings > Storage, click the EVS Volumes tab, and click Add EVS Volume.

Figure 6-3 Configuring EVS volume parameters

EVS volumes can be mounted only to workloads that contain one container.

After a workload is created, you can view the relationship between the EVS disk and the workload by choosing Storage > EVS.

Figure 6-4 Managing EVS volumes


Creating EVS Volumes Using kubectl

For details, see Using Persistent Storage.
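A PVC created through kubectl follows the standard Kubernetes PersistentVolumeClaim format; the exact storage class and annotations that CCI expects for EVS are described in Using Persistent Storage. The sketch below only illustrates the general shape of such a claim. The namespace (my-namespace), PVC name, and storage class (evs-sample) are assumptions for illustration, not values taken from this guide.

# Minimal PVC sketch. Replace "evs-sample" with the storage class that
# Using Persistent Storage specifies for EVS in your region.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-example
  namespace: my-namespace          # hypothetical namespace
spec:
  storageClassName: evs-sample     # assumption; check Using Persistent Storage
  accessModes:
    - ReadWriteOnce                # an EVS volume can be mounted to only one pod
  resources:
    requests:
      storage: 10Gi                # EVS capacity ranges from 10 to 1,000 GB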

6.3 OBS Volumes

CCI allows you to mount OBS volumes into containers.

OBS is a cloud storage service that provides massive, secure, highly reliable, and low-cost data storage capabilities. For more information, see the Object Storage Service.

Constraints

● Currently, OBS volumes can be mounted only to job resources.

● Exercise caution when deleting an OBS bucket. After an OBS bucket is deleted, the CCI containers that use it become unavailable.

Importing OBS Buckets

CCI allows you to import existing OBS buckets. To ensure the reliability and stability of OBS volumes, configure keys before importing OBS buckets. For details, see Access Keys.

Step 1 Log in to the CCI console. In the navigation pane, choose Storage > OBS. On the page displayed on the right, select a namespace and click Upload Key.

Step 2 Select a local key file and click Confirm.

Add a CSV file whose size does not exceed 2 MB. If you do not have an access key locally, choose My Credentials > Access Keys and add and download an access key.

Step 3 On the OBS page, click Import.

Step 4 Select one or more OBS buckets that you want to import. Then, click Import.

If no OBS buckets are available, click create an OBS volume to create one. On the OBS console, choose Parallel File System. Click Create Parallel File System, set parameters, and click Create Now.

After the OBS bucket is created, go back to the Import OBS Bucket page on the CCI console. Then, select the created OBS bucket and click Import.

----End

Using OBS Volumes

After selecting a container in Creating a Job, expand Advanced Settings > Storage, click the OBS Volumes tab, and click Add OBS Volume.


Figure 6-5 Configuring OBS volume parameters

To use existing OBS volumes, import OBS buckets in advance. For details, see Importing OBS Buckets.

6.4 SFS Volumes

CCI allows you to create SFS volumes and mount them into containers. Currently, only SFS file systems of the Network File System (NFS) type are supported. SFS volumes are applicable to a wide range of scenarios, including media processing, content management, big data, and application analysis.

Constraints

● If an SFS file system is in use, do not modify the VPC configuration associated with the SFS file system. Otherwise, the containers in CCI cannot access the SFS file system.

● Exercise caution when deleting an SFS file system. After an SFS file system is deleted, the CCI containers that use it become unavailable.

Importing SFS File Systems

CCI allows you to import existing SFS file systems.

Step 1 Log in to the CCI console. In the navigation pane, choose Storage > SFS.

● If you have created SFS file systems on the SFS console, go to Step 2.
● If you have not created any SFS file system, go to Step 3.

Step 2 Click Import. On the Import SFS File System page, select one or more file systems that you want to import and click Import.

Step 3 Click Create SFS Volume, set parameters, and click Submit.


● PVC Name: name of a PVC.

● Namespace: namespace to which the PVC belongs.

● Type: type of the SFS file system. Currently, only NFS is supported.

● Total Capacity (GB): Select the required capacity. If auto capacity expansion is enabled for this volume, it can be automatically scaled without a capacity limit.

● Access Mode: mode used to access the SFS volume. Currently, only ReadWriteMany is supported, which means the SFS volume can be read by and written to multiple nodes.

● Encryption: KMS Encryption is deselected by default. If KMS Encryption is selected, set the following parameters:

Currently, the encryption function is unavailable in region CN East-Shanghai1.

– Agency Name: Agencies can be used to assign permissions to trusted accounts or cloud services for a specific period of time. If no agency is created, click Create Agency. The agency name EVSAccessKMS indicates that EVS is granted the permission to access KMS and can obtain KMS keys to encrypt and decrypt EVS disks.

– Key Name: name of the key used for encryption. For details on how to create a key, see Creating a CMK.

– Key ID: generated by default.

Step 4 Specify the mount option for the SFS volume to ensure real-time data access. If an SFS volume is mounted into more than one pod, there is a delay in pod metadata access due to local caching in pods.


You can set mount options for specific SFS volumes. Currently, only the noac mount option is supported. This option is used to disable local file and directory caching, allowing pods to access data from the SFS volume in real time.

● The mount option is valid only for SFS volumes created in the current namespace.
● Currently, the mount option configuration is unavailable in region CN East-Shanghai1.

Figure 6-6 Setting the mount option for an SFS volume

----End

Using SFS Volumes

After selecting a container image in 4.2 Deployment, Creating a Job, or Creating a Cron Job, expand Advanced Settings > Storage, click the SFS Volumes tab, and click Add SFS Volume.

Figure 6-7 Configuring SFS volume parameters

subPath is a sub-directory in the root path of the SFS file system. If such a sub-directory does not exist, it is automatically created in the SFS file system. subPath must be a relative path.

You can select automatically created or existing volumes. Before using existing volumes, ensure that the corresponding file systems have been imported. For details, see Importing SFS File Systems.


NOTICE

● Do not mount SFS volumes to a system directory, such as / or /var/run. Otherwise, the container becomes abnormal. You are advised to mount SFS volumes to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload.

● If SFS volumes need to be mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.

Creating SFS Volumes Using kubectl

For details, see Using Persistent Storage.
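The PVC definition that CCI expects for SFS is likewise covered in Using Persistent Storage. The sketch below only shows how a pod typically references an imported SFS PVC together with a subPath; the PVC name (pvc-sfs-example), mount path, and subPath value are hypothetical, not taken from this guide.

# Pod sketch mounting a PVC-backed SFS volume with a subPath.
# "pvc-sfs-example", "/tmp/sfs-data", and "app-data" are placeholder values.
apiVersion: v1
kind: Pod
metadata:
  name: sfs-demo
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: sfs-volume
          mountPath: /tmp/sfs-data   # avoid system directories such as / or /var/run
          subPath: app-data          # relative path; created automatically if missing
  volumes:
    - name: sfs-volume
      persistentVolumeClaim:
        claimName: pvc-sfs-example   # PVC created when the SFS file system was imported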

6.5 SFS Turbo Volumes

You can create SFS Turbo file systems and mount them to containers. SFS Turbo volumes are fast, on-demand, and scalable. They are suitable for DevOps, containerized microservices, and enterprise office applications.

Currently, SFS Turbo volumes are unavailable in region CN East-Shanghai1.

Importing SFS Turbo File Systems

CCI allows you to import existing SFS Turbo file systems.

Step 1 Log in to the CCI console. In the navigation pane, choose Storage > SFS Turbo. On the page displayed on the right, select a namespace and click Import.

Step 2 Select one or more SFS Turbo file systems that you want to import, and click Import.

If no SFS Turbo volumes are available, click create an SFS Turbo file system to create one.

After the SFS Turbo file system is created, go back to the Import SFS Turbo File System page on the CCI console. Then, select the created SFS Turbo file system and click Import.


Figure 6-8 Importing SFS Turbo file systems

Step 3 Specify the mount option for the SFS Turbo volume to ensure real-time data access. If an SFS Turbo volume is mounted into more than one pod, there is a delay in pod metadata access due to local caching in pods.

You can set mount options for specific SFS Turbo volumes. Currently, only the noac mount option is supported. This option is used to disable local file and directory caching, allowing pods to access data from the SFS Turbo volume in real time.

The mount option is valid only for SFS Turbo volumes created in the current namespace.

Figure 6-9 Setting the mount option for an SFS Turbo volume

----End

Using SFS Turbo Volumes

After selecting a container image in 4.2 Deployment or Creating a Job, expand Advanced Settings > Storage, click the SFS Turbo Volumes tab, and click Add SFS Turbo Volume.


Figure 6-10 Adding an SFS Turbo volume

● During the creation of an SFS Turbo file system, an independent VM will be created, which takes a long time. Therefore, you are advised to select existing SFS Turbo volumes.

● subPath is a sub-directory in the root path of the SFS Turbo file system. If such a sub-directory does not exist, it is automatically created in the SFS Turbo file system. subPath must be a relative path.

Unbinding SFS Turbo Volumes

If an imported SFS Turbo volume is no longer required, you can unbind it from the SFS Turbo file system. After being unbound, the SFS Turbo file system cannot be used for your workloads.

If an SFS Turbo volume has been mounted to a workload, it cannot be unbound from the SFS Turbo file system.

Step 1 Log in to the CCI console. In the navigation pane, choose Storage > SFS Turbo. In the file system list, click Unbind in the row where the target SFS Turbo volume resides.

Step 2 Read the message that is displayed and click Yes.

----End


7 Configuration Management

7.1 ConfigMaps

ConfigMaps are a type of resource used to store the configurations required by applications. After a ConfigMap is created, it can be used as a file in a containerized application.

Creating ConfigMaps

Step 1 Log in to the CCI console. In the navigation pane, choose Configuration Center > ConfigMaps. On the page displayed on the right, select a namespace and click Create ConfigMap.

Step 2 Select a creation mode. CCI allows you to create a ConfigMap by manually specifying parameters or uploading a file.

● Method 1: manually specifying parameters. Configure parameters based on the description in Table 7-1. Parameters marked with an asterisk (*) are mandatory.

Table 7-1 Parameter description

Basic information:

* Name: Name of a ConfigMap. Enter 1 to 253 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Do not enter two consecutive periods or a period adjacent to a hyphen.

Description: Description of the ConfigMap.

Data: Configuration data to be stored in the ConfigMap. Key indicates the file name and Value indicates the file content.
1. Click Add Data.
2. Enter the key and the value.

Label: Labels are attached to various objects (such as workloads and services) in the form of key-value pairs. Labels define the identifiable properties of these objects and are used to manage and select them.
1. Click Add Label.
2. Enter a key and a value.

● Method 2: uploading a file.

The file must be in JSON or YAML format, and the file size must be less than 1 MB. For details, see ConfigMap File Format.

Click Add File, select an existing ConfigMap resource file, and click Open.

Step 3 After the configuration is complete, click Create.

----End

Using ConfigMaps

After a ConfigMap is created, mount it to the specified directory of the container during workload creation. As shown in the following figure, mount ConfigMap cci-configmap01 to the /tmp/configmap1 directory.

Figure 7-1 Using a ConfigMap

After the workload is created, a ConfigMap file will be created under /tmp/configmap1. The key of the ConfigMap indicates the file name, and the value indicates the file content.
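If you define the workload in YAML instead of on the console, the same mounting is usually expressed with a standard Kubernetes configMap volume, as in the minimal sketch below. The pod name and image are placeholders for illustration; only the ConfigMap name (cci-configmap01) and mount path (/tmp/configmap1) come from the example above.

# Minimal pod sketch mounting ConfigMap cci-configmap01 at /tmp/configmap1.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo            # placeholder name
spec:
  containers:
    - name: app
      image: nginx:alpine         # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /tmp/configmap1   # each key becomes a file under this directory
  volumes:
    - name: config-volume
      configMap:
        name: cci-configmap01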

ConfigMap File Format

A ConfigMap resource file must be in JSON or YAML format, and the file size cannot exceed 1 MB.


● JSON format

An example of the configmap.json file is as follows:

{
    "kind": "ConfigMap",
    "apiVersion": "v1",
    "metadata": {
        "name": "test-configmap",
        "labels": {
            "label-01": "value-01",
            "label-02": "value-02"
        },
        "annotations": {
            "description": "a test configmap"
        },
        "enable": true
    },
    "data": {
        "key-01": "value-01",
        "key-02": "value-02"
    }
}

● YAML format

An example of the configmap.yaml file is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  labels:
    label-01: value-01
    label-02: value-02
  annotations:
    description: "a test configmap"
  enable: true
data:
  key-01: value-01
  key-02: value-02

Creating a ConfigMap Using kubectl

For details, see ConfigMap.

7.2 Secrets

Secrets are Kubernetes objects for storing sensitive data such as passwords, tokens, certificates, and private keys. A secret can be loaded to a container as an environment variable or a file when the container is started.

Secrets and SSL certificates share the same quota.

Creating Secrets

Step 1 Log in to the CCI console. In the navigation pane, choose Configuration Center > Secrets. On the page displayed on the right, select a namespace and click Create Secret.

Step 2 Select a creation mode. CCI allows you to create a secret by manually specifying parameters or uploading a file.


● Method 1: manually specifying parameters. Configure parameters based on the description in Table 7-2. Parameters marked with an asterisk (*) are mandatory.

Table 7-2 Parameter description

Basic information:

* Name: Name of a secret. Enter 1 to 253 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Do not enter two consecutive periods or a period adjacent to a hyphen.

Description: Description of the secret.

* Data: Secret data can be used in the container. Key indicates the file name and Value indicates the file content.
1. Click Add Data.
2. Enter a key and a value. If you select Auto transcoding, the value you entered will be automatically encoded using Base64.

Label: Labels are attached to various objects (such as applications, nodes, and services) in the form of key-value pairs. Labels define the identifiable properties of these objects and are used to manage and select them.
1. Click Add Label.
2. Enter a key and a value.

● Method 2: uploading a file.

The file must be in JSON or YAML format, and the file size must be less than 2 MB. For details, see Secret File Format.

Click Add File, select an existing secret resource file, and click Open.

Step 3 After the configuration is complete, click Create.

The newly created secret is displayed in the secret list.

----End

Using Secrets

After a secret is created, it can be referenced as an environment variable or mounted to a container path during workload creation.


Figure 7-2 Referencing a secret as an environment variable

Figure 7-3 Mounting a secret to a container path
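In YAML terms, the two usage modes shown in the figures map to the standard Kubernetes secretKeyRef environment variable and secret volume constructs, sketched below. The pod name, image, environment variable name, and mount path are placeholders; only the secret name (mysecret) and key (key1) come from the file-format example that follows.

# Minimal pod sketch referencing secret "mysecret" (see Secret File Format below).
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo                  # placeholder name
spec:
  containers:
    - name: app
      image: nginx:alpine            # placeholder image
      env:
        - name: EXAMPLE_VALUE        # injected as an environment variable
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: key1
      volumeMounts:
        - name: secret-volume
          mountPath: /tmp/secret     # each key becomes a file under this directory
  volumes:
    - name: secret-volume
      secret:
        secretName: mysecret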

Secret File Format

● secret.yaml resource description file

For example, if an application needs the following key-value pairs in encrypted form, you can use a secret:

key1: value1
key2: value2

The content in the secret file secret.yaml is as follows. Base64 encoding is required for the value. For details about the Base64 encoding method, see Base64 Encoding.

apiVersion: v1
kind: Secret
metadata:
  name: mysecret            #Secret name
  annotations:
    description: "test"
  labels:
    label-01: value-01
    label-02: value-02
data:
  key1: dmFsdWUx            #Base64 encoding required
  key2: dmFsdWUy            #Base64 encoding required
type: Opaque                #Must be Opaque

● secret.json resource description file

The content in the secret file secret.json is as follows:

{
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "annotations": {
            "description": "test"
        },
        "labels": {
            "label-01": "value-01",
            "label-02": "value-02"
        },
        "name": "mysecret"
    },
    "data": {
        "key1": "dmFsdWUx",
        "key2": "dmFsdWUy"
    },
    "type": "Opaque"
}

Base64 Encoding

To perform Base64 encoding on a character string, run the echo -n <content to be encoded> | base64 command. The following is an example:

root@ubuntu:~# echo -n "3306" | base64
MzMwNg==
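To double-check a value before putting it into a secret file, you can reverse the encoding with base64 -d. This is a generic Linux shell example, not a CCI-specific command:

root@ubuntu:~# echo -n "MzMwNg==" | base64 -d
3306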

Creating a Secret Using kubectl

For details, see Secret.

7.3 SSL Certificates

Secure Sockets Layer (SSL) is a security protocol designed to protect security and data integrity for Internet communications.

You can upload an SSL certificate to CCI. In HTTPS access, CCI will automatically install it on the layer-7 load balancer for data transmission encryption.

Secrets and SSL certificates share the same quota.

SSL Certificate Introduction

An SSL certificate indicates compliance with the SSL protocol. An SSL certificate is issued to a server by a trusted digital certificate authority (CA) after the CA has verified the identity of the server. SSL certificates have the functions of server authentication and data transmission encryption. By installing an SSL certificate, a server can encrypt the data transmitted between clients and the server, preventing information leak. In addition, the SSL certificate verifies whether the websites visited by the server are authentic and reliable.

SSL certificates are divided into authoritative certificates and self-signed certificates. Authoritative certificates are issued by CAs. You can purchase authoritative certificates from third-party certificate agents. A client trusts websites that use authoritative certificates by default. Self-signed certificates are issued by users themselves, usually using OpenSSL. Self-signed certificates are untrusted by the client by default. The browser will display an alarm message when you access a website that uses a self-signed certificate, but you can continue the access by ignoring the alarm.

Application Scenarios

By installing an SSL certificate, a server can encrypt the data transmitted between clients and the server, preventing information leak. To enable secure public network access for a web application in CCI, set the workload access mode to Internet access and the ELB protocol to HTTPS, and then select the certificate for Internet access during workload creation.

Adding a Certificate

Step 1 Log in to the CCI console. In the navigation pane, choose Configuration Center > SSL Certificates. On the page displayed on the right, select a namespace and click Add Certificate.

Step 2 Specify the name and description of the SSL certificate.

Certificate name: Enter 1 to 253 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Do not enter two consecutive periods or a period adjacent to a hyphen.

Step 3 Upload the certificate file and private key file.

● .crt and .cer certificate files are supported, and the file size cannot exceed 1 MB. The file content must comply with the corresponding CRT or CER protocol.

● .key and .pem private key files are supported, and the file size cannot exceed 1 MB. Private keys cannot be encrypted.

Figure 7-4 Uploading SSL certificate files

Step 4 Click Add.

----End

Using an SSL Certificate

When the service has public network access, you can use the SSL certificate and set the ELB protocol to the HTTPS protocol.

During workload creation, set the workload access mode to Internet access and the ELB protocol to HTTP/HTTPS, and select the SSL certificate. The SSL certificate will be automatically installed on the ELB to encrypt data for transmission.


Figure 7-5 Using an SSL certificate

After the workload is created, CCI will create a certificate for the load balancer and name the certificate after the workload. If a certificate with a name starting with beethoveen-cci-ingress is created on CCI, do not delete or update it. Otherwise, an access exception may occur.

Updating and Deleting an SSL Certificate

● A certificate can be updated before it expires, and the workload using the certificate will update it synchronously.

● Do not delete a certificate that is being used by a workload. Otherwise, the workload may be inaccessible.


8 Log Management

CCI supports the mounting of a log storage volume for log collection. To write logs to the log storage volume, you only need to add the log storage volume when you create a workload.

CCI is interconnected with Application Operations Management (AOM). AOM collects the .log files in the container log storage and dumps them to AOM to facilitate viewing and retrieval.

Adding a Log Storage Volume

You can add a log storage volume for a container when creating a workload.

● Log path in the container: path for mounting the log storage to the container. The log output path of the application must be the same as this path so that logs can be written to the log storage volume.

NOTICE

1. After the log storage volume is mounted, the existing content in the log path will be overwritten. Ensure that the log path is an independent path; otherwise, the previous content will be invisible.
2. AOM collects only .log, .trace, and .out files in the log path.
3. AOM can collect a maximum of 20 log files. Therefore, export your logs to a maximum of 20 files in the log path; otherwise, the logs cannot be dumped to AOM.
4. AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new .zip file is generated in the directory where the log file is located. AOM stores only the latest 20 .zip files. When the number of .zip files exceeds 20, earlier .zip files are deleted. After a log file is dumped, AOM clears the log file.

● Log storage space: size of the log storage volume.


NOTICE

AOM provides each account 500 MB of log storage space for free each month. Extra space is billed by AOM on a pay-per-use basis. For details, see Pricing Details.

Figure 8-1 Using the log storage volume

Viewing Logs

After the workload is created, you can view container logs.

Click the workload, and click View Logs in the same row as the container instance.

Figure 8-2 Viewing logs

You can view the logs of the corresponding container on the AOM console. For the log query method in AOM, see Viewing Log Files.


9 Add-on Management

Apart from the necessary support components, other Kubernetes components run as add-ons, such as Kubernetes DNS and Kubernetes Dashboard.

Add-ons are extensions of existing features. CCI provides the coredns add-on. You can directly install the add-on and conveniently use the functions it provides.

coredns

The coredns add-on provides the internal domain name resolution service for your other workloads. You are advised not to delete or upgrade this workload; otherwise, the internal domain name resolution service becomes unavailable.

Installing an Add-on

Step 1 Log in to the CCI console. In the navigation pane, choose Add-ons > Add-on Marketplace. Then, click the card of the add-on you want to install.

Figure 9-1 coredns add-on

Step 2 Select a version from the Add-on Version drop-down list, and click Submit.

When installing coredns v2.5.9 or later, you must also configure the following parameters:


● Stub Domain: A DNS server that resolves user-defined domain names. The stub domain contains the suffix of the DNS domain name followed by one or more DNS IP addresses. For example, acme.local -- 1.2.3.4,6.7.8.9 means that DNS requests with the .acme.local suffix are forwarded to a DNS listening at 1.2.3.4,6.7.8.9.

● Upstream DNS Server: A DNS server that resolves all domain names except intra-cluster service domain names and user-defined domain names. The value can be one or more DNS IP addresses, for example, 8.8.8.8,8.8.4.4.

After the installation is complete, you can see the installed add-on in Add-ons > Add-on Instances.

Figure 9-2 coredns installed

----End

Configuring Stub Domains for coredns

Cluster administrators can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works. They can configure stub domains for coredns using the proxy plug-in.

Assume that a cluster administrator has a Consul DNS server located at 10.150.0.1 and all Consul domain names have the suffix .consul.local. To configure this Consul DNS server in coredns, the cluster administrator needs to write the following information in the coredns ConfigMap:

consul.local:5353 {
    errors
    cache 30
    proxy . 10.150.0.1
}

ConfigMap after modification:

apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        cache 30
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream /etc/resolv.conf
            fallthrough in-addr.arpa ip6.arpa
        }
        loadbalance round_robin
        prometheus 0.0.0.0:9153
        proxy . /etc/resolv.conf
        reload
    }

    consul.local:5353 {
        errors
        cache 30
        proxy . 10.150.0.1
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system

How Does Domain Name Resolution Work in Kubernetes?

DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four types of DNS policies: Default, ClusterFirst, ClusterFirstWithHostNet, and None. For details, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/. These policies are specified in the dnsPolicy field in the pod specification.

● Default: Pods inherit the name resolution configuration from the node that runs the pods. The custom upstream DNS server and the stub domain cannot be used together with this policy.

● ClusterFirst: Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to the upstream name server inherited from the node. Cluster administrators may have extra stub domains and upstream DNS servers configured.

● ClusterFirstWithHostNet: For pods running with hostNetwork, set their DNS policy to ClusterFirstWithHostNet.

● None: It allows a pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the dnsConfig field in the pod specification.

● Clusters of Kubernetes v1.10 and later support Default, ClusterFirst, ClusterFirstWithHostNet, and None. Clusters earlier than Kubernetes v1.10 support only Default, ClusterFirst, and ClusterFirstWithHostNet.

● Default is not the default DNS policy. If dnsPolicy is not explicitly specified, ClusterFirst is used.
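For reference, the following minimal pod sketch shows how dnsPolicy and dnsConfig appear in a pod specification when the None policy is used. The pod name, image, name server, and search domain are illustrative values only, not CCI defaults.

# Pod sketch using dnsPolicy "None"; all DNS settings then come from dnsConfig.
apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-demo            # placeholder name
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.2.3.4                    # example name server
    searches:
      - acme.local                 # example search domain
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: nginx:alpine          # placeholder image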

Routing

● Without stub domain configurations: Any query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to the upstream DNS server inherited from the node.

● With stub domain configurations: If stub domains and upstream DNS servers are configured, DNS queries are routed according to the following flow:

a. The query is first sent to the DNS caching layer in coredns.


b. From the caching layer, the suffix of the request is examined and the request is then forwarded to the appropriate DNS server, based on the following cases:

▪ Names with the cluster suffix, for example, .cluster.local: The request is sent to coredns.

▪ Names with the stub domain suffix, for example, .acme.local: The request is sent to the configured custom DNS resolver, listening, for example, at 1.2.3.4.

▪ Names that do not match the suffix (for example, widget.com): The request is forwarded to the upstream DNS.

Figure 9-3 Routing

Follow-Up Operations

After the add-on is installed, you can perform the following operations on the add-on.

Table 9-1 Other operations

Upgrade: Click the upgrade icon. Select the target version, and click Next. Then, confirm the new configuration information, and click Submit.

Rollback: Click the rollback icon. Then, select the version to which the add-on is to be rolled back, and click Submit.

Deletion: Click the delete icon and then click Confirm.
NOTICE: A deleted add-on cannot be recovered. Exercise caution when performing this operation.


10 Auditing

10.1 CCI Operations Supported by CTS

Cloud Trace Service (CTS) records operations on cloud service resources, allowing you to query, audit, and backtrack the resource operation requests initiated from the CCI console or open APIs as well as responses to the requests.

Table 10-1 CCI operations that can be recorded by CTS

Operation: Trace Name

Creating a service: createService
Deleting a service: deleteService
Deleting all services under a specified namespace: deleteServicesByNamespace
Replacing a service: replaceService
Updating a service: updateService
Deleting an Endpoints: deleteEndpoint
Deleting all Endpoints under a specified namespace: deleteEndpointsByNamespace
Replacing an Endpoints under a specified namespace: replaceEndpoint
Updating an Endpoints under a specified namespace: updateEndpoint
Creating a Deployment: createDeployment
Deleting a Deployment: deleteDeployment
Deleting all Deployments under a specified namespace: deleteDeploymentsByNamespace
Replacing a Deployment under a specified namespace: replaceDeployment
Updating a Deployment under a specified namespace: updateDeployment
Creating a StatefulSet: createStatefulset
Deleting a StatefulSet: deleteStatefulset
Deleting all StatefulSets under a specified namespace: deleteStatefulsetsByNamespace
Replacing a StatefulSet under a specified namespace: replaceStatefulset
Updating a StatefulSet under a specified namespace: updateStatefulset
Creating a job: createJob
Deleting a job: deleteJob
Deleting all jobs under a specified namespace: deleteJobsByNamespace
Replacing the status of a job under a specified namespace: replaceJob
Updating the status of a job under a specified namespace: updateJob
Creating a cron job: createCronjob
Deleting a cron job: deleteCronjob
Deleting all cron jobs under a specified namespace: deleteCronjobsByNamespace
Replacing the status of a cron job under a specified namespace: replaceCronjob
Updating the status of a cron job under a specified namespace: updateCronjob
Creating an ingress: createIngress
Deleting an ingress: deleteIngress
Deleting all ingresses under a specified namespace: deleteIngressesByNamespace
Replacing an ingress under a specified namespace: replaceIngress
Updating the status of an ingress under a specified namespace: updateIngress
Creating a namespace: createNamespace
Deleting a namespace: deleteNamespace
Creating a pod: createPod
Updating a pod: updatePod
Replacing a pod: replacePod
Deleting a pod: deletePod
Deleting all pods under a specified namespace: deletePodsByNamespace
Deleting an event: deleteEvent
Creating a ConfigMap: createConfigmap
Updating a ConfigMap: updateConfigmap
Replacing a ConfigMap: replaceConfigmap
Deleting a ConfigMap: deleteConfigmap
Deleting all ConfigMaps under a specified namespace: deleteConfigmapsByNamespace
Creating a secret: createSecret
Updating a secret: updateSecret
Replacing a secret: replaceSecret
Deleting a secret: deleteSecret
Deleting all secrets under a specified namespace: deleteSecretsByNamespace
Deleting a network: deleteNetwork
Creating a network: createNetwork
Deleting all networks under a specified namespace: deleteNetworksByNamespace
Updating a network: updateNetwork
Replacing a network: replaceNetwork
Creating a network attachment definition: createNetworkAttachmentDefinition
Deleting all network attachment definitions under a specified namespace: deleteNetworkAttachmentDefinitionsByNamespace
Deleting a network attachment definition: deleteNetworkAttachmentDefinition
Creating a PV: createPersistentvolume
Deleting all PVs under a specified namespace: deletePersistentvolumesByNamespace
Replacing a PV: replacePersistentvolume
Updating a PV: updatePersistentvolume
Deleting a PV: deletePersistentvolume
Creating a PVC: createPersistentvolumeclaim
Importing an existing PVC: createPersistentvolumeclaimByStorageInfo
Deleting all PVCs under a specified namespace: deletePersistentvolumeclaimsByNamespace
Replacing a PVC: replacePersistentvolumeclaim
Updating a PVC: updatePersistentvolumeclaim
Deleting a PVC: deletePersistentvolumeclaim
Buying a package: createPackageproduct
Buying a promotion package: createActiveproduct
Creating a Kubeflow job: createKubeflowJob
Deleting all Kubeflow jobs under a specified namespace: deleteKubeflowJobsByNamespace
Replacing a Kubeflow job: replaceKubeflowJob
Updating a Kubeflow job: updateKubeflowJob
Deleting a Kubeflow job: deleteKubeflowJob
Creating a Volcano job: createVolcanoJob
Deleting all Volcano jobs under a specified namespace: deleteVolcanoJobsByNamespace
Replacing a Volcano job: replaceVolcanoJob
Updating a Volcano job: updateVolcanoJob
Deleting a Volcano job: deleteVolcanoJob
Creating an agency: createAgency
Updating a quota: modifyQuota
Creating an ImageCache: createImagecache
Deleting an ImageCache: deleteImagecache
Replacing an ImageCache: replaceImagecache
Updating an ImageCache: updateImagecache
Uploading a chart: createChart
Updating a chart: updateChart
Deleting a chart: deleteChart
Uploading an add-on: createAddon
Updating an add-on: updateAddon
Deleting an add-on: deleteAddon
Creating a release: createRelease
Updating a release: updateRelease
Deleting a release: deleteRelease
Creating an add-on instance: createAddonInstance
Updating an add-on instance: updateAddonInstance
Deleting an add-on instance: deleteAddonInstance
Creating an add-on readme: createAddonReadme
Deleting an add-on readme: deleteAddonReadme

10.2 Viewing Logs in CTS

After you enable Cloud Trace Service (CTS), CTS starts recording operations on CCI resources. CTS stores operation records of the last seven days.

Scenarios

After you enable CTS, CTS starts recording operations on CCI resources. You can view operation records of the last seven days on the CTS console.

Procedure

Step 1 Log in to the management console.

Step 2 Click in the upper left corner and select a region.

Step 3 Click Service List, and choose Management & Deployment > Cloud Trace Service.

Step 4 In the navigation pane, choose Trace List.

Step 5 Specify the filters used for querying traces. The following filters are available:


● Trace Type, Trace Source, Resource Type, and Search By: Select the desired filter criterion from the drop-down lists. Select CCI from the Trace Source drop-down list.
  If you select Trace name for Search By, you also need to select a trace name.
  If you select Resource ID for Search By, you also need to select or enter a resource ID.
  If you select Resource name for Search By, you also need to select or enter a resource name.

● Operator: Select a specific operator (at the user level rather than the account level).

● Trace Status: Select one of All trace statuses, Normal, Warning, and Incident.

● Start Date and End Date: You can specify a time period to query traces.

Step 6 Click on the left of a trace to expand its details, as shown in Figure 10-1.

Figure 10-1 Expanding trace details

Step 7 Click View Trace in the Operation column. In the dialog box shown in Figure 10-2, the trace structure details are displayed.


Figure 10-2 Viewing trace details

----End


11 Security Vulnerability Responses

11.1 Notice on Fixing Linux Kernel SACK Vulnerabilities

The HUAWEI CLOUD CCI team provided a solution for fixing the SACK vulnerabilities in the Linux kernel at 00:00 on July 11.

● Pods that are not associated with an ELB or EIP are not affected by these vulnerabilities because they are not exposed to the public network. Therefore, no action is required.

● Deployments that were created after 00:00 on July 11 are not affected by these vulnerabilities. However, you are advised to recreate pods in the Deployments that were created before 00:00 on July 11 during off-peak hours. For details, see Solution.

● After the current job or cron job completes, pods created by the next job or cron job will not be affected by these vulnerabilities. Therefore, no action is required.

● The coredns add-on is not affected by these vulnerabilities. Therefore, no action is required.

Vulnerability Details

On June 18, 2019, Red Hat released a security notice, stating that the TCP SACK module of the Linux kernel is exposed to three security vulnerabilities (CVE-2019-11477, CVE-2019-11478, and CVE-2019-11479). These vulnerabilities are related to the maximum segment size (MSS) and TCP Selective Acknowledgment (SACK) packets. Remote attackers can exploit these vulnerabilities to trigger a denial of service (DoS), resulting in server unavailability or breakdown.

Reference links:

https://www.suse.com/support/kb/doc/?id=7023928

https://access.redhat.com/security/vulnerabilities/tcpsack

https://www.debian.org/lts/security/2019/dla-1823

https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SACKPanic?


https://lists.centos.org/pipermail/centos-announce/2019-June/023332.html

https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-001.md

Table 11-1 Vulnerability information

Vulnerability Type           CVE-ID            Published     Fixed
Input validation flaw        CVE-2019-11477    2019-06-17    2019-07-11
Resource management flaw     CVE-2019-11478    2019-06-17    2019-07-11
Resource management flaw     CVE-2019-11479    2019-06-17    2019-07-11

Affected Products

Linux kernel version 2.6.29 and later

Solution

During off-peak hours, delete and recreate pods in the Deployments that were created before 00:00 on July 11.

Step 1 Log in to the CCI console. In the navigation pane, choose Workloads > Deployments. On the page displayed on the right, click a Deployment name.

Step 2 In the Pod List area on the Deployment details page, click Delete in the row where the pod resides. In the dialog box that is displayed, click Yes.

Figure 11-1 Deleting a pod

After the pod is deleted, the Deployment automatically creates new pods, as shown in Figure 11-2.


Figure 11-2 Automatically creating pods

NOTICE

If there are multiple pods in a Deployment, delete them one by one. In other words, delete the next pod only after the previous pod is successfully recreated, to avoid service interruption.

----End

Appendix: Introduction to TCP SACKs

TCP is a connection-oriented protocol. When two parties wish to communicate over a TCP connection, they establish a connection by exchanging certain information, such as a request to initiate (SYN) a connection, the initial sequence number, the acknowledgement number, the maximum segment size (MSS) to use over this connection, and permission to send and process Selective Acknowledgements (SACKs). This connection establishment process is known as the 3-way handshake.

TCP sends and receives user data in a unit called a segment. A TCP segment consists of the TCP header, options, and user data. Each TCP segment has a Sequence Number (SEQ) and an Acknowledgement Number (ACK).

These SEQ and ACK numbers are used to track which segments are successfully received by the receiver. The ACK number indicates the next segment expected by the receiver.

Example:

User A sends 1 kilobyte of data through 13 segments of 100 bytes each. There are 13 segments in total because each segment has a TCP header of 20 bytes. On the receiving end, user B receives segments 1, 2, 4, 6, and 8-13. Segments 3, 5, and 7 are lost and not received by user B.

By using ACK numbers, user B will indicate that it is expecting segment 3, which user A reads as none of the segments after 2 having been received by user B. Then user A will retransmit all the segments from 3 onwards, even though segments 4, 6, and 8-13 were successfully received by user B. User B has no way to indicate that to user A. This leads to inefficient usage of the network.
