SOSCON 2017 (October 2017)
Seungkyu Ahn

From Kubernetes to OpenStack
Index
§ Why Kubernetes?
§ Software stack
§ Kubespray
§ Kolla
§ Helm
§ OpenStack-Helm
Why Kubernetes?
§ Automatic bin-packing (container placement)
§ Horizontal scaling
§ Automated rollouts and rollbacks
§ Self-healing
§ Service discovery and load balancing
§ Secret and configuration management
Software stack
[Diagram: software stack, with layers labeled "Chart" and "Kubespray"]
Demo System
deploy node
k1-master01, k1-master02, k1-master03
k1-node01, k1-node02, k1-node03 (label: openstack-control-plane=enabled, openvswitch=enabled)
k1-node04 (label: openstack-compute-node=enabled, openvswitch=enabled)
Kubespray
• Kubernetes incubator project
• Ansible-based
• Latest version support
  - Kubernetes: v1.8.0
  - Calico: v2.5.0, Flannel: v0.8.0, or Weave: v2.0.1
  - Helm: v2.6.1
  - EFK: Elasticsearch v5.4.0, Fluentd 1.22, Kibana v5.4.0
• Added features in TACO (SKT All Container OpenStack)
  - CI/CD
  - Prometheus for monitoring
Kubespray
• Files to change
  - inventory/inventory.example
  - inventory/group_vars/k8s-cluster.yml
• Install Kubernetes
  - ansible-playbook -u taco -b -i inventory/inventory.example cluster.yml
Inventory example
k1-master01 ansible_port=22 ansible_host=k1-master01 ip=192.168.30.13
k1-master02 ansible_port=22 ansible_host=k1-master02 ip=192.168.30.14
k1-master03 ansible_port=22 ansible_host=k1-master03 ip=192.168.30.15
k1-node01 ansible_port=22 ansible_host=k1-node01 ip=192.168.30.12
k1-node02 ansible_port=22 ansible_host=k1-node02 ip=192.168.30.17
k1-node03 ansible_port=22 ansible_host=k1-node03 ip=192.168.30.18
k1-node04 ansible_port=22 ansible_host=k1-node04 ip=192.168.30.21

[etcd]
k1-master01
k1-master02
k1-master03

[kube-master]
k1-master01
k1-master02
k1-master03

[kube-node]
k1-node01
k1-node02
k1-node03
k1-node04

[k8s-cluster:children]
kube-master
kube-node
k8s-cluster.yml example
kube_version: v1.8.0
kube_network_plugin: calico
kube_service_addresses: 10.96.0.0/16
kube_pods_subnet: 172.16.0.0/16
etcd_deployment_type: docker
kubelet_deployment_type: host
etcd_memory_limit: 8192M
dashboard_enabled: true
efk_enabled: true
helm_enabled: true
Kubernetes terms
[ Pod ]
• A vessel holding containers (may contain multiple containers)
• Containers in the same Pod share the same network namespace and IP
  (Apache -> (localhost, port) -> Tomcat)
• Containers in the same Pod see the same volumes
[ Replica Set ]
• Manages the number of Pods
[ Deployment ]
• A deployable unit combining a Pod and a Replica Set
• Manages deployment history by version
Kubernetes terms
[ ConfigMap and Secret ]
• ConfigMap: application configuration, or shell scripts
• Secret: sensitive values
[ Service ]
• Routes to Pods (using labels); load-balances across Pods on an internal IP (default behavior)
• External access is possible through the two types below
• Types: LoadBalancer (GCE, AWS, OpenStack), NodePort (iptables)
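As a sketch, a NodePort Service selecting Pods labeled `app: nginx` (the label used by the nginx Deployment on the following slide); the port numbers are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort          # reachable on every node's IP at nodePort
  selector:
    app: nginx            # routes to Pods carrying this label
  ports:
  - port: 80              # Service port inside the cluster
    targetPort: 80        # container port on the selected Pods
    nodePort: 30080       # illustrative; must fall in the NodePort range
```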
Kubernetes manifest example – nginx deployment

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
  - ceph-secret-user.yml
• Storage class
  - ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
  - ceph-secret-admin.yml
  - ceph-secret-user.yml
Secret file - ceph-secret-admin.yml

apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-admin"
  namespace: "kube-system"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxxx=="

grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
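The pipeline above can be tried end-to-end against a throwaway keyring; the key below is a made-up placeholder, not a real credential:

```shell
# Create a throwaway keyring with a hypothetical key, for illustration only.
cat > /tmp/ceph.client.admin.keyring <<'EOF'
[client.admin]
        key = AQBDemoKeyForIllustrationOnly==
EOF

# Same pipeline as above: take the last field of the "key = ..." line
# and base64-encode it for the Secret's data.key field.
grep key /tmp/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
```

The printed value is what goes into `data.key` in ceph-secret-admin.yml.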
Secret file - ceph-secret-user.yml

apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-user"
  namespace: "kube-system"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxx=="

grep key /etc/ceph/ceph.client.kube.keyring | awk '{printf "%s", $NF}' | base64
Storage class file - ceph-storageclass.yml

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: "ceph"
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: "192.168.30.23:6789,192.168.30.24:6789,192.168.30.25:6789"
  adminId: "admin"
  adminSecretName: "ceph-secret-admin"
  adminSecretNamespace: "kube-system"
  pool: "kube"
  userId: "kube"
  userSecretName: "ceph-secret-user"
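A hypothetical PersistentVolumeClaim that exercises this class (name, namespace, and size are illustrative assumptions); since the class is annotated as the default, `storageClassName` could even be omitted:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim          # illustrative name
  namespace: openstack
spec:
  storageClassName: ceph    # the StorageClass defined above
  accessModes:
  - ReadWriteOnce           # an RBD volume attaches to one node at a time
  resources:
    requests:
      storage: 1Gi          # illustrative size
```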
Secret file - ceph-secret-user.yml

apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-user"
  namespace: "openstack"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxx=="

grep key /etc/ceph/ceph.client.kube.keyring | awk '{printf "%s", $NF}' | base64
Label
kubectl label node k1-node01 openstack-control-plane=enabled
kubectl label node k1-node01 openvswitch=enabled
kubectl label node k1-node02 openstack-control-plane=enabled
kubectl label node k1-node02 openvswitch=enabled
kubectl label node k1-node03 openstack-control-plane=enabled
kubectl label node k1-node03 openvswitch=enabled
kubectl label node k1-node04 openstack-compute-node=enabled
kubectl label node k1-node04 openvswitch=enabled
Kolla
• An OpenStack project: a tool for building and managing Docker images of the OpenStack services
• Provides Docker images not only for OpenStack services but also for various related applications
Kolla Dockerfile.j2
[Image of a Kolla Dockerfile.j2 template omitted]
Kolla build
• Custom content to be added to each stock Kolla Dockerfile is supplied through a wrapper (the template override)
• The two are merged, and the final Docker image is built from the result
• kolla-build -b ubuntu -t source --template-override template-overrides.j2 keystone
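A minimal sketch of what such a template-overrides.j2 wrapper can look like; the block name `keystone_footer` and the extra pip package are illustrative assumptions (the available block names depend on each image's Dockerfile.j2):

```jinja
{% extends parent_template %}

{# Appended where the generated keystone Dockerfile defines its footer block #}
{% block keystone_footer %}
RUN pip --no-cache-dir install python-openstackclient
{% endblock %}
```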
Helm

Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
• Helm has two parts: a client (helm) and a server (tiller)
• Charts are Helm packages that contain at least two things:
  - A description of the package (Chart.yaml)
  - One or more templates, which contain Kubernetes manifest files
• Charts can be stored on disk, or fetched from remote chart repositories (like Debian or RedHat packages)
(Diagram: the client reaches tiller over gRPC through Kubernetes port-forwarding)
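As a sketch of the first item, a minimal Chart.yaml for the minio chart used in the following slides (the field values are illustrative assumptions):

```yaml
apiVersion: v1        # Chart.yaml schema version used by Helm 2
name: minio           # chart name, referenced by templates
version: 0.1.0        # chart version (SemVer)
description: Illustrative chart packaging the minio object store
```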
Helm chart structure

Kubernetes manifest

kind: Deployment
metadata:
  name: minio
  labels:
    app: minio
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cicd-services
                operator: In
                values:
                - enabled
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: Always
        args:
        - server
        - /storage
Helm chart template

kind: Deployment
metadata:
  name: {{ template "minio.fullname" . }}
  labels:
    app: {{ template "minio.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
{{- if eq .Values.mode "shared" }}
  replicas: {{ .Values.replicas }}
{{- end }}
  …
      containers:
      - name: minio
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        imagePullPolicy: {{ .Values.imagePullPolicy }}
        args:
        - server
        - /storage
Helm chart values

replicas: 1
image: "minio/minio"
imageTag: "latest"
imagePullPolicy: "Always"
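These defaults can be overridden per release with a user-supplied values file; a sketch (the file name and values are illustrative assumptions):

```yaml
# my-values.yaml - hypothetical override file
replicas: 2                      # scale the minio Deployment out
imagePullPolicy: "IfNotPresent"  # reuse a locally cached image
```

It would be applied with something like `helm install -f my-values.yaml ./minio`; individual values can also be set on the command line with `--set`, which takes precedence over files passed via `-f`.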
OpenStack-Helm
• A project started by AT&T in November 2016; SKT participates in its development.
  (https://github.com/openstack/openstack-helm)
• A collection of charts for deploying, managing, and upgrading the OpenStack services.
Keystone chart structure
[Diagram: chart layout, annotated with the main container-deployment settings, tests, and parameters]
Keystone chart template

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: keystone-api
spec:
  replicas: {{ .Values.pod.replicas.api }}
  template:
    metadata:
      labels:
{{ tuple $envAll "keystone" "api" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
    spec:
      affinity:
{{ tuple $envAll "keystone" "api" | include "helm-toolkit.snippets.kubernetes_pod_anti_affinity" | indent 8 }}
      nodeSelector:
        {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
      terminationGracePeriodSeconds: {{ .Values.pod.lifecycle.termination_grace_period.api.timeout | default "30" }}
      initContainers:
{{ tuple $envAll $dependencies $mounts_keystone_api_init | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
      containers:
      - name: keystone-api
        image: {{ .Values.images.api }}
        imagePullPolicy: {{ .Values.images.pull_policy }}
Keystone values.yaml
[Image of the chart's values.yaml omitted]
Rally test
• Scenario tests are run against the services deployed by each chart, using the Rally test framework
Tempest test
• The full set of OpenStack services is deployed using the built Docker images and Helm charts
• A separate tempest container is launched to run the API tests
Q&A