Artem Zhurbila - Docker Clusters (solit 2015)
TRANSCRIPT
Agenda
1. Base concepts of cluster management and Docker
2. Docker Swarm
3. Amazon EC2 Container Service
4. Kubernetes
5. Mesosphere
Docker is awesome on a single host, but...
● Single point of failure
  ■ we need high availability
● Limited resources (CPU, RAM)
  ■ we need scalability
Docker cluster components
● Resource and node container manager
● Scheduler
● Service discovery (consul, etcd, zookeeper, DNS + SRV)
● Overlay network (flannel, weave, socketplane)
Docker cluster management tools
1. Docker Swarm
2. Amazon EC2 Container Service (ECS)
3. Kubernetes (k8s)
4. Mesosphere
Swarm / scheduling strategies
1. BinPacking - considers available CPU and RAM and returns the node that is already the most packed
2. Random
Swarm / scheduling filters
1. Constraint
   a. key/value - supports glob and regexp
   b. docker info
2. Affinity
   a. containers
   b. images
3. Dependency
   a. Shared volumes (--volumes-from)
   b. Links (--link)
   c. Shared network stack (--net)
4. Port
5. Health
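As a sketch of how constraint and affinity filters are expressed: they are passed to the scheduler as environment variables on `docker run`. The `storage` label and the `my-frontend` image name below are illustrative, and the commands are composed as strings rather than executed, since they need a live Swarm endpoint:

```shell
# Illustrative only: compose the commands as strings; running them
# requires a Swarm manager behind DOCKER_HOST.
constraint_cmd='docker run -d -e constraint:storage==ssd nginx'
affinity_cmd='docker run -d -e affinity:image==nginx my-frontend'
# constraint: only nodes labeled storage=ssd are eligible
# affinity:   only nodes that already pulled the nginx image are eligible
printf '%s\n%s\n' "$constraint_cmd" "$affinity_cmd"
```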
Swarm / service discovery
Providers:
1. token (Docker Hub service)
2. file
3. etcd
4. consul
5. zookeeper
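The `file` provider is the simplest of these: a static text file with one node address per line. A minimal sketch (the IP addresses and file path are placeholders):

```shell
# Sketch of file-based Swarm discovery: one <ip>:<port> per line.
# The addresses below are placeholders.
cat > /tmp/swarm_cluster_nodes <<'EOF'
192.168.0.10:2375
192.168.0.11:2375
EOF
# The manager would then be started against this file, e.g.:
#   docker run -d -v /tmp/swarm_cluster_nodes:/cluster swarm manage file:///cluster
node_count=$(wc -l < /tmp/swarm_cluster_nodes)
echo "$node_count nodes registered"
```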
Setup Swarm cluster manually
Step 1: install Docker >= 1.4.0
Step 2: change the /etc/default/docker file to listen on TCP
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
Step 3: create certificates and configure TLS (optional)
Step 4: docker pull swarm
Step 5: docker run --rm swarm create
        (generates a unique cluster_id for the Docker Hub discovery service)
Step 6: docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>
        (run this command on all hosts)
Step 7: docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>
        (starts the Swarm master)
Step 8: export DOCKER_HOST=tcp://<swarm_ip>:<swarm_port>
Step 9: use your usual docker commands :-)
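Once DOCKER_HOST points at the Swarm master, the usual client commands are routed through the cluster. A sketch of what step 9 looks like (the address is a placeholder, and the commands are composed as strings rather than executed, since a live master is needed):

```shell
# Illustrative only: a running Swarm master is required to execute these.
export DOCKER_HOST=tcp://192.168.0.5:3375   # placeholder <swarm_ip>:<swarm_port>
check_cmd='docker info'         # lists every joined node and its resources
run_cmd='docker run -d nginx'   # the Swarm scheduler picks the node
printf '%s\n%s\n' "$check_cmd" "$run_cmd"
```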
#1 Setup cluster on AWS by Docker Machine
Step 1: download Docker Machine and add it to PATH
https://docs.docker.com/machine/#installation
Step 2: run command to create the Swarm master
docker-machine create -d amazonec2 --swarm --swarm-master \
  --swarm-discovery=token://<generated_cluster_id> \
  --amazonec2-access-key=***** \
  --amazonec2-ami=ami-823686f5 \
  --amazonec2-instance-type=t2.micro \
  --amazonec2-region=eu-west-1 \
  --amazonec2-root-size=10 \
  --amazonec2-secret-key=***** \
  --amazonec2-security-group=my \
  --amazonec2-vpc-id=default \
  swarm-master
#2 Setup cluster on AWS by Docker Machine
Step 3: run the same command as in step 2, but without the --swarm-master key, to create a Swarm slave
docker-machine create -d amazonec2 --swarm \
  --swarm-discovery=token://<generated_cluster_id> \
  ….
  swarm-slave-01
Step 4: export DOCKER_HOST=tcp://<swarm_ip>:<swarm_port>
Step 5: use your usual docker commands or Docker Compose :-)
Swarm / conclusion
+ standard Docker API
+ extremely easy to get started
- many features are not implemented "yet" (multi-master, multi-host networking, failover)
DOCKER MACHINE + SWARM + COMPOSE=
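A hedged sketch of how the three fit together: a docker-compose.yml written for a single host runs unchanged against a Swarm endpoint. The service and image names below are illustrative (compose v1 syntax of that era):

```yaml
# Illustrative compose file - the same file works on one host or a Swarm cluster.
web:
  image: nginx
  ports:
    - "80:80"
db:
  image: redis
```

With DOCKER_HOST pointing at the Swarm master, `docker-compose up -d` submits both containers to the cluster and the Swarm scheduler places them on nodes.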
Amazon EC2 Container Service (preview)
ECS is available in the US East (N. Virginia) and US West (Oregon) regions during the preview.
ECS key concepts
Cluster - a logical grouping of container instances
Container Instance - an EC2 instance that is running the ECS agent and has been registered into a cluster
Task Definition - a description of an application (JSON) - a list of containers grouped together
Task - a task definition that is running on a container instance
#1 Setup ECS cluster
Step 1: create an IAM role that allows EC2 to use the ECS service (policy below)
Step 2: install awscli > 1.7
Step 3: set the region in ~/.aws/config
[default]
output = json
region = us-east-1
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:CreateCluster",
        "ecs:RegisterContainerInstance",
        "ecs:DeregisterContainerInstance",
        "ecs:DiscoverPollEndpoint",
        "ecs:Submit*",
        "ecs:Poll"
      ],
      "Resource": ["*"]
    }
  ]
}
#2 Setup ECS cluster
Step 4: run command to create the cluster (your account is limited to 2 clusters):
aws ecs create-cluster --cluster-name MyCluster
Step 5: create a user data script init_script.sh
#!/bin/bash
echo ECS_CLUSTER=MyCluster >> /etc/ecs/ecs.config
Step 6: create 3 EC2 instances in the cluster
aws ec2 run-instances --image-id ami-801544e8 --count 3 --instance-type t2.micro --key-name <public_key> --security-groups <sec_group> --user-data file://init_script.sh --iam-instance-profile Name=<IAM role name>

#3 Setup ECS cluster
Step 7: create a task definition - nginx_def.json
Step 8: register the definition:
aws ecs register-task-definition --cli-input-json file://nginx_def.json
Step 9: run a task:
aws ecs run-task --cluster MyCluster --task-definition test_nginx
{
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx",
      "cpu": 200,
      "memory": 100,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "essential": true
    }
  ],
  "family": "test_nginx"
}
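Since a task definition is a list of containers grouped together, several linked containers can live in one definition and are placed on the same container instance. A hypothetical sketch (the `app` container, its image name and the link are illustrative, not from the talk):

```json
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "cpu": 200,
      "memory": 100,
      "links": ["app"],
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "essential": true
    },
    {
      "name": "app",
      "image": "my-backend",
      "cpu": 300,
      "memory": 200,
      "essential": true
    }
  ],
  "family": "test_web_app"
}
```

The "links" entry works like --link on a single Docker host, so "web" can reach "app" by name.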
If we stop this EC2 instance, the task with the nginx container will be rescheduled (failover) to another host in the cluster!
EC2 Container Service / conclusion
+ with ECS we don't need to administer master nodes; high availability of ECS is the responsibility of AWS engineers
- I have not found how to integrate with ELB, Auto Scaling and other Amazon services (maybe it's under development now)
Kubernetes (k8s) key concepts
Node - a worker machine in Kubernetes (previously known as Minion)
Pod - the smallest unit - a colocated group of Docker containers
Label - a key-value tag
Replication controller - ensures that a specified number of pod "replicas" are running at any one time
Service - provides a single, stable name and address for a set of pods; acts as a basic load balancer
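To make the replication controller, label and pod concepts concrete, a minimal sketch of a controller template (the nginx image and the `app: nginx` label are illustrative, and the exact apiVersion depends on the Kubernetes release in use):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3          # the controller keeps exactly 3 pod replicas running
  selector:
    app: nginx         # pods are claimed by matching this label
  template:            # pod template used to create replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

A Service with the same `app: nginx` selector would then give these pods one stable address.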
#1 Kubernetes / Setup on AWS
Step 1: install aws cli and k8s
Step 2: check your aws creds in ~/.aws/credentials
Step 3: add env vars:
export PATH=$PATH:<path_to_untar_k8s_directory>/platforms/<os>/<platform>
export PATH=$PATH:<path_to_untar_k8s_directory>/cluster
export KUBERNETES_PROVIDER=aws
Step 4: create a 'kubernetes' IAM role with EC2FullAccess
#2 Kubernetes / Setup on AWS
Step 5: bring up the cluster (it takes about 5 minutes)
kube-up.sh
The script will:
- provision a new VPC, 1 master and 4 nodes (minions) in us-west-2 (Oregon)
- create a keypair called "kubernetes" and reuse an IAM role also called "kubernetes"
- create an S3 bucket 'kubernetes-staging-***' and upload the Salt provisioning scripts
- create CAFile, CertFile, KeyFile on your local computer
At the end of the script execution you will see the URL of the k8s master.
#3 Kubernetes / Setup on AWS
Step 6: export KUBERNETES_MASTER=https://<generated_url_from_step_5>
Now the cluster is ready and we can manipulate it with kubectl.
Examples of replication controllers and services can be found in the Kubernetes git repo:
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples
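Typical kubectl manipulation then looks like this; the file name is illustrative, and the commands are composed as strings rather than executed, since they need a provisioned cluster reachable via KUBERNETES_MASTER:

```shell
# Illustrative only: a live k8s master is required to execute these.
create_cmd='kubectl create -f nginx-rc.yaml'   # submit a replication controller
list_cmd='kubectl get pods'                    # watch the replicas come up
printf '%s\n%s\n' "$create_cmd" "$list_cmd"
```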
Kubernetes / conclusion
In my opinion Kubernetes is the most progressive and feature-rich cluster management tool nowadays.
+ pluggable architecture (in the future you can easily replace Docker with another container engine)
+ self-healing (auto-restart, auto-replication)
+ Google Container Engine (Alpha) powered by Kubernetes
+ supports integration with a lot of cloud providers
+ declarative templates of all resources (JSON or YAML)
Mesosphere layers
3. Your Apps
2. Datacenter Services: YARN / Kubernetes / Marathon / Chronos / Aurora / Spark / Kafka
1. Mesosphere DCOS: Mesos as the OS kernel
digitalocean.mesosphere.com
1: Download the VPN configuration file
2: Create a secure tunnel
sudo openvpn <path_to_downloaded_conf_file>
3: Now you can communicate with cluster services
Docker app json example
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "libmesos/ubuntu"
    }
  },
  "id": "ubuntu",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512,
  "cmd": "while sleep 10; do date -u +%T; done"
}
curl -X POST -H "Content-Type: application/json" http://<mesos_internal_master_ip>:8080/v2/apps -d@<path_to_json_file>
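Scaling the app uses the same REST API: a PUT on the app id with a new instance count. A sketch (the master IP is a placeholder for <mesos_internal_master_ip>, and the command is composed as a string rather than executed, since it needs a reachable Marathon master):

```shell
# Illustrative only: a reachable Marathon master is required to execute this.
master='http://10.0.0.1:8080'   # placeholder for <mesos_internal_master_ip>:8080
scale_cmd="curl -X PUT -H 'Content-Type: application/json' $master/v2/apps/ubuntu -d '{\"instances\": 3}'"
echo "$scale_cmd"
```

Marathon then starts (or stops) tasks until the running count matches the requested instances.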
Mesosphere / conclusion
Mesosphere DCOS is the future of the data center!
Already now it is able to bring the whole zoo of technologies together under one roof.