Running Production-Grade Kubernetes on AWS

Post on 07-Jan-2017


1

Running Production-Grade Kubernetes on AWS

2

<vadim@doit-intl.com>

3

Let’s Play

Join at kahoot.it with Game PIN: 728274

4

Agenda

● What’s new in Kubernetes v1.3
● Bootstrapping K8s cluster on AWS
● Watchouts & Limitations!

Copyright 2015 Google Inc

Kubernetes 101

Pods

Pods are ephemeral units that are used to manage one or more tightly coupled containers. They enable data sharing and communication among their constituent components.

Replication Controllers

Replication controllers create new pod "replicas" from a template and ensure that a configurable number of those pods are running.

Services

Services provide a stable IP and port pair through which client applications can access backends without writing Kubernetes-specific code.

Labels

Labels are metadata attached to objects, such as pods. They enable organization and selection of subsets of objects within a cluster.

6

What's new in Kubernetes 1.3

7

Release Highlights

● Init Containers (alpha)
● Fixed PDs
● Cluster Federation (alpha)
● Optional HTTP2
● Pod Level QoS Policy
● TLS Secrets
● kubectl set command
● UI
● Jobs
● RBAC (alpha, experimental)
● Garbage Collector (alpha)
● Pet Sets
● rkt runtime
● Network Policies
● kubectl auto-complete

8

Init Containers

9

Init Container: register pod to external service

10

Init Container: clone a git repo into a volume
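The original slides showed these examples as screenshots. As a rough sketch of the second one, cloning a git repo into a volume with the annotation-based alpha syntax used in v1.3 — the image, repo URL, and paths are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-clone
  annotations:
    # v1.3 alpha syntax: init containers are declared as a JSON
    # list inside an annotation, not as a first-class spec field
    pod.alpha.kubernetes.io/init-containers: '[
      {
        "name": "clone",
        "image": "alpine/git",
        "command": ["git", "clone", "https://github.com/example/repo.git", "/work"],
        "volumeMounts": [{"name": "workdir", "mountPath": "/work"}]
      }
    ]'
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  volumes:
  - name: workdir
    emptyDir: {}
```

The init container runs to completion before the main container starts, so nginx only comes up once the repo is in place.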

11

Jobs (pods are *expected* to terminate)

Creates 1...n pods and ensures that a certain number of them run to completion.

3 job types:

● Non-Parallel (normally only one pod is started, unless the pod fails)

● Parallel with fixed count (complete when there is one successful pod for each value in range 1 to .spec.completions)

● Parallel with a work queue
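As a minimal sketch of the second type, a parallel Job with a fixed completion count (image and counts are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items
spec:
  completions: 5   # the Job is done after 5 successful pods
  parallelism: 2   # at most 2 pods run at a time
  template:
    metadata:
      name: process-items
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing item && sleep 5"]
      restartPolicy: Never
```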

12

Job: Work Queue with Pod Per Work Item

13

Increased Scale

● Up to 2,000 nodes per cluster
● Up to 60,000 pods per cluster

Under the bonnet, the biggest change behind the scalability improvements is the switch from JSON to Protocol Buffer-based serialization in the API.

14

Multi-Zone Clusters

Deploy clusters to multiple availability zones to increase availability:

● Multiple zones can be configured at cluster creation or can be added to a cluster after the fact.

15

Heterogeneous Clusters

Customers can now add different types of nodes to the same cluster.

● NodePools allow for different types of nodes to be joined to a single master, minimizing administrative overhead

● Built-in scheduler changes to allow scheduling to node types with only a configuration change

16

Cluster Federation

Deploy a service to multiple clusters simultaneously (including external load balancer configuration) via a single Federated API.

● Federated Services span multiple clusters (possibly running on different cloud providers, or on premise), and are created with a single API call.

● The federation service automatically:
○ deploys the service across multiple clusters in the federation
○ monitors the health of these services
○ manages DNS records to ensure that clients are always directed to the closest healthy instance of the federated service

More info:
● Sneak peek video

17

New kubectl commands

A new command kubectl set now allows the container image to be set in a single one-line command.

$ kubectl set image deployment/web nginx=nginx:1.9.1

To watch the rollout and verify that it succeeds, there is now a convenient new command: rollout status. For example, to watch the deployment roll from nginx:1.7.9 to nginx:1.9.1:

$ kubectl rollout status deployment/web

Waiting for rollout to finish: 2 out of 4 new replicas has been updated...
Waiting for rollout to finish: 2 out of 4 new replicas has been updated...
Waiting for rollout to finish: 2 out of 4 new replicas has been updated...
Waiting for rollout to finish: 3 out of 4 new replicas has been updated...
Waiting for rollout to finish: 3 out of 4 new replicas has been updated...
Waiting for rollout to finish: 3 out of 4 new replicas has been updated...
deployment nginx successfully rolled out

18

Cluster Autoscaling (alpha)

Clusters can now automatically request more compute when they have scheduled more work than there is CPU or memory available.

● If there are no resources in the cluster to schedule a recently created pod, a new node is added.

● If a node is underutilized and all pods running on it can easily be moved elsewhere, the node can be drained and deleted.

● Pay only for resources that are actually needed, and get new resources when demand increases.

19

Improved dashboard

Manage Kubernetes almost entirely through a web browser.

● All workload types are now supported, including DaemonSets, Deployments and Rolling updates

20

Minikube

Minikube is a new local development platform for Kubernetes, so customers can begin developing on their desktop or laptop.

● Packages and configures a Linux VM, Docker and all Kubernetes components, optimized for local development

● Can be installed with a single command
● Alongside the regular pods, services and controllers, supports advanced Kubernetes features:
○ DNS
○ NodePorts
○ ConfigMaps and Secrets
○ Dashboards

21

The new "PetSet" object provides a raft of features for supporting containers that run stateful workloads (such as databases or key value stores), including:

● Permanent hostnames, that persist across restarts

● Automatically provisioned Persistent Disks per-container, that live beyond the life of a container

● Unique identities in a group, to allow for clustering and leader election

● Initialization containers, which are critical for starting up clustered applications

Stateful workload support (Pet Sets)In Alpha in Kubernetes 1.3
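A rough sketch of what a PetSet manifest looked like in the v1.3 alpha API (apps/v1alpha1) — names, image, and sizes are placeholders, and the exact fields may differ in your version:

```yaml
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: db
spec:
  serviceName: db        # headless service providing the stable hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:  # a persistent disk is provisioned per pet
  - metadata:
      name: data
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Each replica gets a stable identity (db-0, db-1, db-2) and its own claim, which survives pod restarts.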

22

What's coming next

23

New features for Kubernetes in 1.4

● Full cross-cluster federation, including:

○ Single universal API

○ Global load balancer

○ Replica sets that span multiple clusters

● Granular permissions for clusters

● Simplified installation for common applications: one-line install for simple applications in fully tested configurations

● Universal setup: greatly simplified on-prem and complex cloud deployments

● Integrated external DNS (including Route53): simplified integration with external DNS providers

Expected release date for 1.4 is 16 September

24

Deploying K8s to Amazon AWS

25

What we wanted to achieve...

26

4.5 Step Deployment into existing VPC

Based on CoreOS K8s project:

$ kube-aws init (then adjust your cluster.yaml)

$ kube-aws render (generates CF stack)

$ kube-aws validate

$ kube-aws up (deploys the CF stack)
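As a hedged illustration of the cluster.yaml adjusted after kube-aws init, a minimal file might contain entries like the following — key names vary across kube-aws versions and all values here are placeholders, so check the commented template that init generates:

```yaml
clusterName: prod-k8s
externalDNSName: k8s.example.com   # Route53 A record for the controller
keyName: my-ec2-keypair
region: eu-west-1
availabilityZone: eu-west-1a
kmsKeyArn: "arn:aws:kms:eu-west-1:123456789012:key/..."  # encrypts TLS assets
controllerInstanceType: m3.xlarge  # see the sizing watchouts later in this deck
workerCount: 3
workerInstanceType: m4.large
```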

27

What you get...

CloudFormation Stack w/:

● Controller (master) node with EIP

● Autoscaling Group/Launch Config for Worker Nodes (fixed scaling)

● A Record in Route53 for Controller

● Security Groups to allow traffic between controller and workers

● IAM Roles for both Controller and Workers

● AWS Addons (ELB, EBS integration)

28

Watchouts!

etcd high availability - build your own etcd cluster and expose it with internal ELB (CF stack)

default TLS keys 90-days expiration - replace generated TLS assets with your own

master/controller sizing:
● m3.xlarge for < 100 nodes
● m3.2xlarge for < 250 nodes
● c4.4xlarge for > 500 nodes

29

Limitations

can’t deploy the cluster into existing subnets - a fix is on the way in kube-aws 0.9

PV/PVC are available only within a single zone - because EBS volumes are confined to one AZ

30

Scaling the cluster

31

Exposing Services

Externally with ELB (NodePort implementation):

$ kubectl expose deployment nginx --port=80 --type="LoadBalancer"

Internally with ELB, via a service annotation:

kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

32

Persistent Volumes/Claims

EBS Volumes (available in single AZ)

EFS Volumes (multi-AZ, but require manual recovery)
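For illustration, a claim that dynamically provisions an EBS-backed volume using the alpha annotation available in v1.3 — the storage-class name is a placeholder:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  annotations:
    # alpha dynamic provisioning in v1.3; the resulting EBS volume
    # lives in a single AZ, so pods using it are pinned to that zone
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```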

33

Spot Instances

Import ASG to Spotinst’s Elastigroup

34

Next meetups:

meetup.com/multicloud
meetup.com/Kubernetes-Tel-Aviv
