
What’s Hot in Containers

Jay Rosenbloom

[email protected]

BRKDEV-1002

Agenda

• Introduction

• New Application Architectures

• Container Review

• Docker

• CoreOS & Rocket

• Container Networking

• Container Clusters: Kubernetes, Mesos, Marathon

• Microservices Infrastructure Framework

• Nirmata – Microservices as a Service

Cloud Native

An application designed to run in a cloud computing environment

• infrastructure agnostic – application may have resource and service requirements but it doesn’t care about the specific underlying hardware

• application components are designed as relatively simple, discoverable, re-usable services – e.g. microservices

• designed to survive failures

• designed for horizontally scaling up or down

New Application Architectures

Monolithic Apps                          Cloud Apps
server / hypervisor, IaaS                server clusters, containers
difficult to scale                       easy to scale
high impact to component failure         built for failure, system resilience
challenging to upgrade                   easy to upgrade
larger dev and ops teams                 smaller, agile devops teams

Microservices

“The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.” [1]

5 Architectural Constraints of Microservices [2]

1. Elastic – be able to scale, up or down, independently of other services in the same application.

2. Resilient – fail without impacting other services in the same application.

3. Composable – offer a uniform interface designed to support service composition.

4. Minimal – only contain highly cohesive entities.

5. Complete – be functionally complete.
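To make the constraints concrete, here is a minimal single-process sketch of one such service exposing a uniform HTTP interface. The endpoints (/health, /stock/<item>), the port handling, and the stock data are invented for illustration; they are not from the session.

```python
# A minimal sketch of one microservice: small, functionally complete,
# and composable through a uniform HTTP interface.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class InventoryService(BaseHTTPRequestHandler):
    """One small service: look up stock levels."""
    STOCK = {"widget": 3, "gadget": 0}

    def do_GET(self):
        if self.path == "/health":                      # liveness probe
            body = {"status": "up"}
        elif self.path.startswith("/stock/"):
            item = self.path.rsplit("/", 1)[-1]
            body = {"item": item, "count": self.STOCK.get(item, 0)}
        else:
            self.send_error(404)
            return
        data = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):                       # keep output quiet
        pass


def start_service(port=0):
    """Run the service on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), InventoryService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    from urllib.request import urlopen
    srv = start_service()
    url = f"http://127.0.0.1:{srv.server_port}/stock/widget"
    print(urlopen(url).read().decode())   # {"item": "widget", "count": 3}
    srv.shutdown()
```

Because the interface is plain HTTP, other services can compose with it without knowing where or how many instances run, which is what makes constraints 1 and 3 workable.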

“Disruptor: Continuous Delivery with Containerized Microservices” – Adrian Cockcroft

[1] Martin Fowler. Microservices. http://martinfowler.com/articles/microservices.html

[2] Jim Bugwadia. http://nirmata.com/2015/02/microservices-five-architectural-constraints/

[3] Adrian Cockcroft. On the State of Microservices, DockerCon Europe, 2014. http://thenewstack.io/dockercon-europe-adrian-cockcroft-on-the-state-of-microservices/

Define Container

An isolated, resource controlled application environment.

An individual Linux-based runtime environment

Infrastructure embedded containers

Virtual infrastructure environments -- Cisco Virtual Application Container Services (VACS) application environments

Containers – The Building Block

History of operating-system-level virtualization: chroot (1982), FreeBSD Jail (2000), Linux-VServer (2001), Solaris Zones (2004), OpenVZ (2005), LXC (2008), lmctfy (2013), Docker (2013) [1], systemd-nspawn, LXD, Clear Containers (Intel) [2]

A container is a sandbox environment layered on top of a host OS that provides:

• Isolation – namespaces

• Resource Limits – control groups (cgroups)

2 Perspectives:

• OS Container - A lightweight virtual machine (“heavy” container)

• App Container - A means to encapsulate and deploy a software component and all of its dependencies3

[1] http://en.wikipedia.org/wiki/Operating-system-level_virtualization

[2] https://clearlinux.org/features/clear-containers

[3] https://github.com/MatApple/docker/blob/master/README.md

Containers are almost like Virtual Machines

• Containers have their own network interface (and IP address)

• Can be bridged, routed... just like with Xen, KVM etc.

• Containers have their own file system

• For example, a Debian host can run a Fedora container (and vice versa)

• Security: containers are isolated from each other

• Two containers can’t harm (or even see) each other

• Resource control: containers are isolated and can have dedicated resources

• Soft & hard quotas for RAM, CPU, I/O...

Virtual Machines vs. Containers

[Diagram: three stacks compared. Type 1 Hypervisor (hardware → hypervisor → VMs), Type 2 Hypervisor (hardware → operating system → hypervisor → VMs), and Linux Containers (hardware → operating system → containers). Each VM carries its own guest operating system plus bins/libs for its apps; containers run directly on the host OS.]

Containers are isolated, but share the OS and, where appropriate, libs / bins.

Namespaces: Isolate System Resources

• Partition essential kernel structures to create virtual environments. E.g., you can have multiple processes with PID 42, in different environments; e.g., you can have multiple accounts called “frank” in different environments

• Different kinds of namespaces:

• pid (processes)

• net (network interfaces, routing...)

• ipc (System V IPC)

• mnt (mount points, filesystems)

• uts (hostname)

• user (UIDs)

• Namespace creation via the “clone()” system call with extra flags

• new processes inherit the parent’s namespace

• you can attach to an existing namespace (kernel >= 3.8)
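On a Linux host you can see these namespaces directly under /proc/<pid>/ns. A small Python sketch (assumes a Linux host with /proc mounted; elsewhere the function simply returns an empty dict):

```python
# Inspect which namespaces a process belongs to by reading /proc/<pid>/ns.
import os

def list_namespaces(pid="self"):
    """Map namespace kind (pid, net, ipc, mnt, uts, user, ...) to its ID."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):          # non-Linux host or no such pid
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

if __name__ == "__main__":
    # Two processes in the same container share these IDs; processes in
    # different containers see different ones.
    for kind, ident in sorted(list_namespaces().items()):
        print(kind, ident)
```

Comparing the output for a process inside a container with one outside it shows exactly which namespaces the container runtime created.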

Container Namespaces: PID Namespace

• Processes in a PID namespace don't see processes of the whole system

• Each pid namespace has a PID #1

• pid namespaces are actually nested

• A given process can have multiple PIDs

• One in each namespace it belongs to

• So you can easily access processes of children namespace

• Can't see/affect processes in parent/sibling namespace

[Diagram: PID namespaces isolate process IDs and are implemented as a hierarchy. The parent namespace (level 0, the host) sees every process (“ls /proc” shows 1 2 3 4), while each child namespace (level 1, a container) has its own PID 1 and sees only its own processes (“ls /proc” shows 1). Process P3, for example, has pid 3 in the parent namespace and pid 1 in its child namespace.]
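The hierarchy in the diagram can be modeled with a few lines of Python. This is a conceptual stand-in for the kernel’s bookkeeping, not an interface to it; the class and names are invented.

```python
# A toy model of nested PID namespaces: a process gets one PID in its own
# namespace and one in every ancestor namespace, so the parent (level 0)
# sees children's processes while children cannot see out.
class PidNamespace:
    def __init__(self, parent=None):
        self.parent = parent
        self.next_pid = 1
        self.pids = {}            # pid -> name, as "ls /proc" would show

    def spawn(self, name):
        """Assign a PID in this namespace and every ancestor namespace."""
        pids = {}
        ns = self
        while ns is not None:
            pid, ns.next_pid = ns.next_pid, ns.next_pid + 1
            ns.pids[pid] = name
            pids[ns] = pid
            ns = ns.parent
        return pids

root = PidNamespace()            # level 0 (the host)
child = PidNamespace(root)       # level 1 (a container)

root.spawn("init")               # pid 1 on the host
p3 = child.spawn("app")          # pid 1 in the container, pid 2 on the host

print(sorted(root.pids))         # [1, 2]  -- host sees both
print(sorted(child.pids))        # [1]     -- container sees only its own
print(p3[child], p3[root])       # 1 2     -- same process, two PIDs
```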

Containers as a Packaging Mechanism: Build Once – Deploy Anywhere

• Containers combine content (the “RPM”) with context (the environment the RPM was built for)

• Containers resolve RPM dependency management shortcomings

• How to resolve situations where different RPMs require different versions of a dependency package?

• RPM built with a different tool chain does not guarantee ABI compatibility

[Diagram: a container bundles the application package and its dependency packages together with configuration, environment variables, process bootstrap, file system mounts, pseudo-terminals, networking, and security settings.]

Container World Taxonomy

• Container Tools

• Docker, Rkt, repos/registries

• micro-OSs – CoreOS, RHEL Atomic, Ubuntu Snappy

• Cluster Control and Services

• Scheduler/Job Monitor – Marathon, Aurora

• Resource Managers – Mesos, Kubernetes

• Distributed key/value/lock managers – ZooKeeper, etcd, consul

• Service Orchestration/Management

• Kubernetes, Mesosphere DCOS, Docker Swarm, HashiCorp Terraform, CoreOS Tectonic

[Diagram: a layered stack. Physical & virtual cluster nodes and IaaS sit at the bottom; container tools, cluster services (scheduler, distributed frameworks), and service orchestration/management layer above; PaaS, applications, and microservices run on top, with container / service management spanning the stack.]

Docker History

• Founded in 2010 by Solomon Hykes

• Originally developed at PaaS provider dotCloud by Solomon Hykes with contributions from others.

• Docker was released as open source in March 2013

• April 2015 Docker raises another $95 million in funding bringing total to $150 million.

• Acquisitions: Kitematic, Socketplane, Koality, Orchard

As of May 21, 2015, the project had over 21,818 GitHub stars (making it the 21st most-starred GitHub project), over 5,133 forks, and nearly 936 contributors.

[Chart: Google Trends, Docker (blue) & DevOps (red)]

What is Docker? (in their own words)

• “an open platform for developers and sysadmins to build, ship, and run distributed applications.”

• “Docker Engine, a portable, lightweight runtime and packaging tool”

• “Docker Hub, a cloud service for sharing applications and automating workflows”

• “Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.”

• “As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.”

Docker Daemon/Client/Container

docker engine

• daemon – directly manages the containers on the host

• client – communicates with the docker daemon to control containers

• container – LXC or libcontainer (default)

[Diagram: the docker client communicates with the docker daemon (docker --daemon=true), which manages the containers on the host.]

Docker Images and Containers

• Images layered via union file system – enables multiple layered file systems images to be seen as one image.

[Diagram: a container’s read-only image layers (kernel/bootfs, Ubuntu base image, add open-ssl, add apache) with a writeable layer on top.]
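The layered lookup and copy-on-write semantics can be sketched with Python’s ChainMap, using path → contents dicts as stand-ins for filesystem layers. Real union drivers such as aufs or overlayfs work at the filesystem level; this only models the behavior.

```python
# Dicts stand in for image layers; ChainMap gives the unioned view.
from collections import ChainMap

base_image    = {"/etc/os-release": "Ubuntu", "/bin/sh": "<binary>"}
openssl_layer = {"/usr/bin/openssl": "<binary>"}
apache_layer  = {"/usr/sbin/apache2": "<binary>"}

writable = {}                                   # per-container writable layer
container_fs = ChainMap(writable, apache_layer, openssl_layer, base_image)

# Reads fall through to the first layer that has the path:
assert container_fs["/etc/os-release"] == "Ubuntu"

# Writes go only to the writable layer (copy-on-write): the image layers
# underneath stay pristine and can be shared by many containers.
container_fs["/etc/os-release"] = "patched"
assert writable == {"/etc/os-release": "patched"}
assert base_image["/etc/os-release"] == "Ubuntu"
print(container_fs["/etc/os-release"])          # patched
```

Sharing the read-only layers is what makes container images cheap to store and fast to start compared with full VM disk images.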

Image Stores

• Registry – public vs. private

Docker Hub registry provides central

storage and sharing of images

• Repository – holds set of images

and metadata (ex: “Ubuntu”)

• tags – more specific (ex: “14.04”)

• top-level vs. user

• Image – Image IDs

• docker image

• Image caching – libcontainer copy-on-write

Docker Image Management

• Docker uses layered image builds

• Registry and Index used to manage these builds

• New images can be created by adding layers

• Layering model allows for specialization

• Base image and select number of layers typically provided by OS supplier

• ISV/Customer/Community images enable an eco-system

• Stack optimized for individual application with minimal packaging per layer

[Diagram: progressively specialized image stacks: base distro alone; base distro + platform layer; + ISV layer; + customer content; + community content.]

Simple Docker Workflow

[Diagram: on a build host, the Docker Engine (1) builds an image from a Dockerfile, a base image pulled from a registry, and a package/source repository; (2) runs the image to create a container; (3) commits the container to a new image; and (4) pushes the image to an image registry/repository.]

Simple Docker Workflow

1. Build an image

• Create a “Dockerfile”

• docker build .

• Automated builds

2. Push image to public Docker Hub, private Docker Hub or private Docker registry

• docker commit

• docker push

3. Run/Pull image (create container)

• docker run

• Inspect, Start, Attach

Dockerfile Example

FROM ubuntu

MAINTAINER Kevin Corbin, [email protected]

RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list

RUN apt-get update

RUN apt-get -y install git python python-pip

WORKDIR /opt

RUN git clone https://github.com/datacenter/acitoolkit

WORKDIR acitoolkit

RUN python setup.py install

ACI Toolkit Environment

https://github.com/datacenter/acitoolkit/

Building an Image with a Dockerfile

• FROM – sets the base image for subsequent instructions

FROM ubuntu

• ENV – sets environment variables

ENV PATH /usr/local/nginx/bin:$PATH

• RUN – execute command and commit resulting image (build time)

RUN apt-get update && apt-get install -y \

curl \

git

• ADD and COPY – add files & directories to container file system

ADD rootfs.tar.xz /

COPY requirements.txt /tmp/

• EXPOSE – indicate ports the container will listen on

EXPOSE 80

• VOLUME – creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers.

VOLUME /var/log

• CMD – execute command or provide default parameters to ENTRYPOINT (rare). One per file.

CMD ["apache", "-DFOREGROUND"]

• ENTRYPOINT – configure container to run as an executable

ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Docker Command Line

attach Attach to a running container

build Build an image from a Dockerfile

commit Create a new image from a container's changes

cp Copy files/folders from a container's filesystem to the host path

create Create a new container

diff Inspect changes on a container's filesystem

events Get real time events from the server

exec Run a command in a running container

export Stream the contents of a container as a tar archive

history Show the history of an image

images List images

import Create a new filesystem image from the contents of a tarball

info Display system-wide information

inspect Return low-level information on a container or image

kill Kill a running container

load Load an image from a tar archive

login Register or log in to a Docker registry server

logout Log out from a Docker registry server

logs Fetch the logs of a container

port Lookup the public-facing port that is NAT-ed to PRIVATE_PORT

pause Pause all processes within a container

ps List containers

pull Pull an image or a repository from a Docker registry server

push Push an image or a repository to a Docker registry server

rename Rename an existing container

restart Restart a running container

rm Remove one or more containers

rmi Remove one or more images

run Run a command in a new container

save Save an image to a tar archive

search Search for an image on the Docker Hub

start Start a stopped container

stats Display a stream of a container's resource usage statistics

stop Stop a running container

tag Tag an image into a repository

top Lookup the running processes of a container

unpause Unpause a paused container

version Show the Docker version information

wait Block until a container stops, then print its exit code

Docker Security

Dan Walsh, Red Hat Consulting Engineer, container and SELinux expert. From “Are Docker Containers Really Secure?”, http://opensource.com/business/14/7/docker-security-selinux, July 22, 2014; and “Bringing New Security Features to Docker”, https://opensource.com/business/14/9/security-for-docker, Sept 3, 2014.

• “containers don’t contain” – i.e. don’t assume you can download random images and run them as root.

• only run applications from a trusted source.

• Treat root inside the container like root outside

• Drop privileges as quickly as possible. Run services as non-root whenever possible

• Beware: not everything in Linux is namespaced (currently only Process, Network, Mount, Hostname, IPC.)

• setenforce 1 && /usr/bin/docker -d --selinux-enabled

• apply security updates regularly

Docker Security

• Docker White paper: Introduction to Container Security

read-only mount points, copy-on-write file systems, capabilities (cap_drop), seccomp (disable syscalls), pid and network namespaces, device resource cgroups, SELinux, AppArmor, TOMOYO, GRSEC, PaX

• Center for Internet Security (CIS) Docker 1.6 Benchmark

from http://benchmarks.cisecurity.org

“This document, CIS Docker 1.6 Benchmark, provides prescriptive guidance for establishing a secure configuration posture for Docker container version 1.6. This guide was tested against Docker 1.6.0 on RHEL 7 and Debian 8.”

Docker Networking

• docker0 bridge mode – containers get unique IPs on an internal network

• host mode – containers share the host’s IP

• Docker links – container to container links within a host

• libnetwork

• OVS

• LXC bridge

• IPv6

Other projects: CoreOS Flannel, Metaswitch Calico

Docker Networking

[Diagram:

1. docker run --net=bridge (default): each container gets a unique address (e.g. eth0 at 172.17.0.2/16 and 172.17.0.3/16) on the docker0 bridge (172.17.42.1/16), which is the containers’ default gateway; iptables NAT connects them through the host’s eth0 (192.168.1.10).

2. docker run --net=host: the container shares the host’s IP address and MAC address.

3. docker run --name web --link db:webdb webapp: environment variables and /etc/hosts entries for the db container (e.g. 10.15.208.7 db) are inserted into the web container.]
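The link mechanism in case 3 can be sketched by generating the <ALIAS>_PORT_* environment variables Docker injects into the linked container. The naming pattern follows Docker’s documented convention for links; the address and port below are made-up values.

```python
# Build the env vars a `--link db:webdb` would inject into the web container.
def link_env(alias, ip, port, proto="tcp"):
    prefix = f"{alias.upper()}_PORT_{port}_{proto.upper()}"
    return {
        f"{alias.upper()}_PORT": f"{proto}://{ip}:{port}",
        prefix: f"{proto}://{ip}:{port}",
        f"{prefix}_ADDR": ip,          # linked container's address
        f"{prefix}_PORT": str(port),   # linked container's exposed port
        f"{prefix}_PROTO": proto,
    }

env = link_env("webdb", "172.17.0.2", 5432)
print(env["WEBDB_PORT_5432_TCP_ADDR"])   # 172.17.0.2
```

The web application reads these variables (or the /etc/hosts entry for the alias) instead of hard-coding the db container’s address.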

Docker Networking: libnetwork and the Container Network Model (CNM)

[Diagram: each Docker container holds a network sandbox; endpoints within the sandbox attach to networks (e.g. a backend network and a frontend network), and a container can have endpoints on more than one network.]

• Network Sandbox -- isolated environment where container network configuration lives

• Endpoint – network interface tied to a specific network

• Network – a uniquely identifiable collection of Endpoints that are able to communicate with each other

A pluggable interface. Expected to first ship in Docker 1.7. Distributed bridge plugin under development.

Docker Projects

• Swarm -- a Docker-native clustering system.

• Machine – kind of a generic boot2docker. docker run on local machine or various clouds

• Compose – define and run multi-container applications using Docker (formerly Fig). Docker links are used to enable communication between containers within a host.

• Example: Use Swarm and Machine to create a large container cluster. Then use Compose to launch a multi-container app onto it. Swarm schedules the container deployments.

• Libs under construction: libnetwork, libtrust

CoreOS, Inc.

Founded in January 2013 by

• Brandon Philips, developer at SUSE and Rackspace

• Alex Polvi, Mozilla, CloudKick, Rackspace

• $20M in funding to date including recent $12M led by Google Ventures + others

Initial Focus:

• CoreOS – a minimal Linux image that can be used as a base image for containers.

• automatic update system – 2 partitions; upgrade backup partition and reboot (like SSO)

• etcd – distributed configuration store with RESTful API.

App Container Spec

“App Container (appc) is a well-specified and community developed specification that defines an image format, runtime environment and discovery mechanism for application containers.”

The App Container (appc) spec aims to have the following properties:

• Composable - All tools for downloading, installing, and running containers should be well integrated, but independent and composable.

• Secure - Isolation should be pluggable, and the cryptographic primitives for strong trust, image auditing and application identity should exist from day one.

• Decentralized - Discovery of container images should be simple and facilitate a federated namespace and distributed retrieval. This opens the possibility of alternative protocols, such as BitTorrent, and deployments to private environments without the requirement of a registry.

• Open - The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently.

CoreOS Rocket

• Announced in a blog post by Alexander Polvi, December 1, 2014

• Rocket is an application container runtime for Linux

• Rocket can run Docker containers but its default image format and runtime environment is based on an open spec initially developed by CoreOS simply named App Container (AppC).

• No daemon needed. Requires the following command line tools

• rkt, for fetching and running images

• actool for building images (several others are available)

• Recent vendor support includes: Google Kubernetes, VMware Photon

• Recent feature release included support for pods – similar to construct in Kubernetes

Rocket – Simple Example

From github.com/coreos/rkt/README.md:

rkt trust --prefix coreos.com/etcd

Prefix: "coreos.com/etcd"

Key: "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg"

GPG key fingerprint is: 8B86 DE38 890D DB72 9186 7B02 5210 BD88 8818 2190

CoreOS ACI Builder <[email protected]>

Are you sure you want to trust this key (yes/no)? yes

Trusting "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg" for prefix "coreos.com/etcd".

Added key for prefix "coreos.com/etcd" at "/etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190"

rkt fetch coreos.com/etcd:v2.0.4

rkt: searching for app image coreos.com/etcd:v2.0.4

rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.4/etcd-v2.0.4-linux-amd64.aci

Downloading aci: [========================================== ] 3.47 MB/3.7 MB

Downloading signature from https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.4-linux-amd64.aci.asc

rkt: signature verified:

CoreOS ACI Builder <[email protected]>

sha512-1eba37d9b344b33d272181e176da111ev

rkt run coreos.com/etcd:v2.0.4

CoreOS with Docker

[Diagram: a CoreOS host running docker containers under systemd.]

$ sudo rkt --insecure-skip-verify fetch docker://redis

... (docker2aci converts docker image to ACI)

sha512-962bae14761e5e1ec121e4d49d010f29

$ sudo rkt run sha512-962bae14761e5e1ec121e4d49d010f29

$ sudo rkt --insecure-skip-verify fetch docker://ubuntu

$ sudo rkt run --interactive=true <image ID>

CoreOS Projects

• etcd – sync cluster state, distributed key-value store, lock management, leader election (Raft). Flannel stores routing in etcd. etcd is used by Kubernetes.

• flannel – builds overlay network across machines. Used by Kubernetes.

• fleet – cross-cluster scheduler, combines systemd and etcd into a distributed init.

• tectonic – “Tectonic is a platform combining Kubernetes and the CoreOS stack. Tectonic pre-packages all of the components required to build Google-style infrastructure and adds additional commercial features, such as a management console for workflows and dashboards, an integrated registry to build and share Linux containers, and additional tools to automate deployment and customize rolling updates.”

• Enterprise Registry (powered by Quay.io) – private registry, public and private options

Other Container OSs

• RedHat RHEL 7 Project Atomic Host (March 2015) – fast transactional updates with rollback, security (SELinux), Docker support, Kubernetes support, super-privileged containers

• Snappy Ubuntu Core (Dec 2014) – fast transactional updates with rollback, security (AppArmor), Docker support

• VMware Photon (April 2015) – support for Docker, rkt and Garden

Cisco Plugin – Advanced Container Networking

[Diagram: containers on a host attach to a virtual switch managed by the Cisco plugin; uplink connectivity to the spine extends VLANs and VXLANs.]

• Cisco Plugin on the host:

• Runs as separate process/container

• Can carry meta-data to apply network/policy intent

• Allows various backend

• Container Runtime: Docker/Rocket

• Scheduler Integration: Kubernetes Mesos, Docker-Swarm

• ACI on Leaf:

• VLAN/VXLAN based EPG Hand off to leaf; can do MACVLAN, SRIOV, Linux/OVS based VXLAN

• Requires discovering Leaf <-> Host connectivity

• ACI on Host:

• OpFlex with OVS on Host

Key Integration Points

[Diagram: the orchestrator drives an APIC plugin (policy instantiation) while a plugin on each host provides the data path driver and management interface (policy application via OpFlex).]

1. APIC orchestration plugin

2. Host Plugin

ACI: VMs, Bare metal, and Containers

[Diagram: an ACI fabric applies a single Application Network Profile (EXTERNAL → WEB → APP → DB, with FW and ADC services, spanning an external zone, DMZ, and trusted zone) uniformly across virtual machines on hypervisors, Docker containers, and bare-metal servers.]

The Data Center is just another Form Factor

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines by Luiz André Barroso and Urs Hölzle, Morgan & Claypool Publishers (2009)

Kubernetes (K8s)

• Open source Container Cluster Manager based on Google’s own container management system, code-named Borg and, later, Omega [1]

• Released June of 2014. Google Product Manager: Craig McLuckie.

• As of June 11, 2015: 1770 forks and 8302 stars on GitHub

• https://github.com/GoogleCloudPlatform/kubernetes

• Currently Pre-Production Beta. 1.0 release scheduled for July 21, 2015

• Enables application elastic scalability and resiliency with efficient use of resources

• Can run anywhere – on bare metal, VMs or in the cloud: GCE, AWS, Azure

• Supports Docker containers, etcd and flannel from CoreOS

• Recently announced intent to support AppC and rkt

• Other projects supporting K8s: OpenStack Magnum, RedHat OpenShift 3, Mesosphere DCOS

[1] “Large Scale Cluster Management at Google with Borg”

Kubernetes

“The initial value of containers is really that you can run it on your laptop and then you deploy the same thing in the cloud. That is [a] great thing and Docker did a particularly great job on that, but what do you do then?

Kubernetes answers that question, which is you run a fleet of containers where you have a controlled way to upgrade them, you have a controlled way to send them traffic, you can scale a service in terms of the number of containers that are included in running it, so that you can increase capacity as your load goes up.

These kind of operational things are really, I think, the important contribution of Kubernetes.”

Eric Brewer, VP of Infrastructure, Google:

Harris, Derrick. “Google Systems Guru Explains Why Containers are the Future of Computing”, Medium, 15 May 2015. Accessed 17 May 2015.

https://medium.com/s-c-a-l-e/google-systems-guru-explains-why-containers-are-the-future-of-computing-87922af2cf95

Kubernetes Architecture

Pod – a co-located group of Docker containers with shared volumes. Pods are the smallest deployable units that can be created, scheduled, and managed with Kubernetes.

Service – provides a single, stable name and address for a set of pods, acting as a basic load balancer.

Label – key/value pairs used to organize and select groups of objects.

Replication Controller – ensures that a specified number of pod "replicas" are running at any one time.

[Diagram: a Kubernetes cluster. The master server runs the apiserver, etcd, the controller manager, the scheduler, and skydns. Each node runs a kubelet and kube-proxy and hosts pods of containers. Replication controllers keep the desired number of pod replicas running, and services (IP addr/DN) front sets of pods across nodes.]
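The replication controller described above is essentially a reconciliation loop: compare the desired replica count with what is running and converge. A toy Python sketch (not the real Kubernetes control loop; the pod-naming scheme is invented):

```python
# One reconciliation pass: create or delete pods until the observed set
# matches the desired replica count.
import itertools

def reconcile(desired, pods, ids=itertools.count(1)):
    """Return the new pod set after one reconciliation pass."""
    pods = set(pods)
    while len(pods) < desired:                 # scale up / replace failures
        pods.add(f"pod-{next(ids)}")
    while len(pods) > desired:                 # scale down
        pods.remove(sorted(pods)[-1])
    return pods

pods = reconcile(3, set())                     # start 3 replicas
print(sorted(pods))                            # ['pod-1', 'pod-2', 'pod-3']
pods.discard("pod-2")                          # a node dies, a pod is lost
pods = reconcile(3, pods)                      # controller replaces it
print(len(pods))                               # 3
```

Because the loop only compares desired vs. observed state, the same mechanism handles scaling up, scaling down, and recovering from node failure.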

Kubernetes Networking

Based on CoreOS Flannel

Repeat: The Data Center is just another Form Factor

“My other computer is a data center” – Ben Hindman, Co-Founder of Mesosphere

Mesos

• Core technology from UC Berkeley AMPLab: “Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center.” Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, Ion Stoica. University of California, Berkeley. September 2010.

• Multi-resource scheduling (memory, CPU, disk and ports)

• Support for Docker containers

• Top-level Apache project - mesos.apache.org

• Scalability to 10,000s of nodes

• Large scale production use: Twitter, Airbnb, Apple, eBay, Bloomberg, Two Sigma

Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications. Mesos can run Hadoop, Jenkins, Spark, Aurora and many other applications on a shared pool of nodes.

Mesos – Static vs. Dynamic Partitioning

[Diagrams: static partitioning dedicates fixed sets of machines to each framework; dynamic partitioning lets Mesos share the whole pool of nodes across frameworks as demand changes.]

Manage Many Frameworks on One Cluster

Framework ≈ Distributed System

• Abstracts cluster resources for frameworks – frameworks need not be concerned with machines whether physical or virtual.

• Mesos provides level of abstraction below PaaS but above IaaS.

Framework Ecosystem

[Diagram: a stack with PaaS (apps/services) on top, Mesos (resources) in the middle, and IaaS (“machines”) below.]

Mesos Resource Management

Common Model: a job specifies its requirements and is scheduled when its requirements can be satisfied.

Problem: Job may have to wait. Requirements for a distributed job can be highly dynamic and thus hard to specify.

Mesos solution:

• A job specifies point in time requirements

• Mesos makes best offer (which might not exactly satisfy the requirements). Offers represent the current snapshot of available resources that a framework can use.

Mesos – A Two-level Scheduler

Example: Apache Spark

Spark requests resources from Mesos

Mesos makes offer to Spark

Spark decides which tasks to run and submits them to Mesos

The Mesos slave process on each slave node launches tasks directly, or launches a client framework executor that manages the launching of tasks (via fork/exec or threads).
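The two-level exchange above can be sketched as a toy scheduler in Python. The resource shapes and task sizes are invented, and real Mesos offers also carry disk, ports, and attributes; this shows only the offer/accept split between the two levels.

```python
# Level 1: Mesos advertises each slave's currently free resources as offers.
def make_offers(slaves):
    return [{"slave": name, **free} for name, free in slaves.items()]

# Level 2: the framework (a stand-in for Spark) decides which of its
# pending tasks fit a given offer and hands them back for launch.
def framework_accept(offer, pending_tasks):
    launched, cpus, mem = [], offer["cpus"], offer["mem"]
    for task in pending_tasks:
        if task["cpus"] <= cpus and task["mem"] <= mem:
            launched.append(task["name"])
            cpus -= task["cpus"]
            mem -= task["mem"]
    return launched

slaves = {"slave1": {"cpus": 4, "mem": 8192}, "slave2": {"cpus": 1, "mem": 1024}}
tasks = [{"name": "map-1", "cpus": 2, "mem": 2048},
         {"name": "map-2", "cpus": 2, "mem": 2048},
         {"name": "reduce-1", "cpus": 2, "mem": 2048}]

for offer in make_offers(slaves):
    print(offer["slave"], "->", framework_accept(offer, tasks))
# slave1 -> ['map-1', 'map-2']   (reduce-1 no longer fits this offer)
# slave2 -> []                   (offer too small for any pending task)
```

The key design point is that Mesos never needs to understand the framework’s requirements: it offers a snapshot of free resources, and the framework keeps the placement logic.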

Mesos Architecture

[Diagram: framework schedulers submit tasks to the Mesos master; on each mesos slave, executors run tasks as processes or threads.]

Why Run Mesos?

Multi-tenancy

• Run multiple frameworks on the cluster simultaneously

• Run multiple instances of a given framework simultaneously

Fine Grained Sharing

• Mesos isolates resources using cgroups and namespaces

• Fine grained sharing by changing resource allocations on the fly

Fault-tolerant – Master failover, tasks keep running, slave agent can fail and tasks continue to run. Framework can failover without losing tasks.

Statistics collection

Mesos Related Work in Progress

• Myriad – multiple YARN instances

• HDFS on Mesos

• Stateful Frameworks

Mesos Ecosystem

Marathon – Scheduler for Long Running Jobs

“Need to run N of something on M machines?” --Connor Doyle

• Scale up/down

• Change the cluster size

• Handle failures

“A distributed init for long running services”

Marathon

• RESTful API

• Service descriptors in JSON

• Placement constraints

• Health checks

• Dependencies

• Rolling deployment

• Docker support

“Consensus Systems” – Camille Fournier

• Distributed key/value store, lock manager – shared configuration, service discovery, leader election (master/backup).

• Consensus algorithms: Paxos, Raft

• Atomic Broadcast algorithm: Zab

• Examples:

• Apache Zookeeper – thick client/agent

• CoreOS etcd – no client, REST

• HashiCorp consul – “batteries included”

• Service Discovery – dns, http

• Monitoring/Health Checking – push on change coupled with liveness checking (Serf)

• Configuration – hierarchical key/value store, http, locking, long polling, acls

• Orchestration

• Multi-data center
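A common use of these stores is leader election via an atomic compare-and-set on a well-known key. A toy Python sketch, with a plain in-process dict standing in for the replicated, consensus-backed store that ZooKeeper, etcd, or consul would actually provide:

```python
# The store's compare_and_set is the primitive that makes election safe:
# of N candidates racing to claim the key, exactly one succeeds.
class KVStore:
    def __init__(self):
        self.data = {}

    def compare_and_set(self, key, expected, value):
        """Atomically set key to value only if it currently equals expected."""
        if self.data.get(key) != expected:
            return False
        self.data[key] = value
        return True

store = KVStore()
candidates = ["node-a", "node-b", "node-c"]
winners = [n for n in candidates
           if store.compare_and_set("/service/leader", None, n)]
print(winners)                          # ['node-a'] -- only one CAS succeeds
print(store.data["/service/leader"])    # node-a
```

In the real systems the key is also tied to a session or TTL, so that if the leader dies the key expires and a new election runs automatically.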

Mesosphere

• Founded in 2013 by Florian Leibert, Ben Hindman and Tobi Knaup, all with web scale engineering experience from the likes of Twitter and Airbnb.

• 2014 – Headquartered in San Francisco with international operations in Hamburg, Germany

• 2015 – $48.8M in funding to date by Tier 1 investors: Andreessen Horowitz & Khosla Ventures and others.

• Product: Data Center Operating System (DCOS) built on top of Mesos, Marathon and Chronos. Docker & Linux container support. Future support for Kubernetes.

• Data Center Dashboard, Data Center CLI

OpenStack and Containers

Today:

• LXC driver for Nova

• Nova-docker virt driver for docker

• Heat resource for Docker

In Progress: Project Magnum

• Container-oriented lifecycle management: scheduling, orchestration and process control

• Container orchestration system comprises a “bay”

• Initial bay types: Kubernetes & Swarm

• API resources for k8s bay-type: container, pod, service, replication controller, node

• Advantages: multitenancy (scale port # space), security

• https://wiki.openstack.org/wiki/Magnum

Cisco Cloud Microservices Framework

• Microservices infrastructure is a modern platform for rapidly deploying globally distributed services

• Multi-datacenter support

• High availability

• Security

• Open source project on Github:

https://github.com/CiscoCloud/microservices-infrastructure

• Mesos for efficient resource sharing

• Marathon for management of long running services

• Consul for service discovery

• Vault for managing secrets

• Docker container runtime

Cisco Cloud Microservices Framework – Architecture

[Architecture diagrams]

Nirmata Microservices Operations & Management

Cisco Cloud Market Place and Cisco Inter-Cloud

A software-as-a-service solution for the operations and management of cloud-native applications.

• Application Blueprints

• Policies

• Application Orchestration

• Service Networking

• Multi-cloud

• Analytics


Summary

• Containers and the related tools discussed in this session give customers the potential to build highly scalable, resilient applications and to innovate rapidly.

• It’s “early days” and the technology is rapidly evolving.

• Cisco is driving open innovation in container networking that melds with ACI and open container/cluster management frameworks.

• Look for opportunities to leverage microservices with Cisco Cloud Services.

• Open source makes it easy to start learning now.

Participate in the “My Favorite Speaker” Contest

• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)

• Send a tweet and include

• Your favorite speaker’s Twitter handle @jrosenbl

• Two hashtags: #CLUS #MyFavoriteSpeaker

• You can submit an entry for more than one of your “favorite” speakers

• Don’t forget to follow @CiscoLive and @CiscoPress

• View the official rules at http://bit.ly/CLUSwin

Promote Your Favorite Speaker and You Could Be a Winner

Complete Your Online Session Evaluation

Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online

• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.

• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.

Continue Your Education

• Demos in the Cisco campus

• Walk-in Self-Paced Labs

• Table Topics

• Meet the Engineer 1:1 meetings

• Related sessions

Thank you