
Post on 20-May-2020


Containers and Virtual Machines at Scale: A Comparative Study

Prateek Sharma, Lucas Chaufournier, Prashant Shenoy

Y.C. Tay

1

Virtualization in Data Centers and Clouds

• Data centers and clouds host multiple applications
• Virtualization and cluster management frameworks are used for managing and multiplexing resources
• Virtualization options: hardware-level and operating-system-level

2

[Figure: a cluster management framework (OpenStack, Kubernetes, …) running on top of a virtualization layer on each server]

System Virtualization Today

3

             | Virtual Machines     | Containers
Virtualizes  | Hardware             | Operating system
Example      | VMware, Xen, …       | Docker, LXC, …
Management   | OpenStack, …         | Kubernetes, …
Performance  | Noticeable overhead  | "Lightweight virtualization"
Runs on      | Guest OS             | Host operating system

Problem Statement

• How do they compare in multi-tenant data centers and cloud environments?
  1. What are the tradeoffs in application performance?
  2. How do they affect resource management and deployment?
  3. What is their influence on software development?

4

• Hardware virtualization is predominant (especially in public clouds)
• OS virtualization (containers) is being rapidly adopted

[Figure: VM (with guest OS) vs. container]

Outline

• Motivation and problem statement
• Virtualization background
• Performance comparison
• Managing and deploying applications
• Hybrid virtualization approaches
• Related work
• Conclusion

5

Hardware Virtualization

• Hypervisor exposes virtual hardware (CPU, memory, I/O)
• Applications run on top of a guest operating system
• Multiple layers between application and hardware may reduce performance

6

[Figure: hardware stack: a hypervisor (Xen, KVM, VMware, …) runs on the hardware and exposes virtual hardware to each virtual machine; each VM runs its own guest OS (Linux, Windows, …) with the application on top]

Operating System Virtualization

• Lightweight virtualization of OS resources and interfaces
• Applications run in containers and share the underlying OS
• Cgroups in Linux isolate resources (CPU, memory, I/O, …)
• Namespaces isolate process hierarchies, sockets, filesystems, …

7

[Figure: OS stack: the operating system (Linux, BSD, Solaris) runs on the hardware, and its virtualization layer (LXC, Jails, Zones) runs each application in a container]
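The cgroup CPU controller mentioned above caps a group's share of the machine with a runtime quota per scheduling period. A minimal sketch of that arithmetic (the `effective_cpus` helper is illustrative, not a kernel API):

```python
def effective_cpus(quota_us: int, period_us: int = 100_000):
    """Translate a cgroup v1 CPU bandwidth limit into a CPU count.

    cpu.cfs_quota_us is the runtime a group may use per
    cpu.cfs_period_us (default 100000); a quota of -1 means
    unlimited. E.g. a 200000us quota per 100000us period
    allows the group to use two full CPUs.
    """
    if quota_us < 0:
        return None  # unlimited
    return quota_us / period_us
```

The same mechanism is what lets the later experiments give containers and VMs identical CPU allocations.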

Outline

• Motivation and problem statement
• Virtualization background
• Performance comparison
• Managing and deploying applications
• Hybrid virtualization approaches
• Related work
• Conclusion

8

Performance Evaluation Setup

• KVM for hardware virtualization vs. Linux Containers (LXC)
• Containers and VMs given the same physical resources using cgroups
• CPU-, memory-, and I/O-intensive benchmarks
• In multi-tenant environments, applications can face interference
• Performance depends on the type of co-located application:

9

[Figure: two co-located applications (App-1, App-2) per server, illustrating three interference types]

• Competing
• Orthogonal
• Adversarial (intentional starvation)

CPU Isolation

• Container and VM CPUs pinned
• Containers can face starvation due to the shared OS CPU scheduler
• Shared OS or hypervisor components can reduce isolation

10

[Figure: kernel-compile slowdown relative to no interference (0–1.4) for VMs and containers, under competing (kernel-compile), orthogonal (file-bench), and adversarial (fork-bomb) neighbors; annotations: "10-25% slowdown", "Did Not Finish"]

• Map applications to servers carefully to minimize interference
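The pinning used in this setup can be reproduced from userspace on Linux via the `sched_setaffinity` syscall, which Python exposes directly. A small sketch (the `pin_to_cpus` wrapper is illustrative; Linux only):

```python
import os

def pin_to_cpus(cpus):
    """Restrict the calling process to a fixed set of CPU indices.

    This is the same placement that vCPU pinning gives VMs and
    cgroup cpusets give containers. Note that container processes
    are still run by the shared host scheduler, which is why
    containers remain exposed to scheduler-level interference
    even when pinned.
    """
    os.sched_setaffinity(0, cpus)      # 0 = current process
    return os.sched_getaffinity(0)     # read back the effective set
```

For example, `pin_to_cpus({0})` confines the process to CPU 0.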

I/O Isolation

• Equal allocation of I/O bandwidth using cgroups
• Disk I/O handled by the hypervisor (virtIO in KVM)
• Both VMs and containers have shared I/O paths, causing interference

11

[Figure: two VMs sharing the hypervisor's I/O path; file-bench latency relative to no interference (0–2.2) for VMs and containers, under competing (file-bench), orthogonal (kernel-compile), and adversarial (Bonnie++) neighbors; annotations: "high variance", "high average"]

Outline

• Motivation and problem statement
• Virtualization background
• Performance comparison
• Resource management and deployment
• Hybrid virtualization approaches
• Related work
• Conclusion

12

Resource Overcommitment

13

[Figure: SpecJBB throughput (0–30,000) for lxc vs. vm under memory overcommitment]

• Memory overcommitted by 2x for SpecJBB
• Excessive page faults in the guest OS, because VM memory limits are hard (VM throughput drops ~40%)
• Performance depends on the overcommitment capabilities of the virtualization layer

• Cluster managers overcommit resources to increase efficiency: allocate more resources than are actually available
• Hypervisors support overcommitment mechanisms for VMs
• The OS overcommits container resources using soft limits
• Free (but allocated) resources can be used by other applications
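The degree of overcommitment described above is just the ratio of promised resources to physical capacity. A hypothetical helper to make the bookkeeping concrete:

```python
def overcommit_ratio(allocated_limits, physical_capacity):
    """Ratio of resources promised to tenants vs. what the hardware has.

    A ratio above 1.0 means the cluster manager is overcommitting and
    relying on the virtualization layer (hypervisor ballooning for VMs,
    cgroup soft limits for containers) to lend allocated-but-unused
    resources to other tenants.
    """
    return sum(allocated_limits) / physical_capacity
```

For instance, promising two 4 GB tenants on a 4 GB host gives the 2x memory overcommitment used in the SpecJBB experiment above.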

Migration

• Live migration is used for load balancing and consolidation
• VM migration is implemented in all hypervisors
• Container migration is not production-ready
  • Involves careful preservation of OS state (process table, network sockets, open file descriptors, …)
  • Containers can instead be checkpointed and restarted

14

Memory size (GB)   Container   VM
Kernel compile     0.42        4
YCSB               4           4
SpecJBB            1.7         4
File-bench         2.2        4

• Container memory state is smaller
• VMs must also save the guest OS and cache pages
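A rough way to see why the footprints in the table matter: the initial copy of memory state during migration is proportional to the footprint. A back-of-the-envelope sketch (the `transfer_time_s` helper and the 10 Gb/s link are assumptions, and live migration's iterative dirty-page re-copying is ignored):

```python
def transfer_time_s(memory_gb: float, link_gbps: float) -> float:
    """Lower bound on migration time: one full copy of memory state.

    Live migration also re-copies pages dirtied during the transfer,
    so real times are higher, but the first pass already shows why
    footprint matters: a 0.42 GB container state moves an order of
    magnitude faster than the 4 GB VM holding the same workload.
    """
    return memory_gb * 8 / link_gbps  # GB -> gigabits, then / (Gb/s)
```

Using the table's kernel-compile row at 10 Gb/s, the VM's 4 GB takes 3.2 s for the first copy, versus roughly a third of a second for the container's 0.42 GB.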

Getting Applications Into Production

15

• Docker: 2-6x faster

Time (s)   VM (Vagrant)   Docker
MySQL      236            129
NodeJS     304            49

[Figure: image layers: base OS, standard libraries, application]

• Container images have only the application layer; VM images contain the entire OS and are larger
• Getting applications from development to production involves creating disk images
• Fast image creation enables rapid testing and continuous deployment

[Figure: workflow: install the application on a base image (Ubuntu 14.04) to produce an application image, then deploy]
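The storage effect of layered images can be sketched with simple arithmetic (the `storage` helper is illustrative; units are arbitrary):

```python
def storage(n_apps: int, base: float, app_layer: float,
            layered: bool) -> float:
    """Disk needed to store images for n applications.

    Layered registries (Docker-style) keep one copy of the shared
    base layer plus a thin per-application layer; monolithic disk
    images (typical VM images) each carry the whole OS.
    """
    if layered:
        return base + n_apps * app_layer
    return n_apps * (base + app_layer)
```

With a 400-unit base and 1-unit application layers, ten layered images need 410 units of storage, while ten monolithic images need 4010: the gap grows with every additional application, which is what makes fast image creation and continuous deployment practical.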

Big Picture

• Performance depends on workload and neighbors
• Interference because of shared resources
• Different application management and deployment capabilities
• Neither is best for all use cases and scenarios
• Can we get the best of both worlds?

16

[Figure: comparison dimensions: isolation (CPU, memory, disk, network), limits, overcommit, resource allocation, migration, image management, software engineering; no clear winner between hardware virtualization and OS virtualization]

Outline

• Motivation and problem statement
• Virtualization background
• Performance comparison
• Managing and deploying applications
• Hybrid virtualization approaches
• Related work
• Conclusion

17

Containers Inside VMs

18

[Figure: running time relative to LXC (0–1.2) for kernel compile, YCSB-read, and YCSB-update, comparing lxc, containers inside VMs (lxcvm), and vm]

• Performance comparable to VMs

[Figure: containers running applications inside VMs, on top of a hypervisor]

• Run containers inside VMs
• Isolation and security of VMs
• Image management and provisioning of containers

Promising architecture which combines the benefits of today's hardware and OS virtualization

Lightweight VMs

• Heavily optimized and stripped-down hardware VMs
• Make VMs more like containers
• Isolation properties of VMs + low overhead of containers
• Intel ClearLinux, VMware Project Bonneville
• Directly access the host file system; no virtual disk abstraction

19

Boot time (s)
Container    0.3
ClearLinux   0.8
KVM          ~10

• Fast boot times enable VMs to be used in container pipelines
• Quick deployment tests, rapid scaling, …

Related Work

• Felter et al. — single-container vs. VM performance study
• Agarwal et al. — comparison of memory footprint
• Containers in HPC (Ruiz et al.), data processing (Xavier et al.)
• Alternate virtualization architectures — unikernels, NoHype (Keller et al.)

20

Conclusion

• Containers and VMs represent two choices in system virtualization
• Different tradeoffs in performance isolation and deployment
• The "right platform" depends on workload, environment, and use case
• Nested containers and lightweight VMs are two promising hybrid strategies
• Fast-changing landscape: changes in performance and capabilities will require continuing evaluation

21

Thank You

Questions?

22

http://cs.umass.edu/~prateeks

23

Backup Slides

Baseline

24

[Figure: baseline performance: YCSB Redis operation latency (ms, 0–250) for load, read, and update; file-bench throughput (operations/second, 0–12,000); file-bench latency (ms, 0–1.0); comparing lxc and vm]

CPU with different Cgroups setting

25

[Figure: SpecJBB throughput (0–200,000) at 25%, 50%, and 75% CPU allocation, comparing the CPU-set and CPU-share cgroup settings]
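The difference between the two cgroup settings in this figure can be modeled simply: cpusets partition CPUs hard regardless of demand, while cpu.shares splits CPUs proportionally and only binds under contention. A hypothetical model of the shares split (not a kernel API):

```python
def share_of_cpus(shares, group, total_cpus):
    """CPUs a cgroup receives under cpu.shares when all groups are busy.

    cpu.shares is a soft, work-conserving limit: the proportional
    split below only applies under contention, and an idle group's
    CPUs flow to the others, unlike cpusets, which partition CPUs
    hard even when the owner is idle.
    """
    return total_cpus * shares[group] / sum(shares)
```

For example, two groups with equal shares each get half of a 4-CPU host under contention, but either one may use all 4 CPUs when the other is idle.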

More Interference

26

[Figure: SpecJBB throughput and RUBiS throughput relative to no interference (0–1.0) for LXC and VM, under competing, orthogonal, and adversarial interference]

Today’s Virtualization

Hardware Virtualization
• Applications run inside virtual machines
• VMware, Xen, KVM, …
• Can choose own guest OS

Operating System Virtualization
• Applications run inside containers
• Docker, Jails, LXC, …
• Applications share the OS
• Lightweight virtualization

[Figure: VM (with guest OS) vs. container]

27

Application Deployment

• Cluster-wide management tools help users deploy applications:
  • Placement of applications
  • Horizontal scaling

28

• VMs contain the entire OS and have larger images
• Docker stores only differences (the application layer)

Image size   VM        LXC      Docker
MySQL        1.68 GB   0.4 GB   112 KB
NodeJS       2.05 GB   0.6 GB   72 KB

[Figure: build workflow: install the app on a base image (Ubuntu 14.04), then copy and deploy the disk image]

• Docker: 2-6x smaller

29

[Figure: evaluation dimensions: baseline performance; isolation (CPU, memory, disk, network); resource overcommitment and migration; image creation; software engineering]
