TRANSCRIPT
Docker networking: Lessons learned in reaching multi-host container networking
Tony Georgiev, Software Engineer, Cloud Automation Platform at VMware
2
History
• Building a container management solution long, long ago (last October) –
https://github.com/vmware/admiral
• Intelligent policy-based scheduler
• Deploying connected containers on a single host
• Deploying disconnected containers across multiple hosts
3
Admiral’s scheduler deploys to multiple hosts
Docker release timeline. Source: http://www.slideshare.net/Docker/docker-networking-control-plane-and-data-plane
4
State of networking pre Docker 1.9
• Single-host container-to-container communication with Docker links (legacy)
• Network modes: none, host, bridge (docker0)
• 3rd-party drivers (Flannel, Weave, Calico)
5
What we tried
• DNS
• DNS load balancing (AKA poor man’s load balancing)
• The standard HAProxy container as an ambassador
• A custom-built HAProxy-based ambassador container – the agent
6
Our (old) networking solution
[Diagram: Hosts A, B, and C each run an agent next to their service (Service A, Service B, DB), connected over the network. Inside Service B, /etc/hosts points the service names at the local agent (“172.17.0.1 service-b”, “172.17.0.1 db”), and the agent binds those addresses (“bind 172.17.0.1:80 …”, “bind 172.17.0.1:3306 …”) to proxy traffic to the right host.]
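The proxying in the diagram can be sketched as an HAProxy configuration the agent might generate; the names, addresses, and ports below are illustrative and follow the diagram, not the actual generated config:

```
# Hypothetical agent-generated HAProxy fragment (layer 4, so mode tcp).
# Service B resolves "db" to 172.17.0.1 via /etc/hosts; the agent
# listens there and forwards to the host actually running the DB.
frontend db_local
    mode tcp
    bind 172.17.0.1:3306
    default_backend db_remote

backend db_remote
    mode tcp
    server host-c <Host C address>:<published port>
```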
7
Agent specs
• Based on the ambassador linking pattern
• Written in Go
• Docker image based on Alpine and PhotonOS
• Based on HAProxy with zero downtime reloading
• Configuration is pushed from the orchestrator
• Layer 4 routing (based on source IPs and ports)
• Load balancing
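The zero-downtime reloading mentioned above is commonly done with HAProxy’s soft-finish flag; a sketch (paths are illustrative):

```shell
# -sf starts a new HAProxy process and tells the old PIDs to finish
# their in-flight connections before exiting, so pushed configuration
# changes are applied without dropping traffic.
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)
```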
8
Pros
• Unobtrusive, can be deployed on any host
• Does not require any 3rd party drivers or manual host setup
• Docker compose compatible (legacy links)
• Same definition as previously used for a single host
• Works the same on a single host as on multiple hosts
9
Cons
• Different from the tools Ops teams are comfortable with
• Requires the service’s ports to be exposed
• One port per service
• An extra agent container that needs to be deployed and managed
• Not compatible with newer Docker Compose files that use networks, i.e. different from how people build apps today
10
State of networking in Docker 1.9–1.12
• Docker acquired SocketPlane.io
• Native multi-host networking (overlay)
• Control plane requires a shared KV store (1.9+) or Swarm mode (1.12, gossip-based)
• User-defined networks (a user-defined bridge is isolated from other bridges)
• Plugins & drivers
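Creating and using a user-defined overlay network is a couple of CLI calls; a sketch, with example names and subnet:

```shell
# Create a multi-host overlay network (Docker 1.9+; needs a shared
# KV store, or Swarm mode in 1.12) and attach containers to it.
docker network create -d overlay --subnet 10.0.9.0/24 my-overlay
docker run -d --net my-overlay --name web nginx
# A second container on the same overlay reaches "web" by name,
# even from a different host.
docker run -d --net my-overlay --name app alpine sleep 1d
```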
11
Docker networking under the hood
• DNS (inside the host)
• DNS-based load balancing (1.11)
Graphic source: https://sreeninet.wordpress.com/2016/07/29/service-discovery-and-load-balancing-internals-in-docker-1-12/
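The DNS-based load balancing can be seen by giving several containers the same network alias; the embedded DNS server then returns multiple A records. A sketch with example names:

```shell
# Two containers share the alias "web" on the same network.
docker network create -d overlay front
docker run -d --net front --net-alias web nginx
docker run -d --net front --net-alias web nginx
# Resolving "web" from another container on "front" returns
# both container IPs (round-robin style DNS load balancing).
docker run --rm --net front alpine nslookup web
```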
12
Docker networking under the hood
• IPVS (IP Virtual Server) – Layer 4 load balancer
• Load balancing based on a VIP & IPVS (on every container) (1.12 Swarm mode)
Graphic source: https://sreeninet.wordpress.com/2016/07/29/service-discovery-and-load-balancing-internals-in-docker-1-12/
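In 1.12 Swarm mode, a service gets a virtual IP (VIP) and IPVS on each node spreads connections to that VIP across the task IPs. A sketch, with example names:

```shell
# Create a replicated service; Swarm assigns it a VIP per network.
docker service create --name web --replicas 3 -p 8080:80 nginx
# Show the VIP(s) that IPVS load-balances behind.
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' web
```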
13
Docker networking under the hood
• VXLAN (Virtual Extensible LAN) – a network virtualization tunneling protocol
• Every host is a VTEP (VXLAN Tunnel Endpoint)
• Secure dataplane (IPSec)
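Enabling the secure (IPSec) data plane is an option at network-creation time; a sketch with an example name:

```shell
# In Swarm mode, --opt encrypted turns on IPSec encryption of the
# VXLAN traffic between the VTEP hosts for this overlay network.
docker network create -d overlay --opt encrypted secure-net
```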
14
New networking solution
[Diagram: Hosts A, B, and C each run an agent next to their service (Service A, Service B, DB). Every host is a VTEP joined to the others by VXLAN tunnels over the underlay network; the control plane uses a shared KV store (etcd, ZooKeeper, Consul, Admiral), and service discovery is DNS-based.]
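Pointing the pre-Swarm-mode (1.9–1.11) overlay control plane at a shared KV store is done via daemon flags; a sketch, with example addresses:

```shell
# Each Docker daemon joins the same KV store (Consul here) and
# advertises the interface it is reachable on to its peers.
dockerd --cluster-store=consul://10.0.0.5:8500 \
        --cluster-advertise=eth0:2376
```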
15
Demo
• https://github.com/tgeorgiev/docker-meetup
16
Useful resources
• https://www.youtube.com/watch?v=Gwdo3fo6pZg (Docker networking deep dive by Madhu Venugopal and Jana Radhakrishnan @ DockerCon 16)
• http://nerds.airbnb.com/smartstack-service-discovery-cloud/
• https://sreeninet.wordpress.com/2016/07/29/service-discovery-and-load-balancing-internals-in-docker-1-12/
• http://blog.nigelpoulton.com/demystifying-docker-overlay-networking/ (part of “Docker for Sysadmins” book)
• https://www.percona.com/blog/2016/08/03/testing-docker-multi-host-network-performance/
• https://medium.com/@lherrera/poor-mans-load-balancing-with-docker-2be014983e5#.c4gwgye25
Thank you.