Docker Session 8: Workflow
Posted on 15-Apr-2017
TRANSCRIPT
Docker Part 8: Workflow
Dawood Sayyed/GLDS, February 10, 2016, Internal
© 2015 SAP SE or an SAP affiliate company. All rights reserved. Internal
Development Workflow with Docker
$ docker pull training/namer
Our training/namer image is based on the Ubuntu image.
It contains:
• Ruby.
• Sinatra.
• Required dependencies.
Adding our source code
$ git clone https://github.com/docker-training/namer.git
$ cd namer
$ ls
Creating a container from our image
$ docker run -d \
    -v $(pwd):/opt/namer \
    -p 80:9292 \
    training/namer
• The -d flag indicates that the container should run in detached mode (in the
background).
• The -v flag provides volume mounting inside containers.
• The -p flag maps port 9292 inside the container to port 80 on the host.
• training/namer is the name of the image we will run.
Mounting volumes inside containers
The -v flag mounts a directory from your host into your Docker container. The flag structure is:
[host-path]:[container-path]:[rw|ro]
• If [host-path] or [container-path] doesn't exist it is created.
• You can control the write status of the volume with the ro and rw options.
• If you don't specify rw or ro, it will be rw by default.
Checking our new container & Viewing our application
$ docker ps
http://<yourHostIP>:80
Making a change to our application on the go
$ vi company_name_generator.rb
http://<yourHostIP>:80
Docker Part 9: Debugging
Debugging inside the container
Docker introduced a feature called docker exec.
It allows users to run a new process in a container which is already running.
It is not meant to be used for production (except in emergencies, as a sort of pseudo-
SSH), but it is handy for development.
You can get a shell prompt inside an existing container this way.
Docker exec example
$ docker exec -it <yourContainerID> bash
Docker ps command
ps -s
ps -l
ps -t
ps -m
ps -a
Docker top command
sudo docker top <yourContainerID>
sudo docker top <yourContainerID> -aef
top
The docker top command provides information about the CPU, memory, and swap usage if you run the top command inside a Docker container. If you get the error "TERM environment variable not set" while running the top command inside the container, perform the following steps to resolve it: run the echo $TERM command; you will get the result dumb. Then, run the following command:
$ export TERM=dumb
This will resolve the error.
Docker stats command
sudo docker stats <yourContainerID>
sudo docker stats <yourContainerID1> <yourContainerID2>
The docker stats command provides you with the capability to view the memory, CPU, and network usage of a container from the Docker host machine. Docker gives you access to the containers' read-only statistics parameters. This streams the CPU, memory, network I/O, and block I/O of your containers, which helps you choose resource limits and also helps in profiling. The docker stats utility provides these resource usage details only for running containers.
Docker events command
sudo docker pause <yourContainerID>
sudo docker ps -a
sudo docker unpause <yourContainerID>
sudo docker ps -a
Docker containers will report the following real-time events: create, destroy, die, export, kill, oom, pause, restart, start, stop, and unpause.
Docker logs command
sudo docker logs <yourContainerID>
sudo docker logs -t <yourContainerID>
This command fetches the logs of a container without logging in to the container. It batch-retrieves the logs present at the time of execution. These logs are the output of STDOUT and STDERR. The general usage is docker logs [OPTIONS] CONTAINER.
Docker Debugging Summary
docker exec allows you to log in to a container without running an SSH daemon in the container.
docker stats provides information about a container's memory and CPU usage.
docker events reports events such as create, destroy, kill, and so on.
docker logs fetches the logs from a container without logging in to the container.
These debugging facilities are also the foundation for investigating security vulnerabilities and holes.
Docker Part 10: Volume Management
Working with Volumes
• Create containers holding volumes.
• Share volumes across containers.
• Share a host directory with one or many containers.
Exploring Docker Volumes
Docker volumes can be used to achieve many things, including:
• Bypassing the copy-on-write system to obtain native disk I/O performance.
• Bypassing copy-on-write to leave some files out of docker commit.
• Sharing a directory between multiple containers.
• Sharing a directory between the host and a container.
• Sharing a single file between the host and a container.
Volumes are special directories in a container
Volumes can be declared in two different ways.
Within a Dockerfile, with a VOLUME instruction
VOLUME /var/lib/postgresql
On the command-line, with the -v flag for docker run.
$ docker run -d -v /var/lib/postgresql training/postgresql
In both cases, /var/lib/postgresql (inside the container) will be a volume.
Volumes bypass the copy-on-write system
Volumes act as passthroughs to the host filesystem.
• The I/O performance on a volume is exactly the same as I/O performance on
the Docker host.
• When you docker commit, the content of volumes is not brought into the
resulting image.
• If a RUN instruction in a Dockerfile changes the content of a volume, those changes are not recorded either.
Volumes can be shared across containers
You can start a container with exactly the same volumes as another one.
The new container will have the same volumes, in the same directories.
They will contain exactly the same thing, and remain in sync.
Under the hood, they are actually the same directories on the host anyway.
This is done using the --volumes-from flag for docker run. In another terminal, let's start another container with the same volume.
$ docker run -it --name alpha -v /var/log ubuntu bash
root@<container>:/# date > /var/log/now
$ docker run --volumes-from alpha ubuntu cat /var/log/now
Volumes exist independently of containers
If a container is stopped, its volumes still exist and are available.
In the last example, it doesn't matter if container alpha is running or not.
Docker Part 11: Docker Hub
Introducing Docker Hub
At the end of this lesson, you will be able to:
• Register for an account on Docker Hub.
• Log in to your account from the command line.
• Learn about how Docker Hub works.
• Learn about how to integrate Docker Hub into your development workflow.
Sign up / Activation for a Docker Hub account
Note: if you already have an account on the Index/Hub, don't create another one.
• Having a Docker Hub account will allow us to store our images in the registry.
• To sign up, you'll go to hub.docker.com and fill out the form.
• Note: your Docker Hub username has to be all lowercase.
• Check your e-mail and click the confirmation link.
Login!
Let's use our new account to log in to the Docker Hub!
$ docker login
Our credentials will be stored in ~/.dockercfg.
The .dockercfg configuration file
The ~/.dockercfg configuration file holds our Docker registry authentication credentials.
The auth section is the Base64 encoding of your username and password.
It should be owned by your user with permissions of 0600.
You should protect this file!
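As a quick illustration (using placeholder credentials, not any real account), the auth value is just the Base64 encoding of username:password, which is exactly why the file needs 0600 permissions:

```shell
# Reproduce the "auth" value that ~/.dockercfg stores, for a hypothetical
# user "user" with password "password" (placeholders only).
# Base64 is an encoding, not encryption: anyone who can read the file
# can trivially recover the plaintext credentials.
auth=$(printf '%s' 'user:password' | base64)
echo "$auth"                       # -> dXNlcjpwYXNzd29yZA==
printf '%s' "$auth" | base64 -d    # -> user:password
```

Decoding is a one-liner, so treat the file like a password: chmod 600 ~/.dockercfg and never commit it to version control.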
Repositories
• Store all public and private images in the registry
• Apply to your namespace
• Empty!
Public Repositories
• Docker Hub provides access to tens of thousands of pre-made images that you can build from.
• Some of these are official builds and live in the root namespace.
• Most are community contributed and maintained.
Official Repositories
• Are maintained by the product owners
New Repository
Pull down the Add Repository menu and select Repository.
• Leave namespace at the default (your username)
• Give your repository a name
• Type a brief description so people know what it is
• Leave Public selected
• Submit the form with Add Repository button (not shown)
Click Repositories and you will see your new repository.
• You can push images to this repository from the docker command line.
Repository Settings
You can change the following:
• Repository Description
• Webhooks
• Collaborators
• Mark as unlisted in the global search (NOT a private repository)
Collaborators
You can invite other Docker Hub users to collaborate on your projects.
• Collaborators cannot change settings in the repository.
• Collaborators can push images to the repository.
Webhooks
• Notify external applications that an image has been uploaded to the repository.
• Powerful tool for integrating with your development workflow.
• Even more powerful when used with Automated Builds.
Automated Builds
• Automatically build an image when source code is changed.
• Integrated with GitHub and Bitbucket.
• Works with public and private repositories.
• Added the same way as a regular repository: select Automated Build from the Add Repository menu.
• We'll set one of these up later!
• You will need a GitHub account to follow along later, so go ahead and create one now if you don't have one yet.
Docker Part 12: Data Containers
Data containers
$ docker run --name wwwdata -v /var/lib/www busybox true
$ docker run --name wwwlogs -v /var/log/www busybox true
A data container is a container created for the sole purpose of referencing one (or many) volumes. It is typically created with a no-op command.
We created two data containers.
• They are using the busybox image, a tiny image.
• We used the command true, possibly the simplest command in the world!
• We named each container to reference them easily later.
Using data containers
$ docker run -d --volumes-from wwwdata --volumes-from wwwlogs webserver
$ docker run -d --volumes-from wwwdata ftpserver
$ docker run -d --volumes-from wwwlogs pipestash
Using data containers
Data containers are used by other containers thanks to --volumes-from.
Consider the following (fictitious) example, using the previously created volumes:
• The first container runs a webserver, serving content from /var/lib/www and logging to /var/log/www.
• The second container runs an FTP server, allowing you to upload content to the same /var/lib/www path.
• The third container collects the logs, and sends them to logstash, a log storage and analysis system.
Managing volumes yourself
In some cases, you want a specific directory on the host to be mapped inside the container:
• You want to manage storage and snapshots yourself.
(With LVM, or a SAN, or ZFS, or anything else!)
• You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk.
• You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed).
Sharing a directory between the host and a container
$ cd
$ mkdir bindthis
$ ls bindthis
$ docker run -it -v $(pwd)/bindthis:/var/www/html/webapp ubuntu bash
$ ls bindthis
Sharing a directory between the host and a container
This will mount the bindthis directory into the container at /var/www/html/webapp.
Note that the paths must be absolute.
It defaults to mounting read-write but we can also mount read-only.
$ docker run -it -v $(pwd)/bindthis:/var/www/html/webapp:ro ubuntu bash
Those volumes can also be shared with --volumes-from.
Chaining container volumes together
Create an initial container
$ docker run -it -v /var/appvolume --name appdata ubuntu bash
Create some data in our data volume
cd /var/appvolume
echo "Hello" > data
Exit container
exit
Use a data volume from our container.
Create a new container
$ docker run -it --volumes-from appdata --name appserver1 ubuntu bash
Let’s view our data
cat /var/appvolume/data
Let's make a change to our data
echo "Good bye" >> /var/appvolume/data
Exit container
exit
Chain containers with data volumes
Create a third container
$ docker run -it --volumes-from appserver1 --name appserver2 ubuntu bash
Let's view our data
cat /var/appvolume/data
Exit container
exit
Tidy up your containers
$ docker rm -v appdata appserver1 appserver2
What happens when you remove containers with volumes?
• As long as a volume is referenced by at least one container, you will be able to access it.
• When you remove the last container referencing a volume, that volume will be orphaned.
• Orphaned volumes are not deleted (as of Docker 1.2).
• The data is not lost, but you will not be able to access it (unless you do some serious archaeology in /var/lib/docker).
Ultimately, you are the one responsible for logging, monitoring, and backup of your volumes.
Checking volumes defined by an image
Wondering if an image has volumes? Just use docker inspect:
$ docker inspect training/datavol
Checking volumes used by a container
To see which paths are actually volumes, and what they are bound to, use docker inspect (again):
$ docker inspect <yourContainerID>
We can see that our volume is present on the file system of the Docker host.
Sharing a single file between the host and a container
The same -v flag can be used to share a single file.
$ echo 4815162342 > /tmp/numbers
$ docker run -it -v /tmp/numbers:/numbers ubuntu bash
cat /numbers
It can also be used to share a socket.
$ docker run -it -v /var/run/docker.sock:/docker.sock ubuntu bash
This pattern is frequently used to give access to the Docker socket to a given container.
Connecting containers
We're going to get two images: a Redis (key-value store) image and a Ruby on
Rails application image.
• We're going to start containers from each image.
• We're going to link the container running our Rails application and the container
running Redis using Docker's link primitive.
Redis database image
$ docker pull redis:latest
$ docker images redis
Launch a container from the redis image.
$ docker run -d --name mycache redis
Let's check that the container is running:
$ docker ps -l
Our container is launched and running an instance of Redis.
• Using the --name flag we've given it a name: mycache. Remember that!
Container names are unique. We're going to use that name shortly.
Rails application image
$ docker pull nathanleclaire/redisonrails
And review it:
$ docker images nathanleclaire/redisonrails
The nathanleclaire/redisonrails Dockerfile in detail
• Based on the ruby base image from Docker Hub (provided by Docker Inc.)
• Installs the required packages with bundle install.
• Adds the Rails application itself to the /myapp directory.
• Exposes port 3000.
• Runs Ruby on Rails when a container is launched from the image.
Connecting to redis in the container
The following Ruby code will be used in /myapp/config/initializers/
redis.rb to connect to the running Redis container.
$redis = Redis.new(:host => 'redis', :port => 6379)
As we'll see in more detail later, Links provide a DNS entry for the linked container as
well as information about how to connect (IP address, ports, etc.) populated in
environment variables.
Launch a container from the nathanleclaire/redisonrails image.
Let's launch a container from the nathanleclaire/redisonrails image, without links to start.
In the Rails console we can see that $redis exists, but we did not link to any actual Redis instance.
$ docker run -it nathanleclaire/redisonrails rails console
Without access to a Redis server at the proper location the initialized $redis object will not work.
Launch and link a container
Let's try again but this time we'll link our container to our existing Redis container.
$ docker run -it --link mycache:redis nathanleclaire/redisonrails rails console
• The --link flag connects one container to another.
• We specify the name of the container to link to, mycache, and an alias for the link, redis, in the format name:alias.
• We can use $redis in an ActiveRecord class to create data models that have the speed of in-memory lookups.
Under the hood!
Before Q&A: what next?
Try all five use cases on Monsoon!
Docker Part 13: Swarm
Docker Part 14: PaaS
Docker Part 15: CaaS
Apache Spark
Apache Spark is a data processing engine for large data sets. Apache Spark is much faster (up to 100 times faster in memory) than Apache Hadoop MapReduce. In cluster mode, Spark applications run as independent processes coordinated by the SparkContext object in the driver program, which is the main program. The SparkContext may connect to several types of cluster managers to allocate resources to Spark applications. The supported cluster managers include the Standalone cluster manager, Mesos, and YARN. Apache Spark is designed to access data from varied data sources including HDFS, Apache HBase, and NoSQL databases such as Apache Cassandra and MongoDB. In this chapter we shall use the same CDH Docker image that we used for several of the Apache Hadoop frameworks, including Apache Hive and Apache HBase. We shall run an Apache Spark Master in cluster mode using the YARN cluster manager in a Docker container.
Setting the Environment
Running the Docker Container for CDH
Running Apache Spark Job in yarn-cluster Mode
Running Apache Spark Job in yarn-client Mode
Running the Apache Spark Shell
Docker Part 16: CI/CD
Use Case 3 (Running Jenkins Master and Slaves on Docker)
Continuous Integration (CI) with Docker and Jenkins
docker pull jenkins
docker run -d -i -t -p 8086:8080 jenkins
docker run --name masterjenkins -d -i -t -p 8086:8080 jenkins
docker run --name slavejenkins -d -i -t -p 8087:8080 jenkins
docker run --name slavejenkins1 -d -i -t -p 8088:8080 jenkins
Docker Part 17: Compose
Setting up Docker Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
Install curl because it is not installed by default:
sudo apt-get install curl
Download Docker Compose for Xubuntu (Linux):
curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-Linux-x86_64 > /tmp/docker-compose
Steps for running Docker Compose
Compose is great for development, testing, and staging environments, as well as CI workflows.
Using Compose is basically a three-step process.
1) Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
2) Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
3) Lastly, run docker-compose up and Compose will start and run your entire app.
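The three steps above can be sketched with a minimal docker-compose.yml. This is an illustrative example, not one from the slides: the service names (web, redis), the port numbers, and the image tag are all hypothetical placeholders, written in the v1 Compose format that matches the Compose 1.2/1.4 releases mentioned in this deck:

```yaml
# docker-compose.yml (v1 format) - two services started with one command
web:
  build: .             # step 1: the app's environment comes from ./Dockerfile
  ports:
    - "8080:3000"      # host port 8080 -> container port 3000
  links:
    - redis            # reachable from the web container under the alias "redis"
redis:
  image: redis:latest  # off-the-shelf image, nothing to build
```

With this file in place, step 3 is simply running docker-compose up in the same directory: Compose builds web, pulls redis, and starts both containers with the link in place.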
What is Docker Compose?
The docker-compose tool is a very simple, yet powerful tool that has been conceived and concretized to facilitate the running of a group of Docker containers. In other words, docker-compose is an orchestration framework that lets you define and control a multi-container service. It enables you to create a fast and isolated development environment, as well as the ability to orchestrate multiple Docker containers in production. The docker-compose tool internally leverages the Docker engine for pulling images, building the images, starting the containers in the correct sequence, and making the right connectivity/linking among the containers/services based on the definition given in the docker-compose.yml file.
Installation for Docker Compose
Using the wget tool:
$ sudo sh -c 'wget -qO- https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose; chmod +x /usr/local/bin/docker-compose'
Using the curl tool:
$ sudo sh -c 'curl -sSL https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose; chmod +x /usr/local/bin/docker-compose'
Using the pip installer:
$ sudo pip install -U docker-compose
Docker-Compose.yml file
The docker-compose.yml file is a YAML Ain't Markup Language (YAML) format file, which is a human-friendly data serialization format. The default docker-compose file is docker-compose.yml, which can be changed using the -f option of the docker-compose tool.
Docker-Compose command
The docker-compose tool provides sophisticated orchestration functionality with a handful of commands. All the docker-compose commands use the docker-compose.yml file as the base to orchestrate one or more services. The following is the syntax of the docker-compose command: docker-compose [<options>] <command> [<args>...]
The docker-compose tool supports the following options:
• --verbose: This shows more output
• --version: This prints the version and exits
• -f, --file <file>: This specifies an alternate file for docker-compose
• -p, --project-name <name>: This specifies an alternate project name
Docker-Compose command
The docker-compose tool supports the following commands:
• build: This builds or rebuilds services
• kill: This kills containers
• logs: This displays the output from the containers
• port: This prints the public port for a port binding
• ps: This lists the containers
• pull: This pulls the service images
Docker-compose command
• rm: This removes the stopped containers
• run: This runs a one-off command
• scale: This sets a number of containers for a service
• start: This starts services
• stop: This stops services
• up: This creates and starts containers
Docker Compose Use Case 6
• Redis: This is a key-value database used to store a key and its associated value
• Node.js: This is a JavaScript runtime environment used to implement web server functionality as well as the application logic
Each of these services is packed inside two different containers that are stitched together using the docker-compose tool.
Docker Compose
1. The docker-compose commands must be executed from the directory in which the docker-compose.yml file is stored. The docker-compose tool considers each docker-compose.yml file as a project, and it assumes the project name from the docker-compose.yml file's directory. Of course, this can be overridden using the -p option. So, as a first step, let's change to the directory where the docker-compose.yml file is stored:
$ cd ~/example
2. Build the services using the docker-compose build command:
$ sudo docker-compose build
Docker Compose
3. Proceed to bring up the services as indicated in the docker-compose.yml file, using the docker-compose up command:
$ sudo docker-compose up
4. Having successfully orchestrated the services using the docker-compose tool, let's invoke the docker-compose ps command from a different terminal to list the containers associated with the example docker-compose project:
$ sudo docker-compose ps
Docker Compose
5. Explore the functionality of our own request/response web application on a
different terminal of the Docker host, as illustrated here:
$ curl http://0.0.0.0:8080
Append a docker-compose command to the URL, for example:
$ curl http://0.0.0.0:8080/build
With very minimal effort, and the help of the docker-compose.yml file, we are able to compose two different services together and offer a composite service.
Docker Part 18: Security
What can we do with Docker API access?
Someone who has access to the Docker API will have full root privileges on the Docker host.
If you give root privileges to someone, assume that they can do anything they like to the host, including:
• Accessing all data.
• Changing all data.
• Creating new user accounts and changing passwords.
• Installing stealth rootkits.
• Shutting down the machine.
Accessing the host filesystem
To do that, we will use -v to expose the host filesystem inside a container:
$ docker run -v /:/hostfs ubuntu cat /hostfs/etc/passwd
If you want to explore the host filesystem freely:
$ docker run -it -v /:/hostfs -w /hostfs ubuntu bash
Modifying the host filesystem
Volumes are read-write by default, so let's create a dummy file on the host filesystem:
$ docker run -it -v /:/hostfs ubuntu touch /hostfs/hi-there
$ ls -l /
Note: if you are using boot2docker or a remote Docker host, you won't see the hi-there file. It will be in the boot2docker VM, or on the remote Docker host instead.
Privileged containers
If you start a container with --privileged, it will be able to access all devices and
perform all operations.
For instance, it will be able to access the whole kernel memory by reading (and even
writing!) /dev/kcore.
A container could also be started with --net host and --privileged together,
and be able to sniff all the traffic going in and out of the machine.
Other harmful operations
We won't explain how to do this (because we don't want you to break your Docker
machines), but with access to the Docker API, you can:
• Add user accounts.
• Change password of existing accounts.
• Add SSH key authentication to existing accounts.
• Insert kernel modules.
• Run malicious processes and insert special kernel code to hide them.
What to do?
• Do not expose the Docker API to the general public.
• If you expose the Docker API, secure it with TLS certificates.
• TLS certificates will be presented in the next section.
• Make sure that your users are trained not to give away credentials.
Security of containers themselves
• "Containers Do Not Contain!"
• Containers themselves do not have security features.
• Security is ensured by a number of other mechanisms.
• We will now review some of those mechanisms.
Do not run processes as root
• By default, Docker runs everything as root.
• This is a security risk.
• Docker might eventually drop root privileges automatically, but until then, you
should specify USER in your Dockerfiles, or use su or sudo.
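As a sketch, dropping root in a Dockerfile can look like this. The image, user name, and command here are hypothetical examples, not part of the training images used in these slides:

```dockerfile
# Hypothetical sketch: create an unprivileged user and switch to it
# before the container's main process starts.
FROM ubuntu
RUN useradd --create-home appuser
USER appuser
CMD ["whoami"]
```

RUN instructions placed before the USER directive still execute as root, so anything that genuinely needs root (package installation, user creation) can be done there; everything after USER, including the CMD, runs as the unprivileged user.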
Don't colocate security-sensitive containers
• If a container contains security-sensitive information, put it on its own Docker host, without other containers.
• Other containers (private development environments, non-sensitive applications...) can be put together.
Run AppArmor or SELinux
• Both of these will provide you with an additional layer of protection if an attacker is able to gain elevated access.
Securing Docker with TLS / Why should I care?
• Understand how Docker uses TLS to secure and authorize remote clients
• Create a TLS Certificate Authority
• Create TLS Keys
• Sign TLS Keys
• Use these keys with Docker
• Docker does not have any access controls on its network API unless you use TLS!
What is TLS?
• TLS is Transport Layer Security.
• The protocol that secures websites with https URLs.
• Uses Public Key Cryptography to encrypt connections.
• Keys are signed with Certificates which are maintained by a trusted party.
• These Certificates indicate that a trusted party believes the server is who it says it is.
• Each transaction is therefore encrypted and authenticated.
How Docker Uses TLS
• Docker provides mechanisms to authenticate both the server and the client to each other.
• Provides strong authentication, authorization and encryption for any API connection over the network.
• Client keys can be distributed to authorized clients
Environment Preparation
• You need to make sure that OpenSSL version 1.0.1 is installed on your machine.
• Make a directory for all of the files to reside.
• Make sure that the directory is protected and backed up!
• Treat these files the same as a root password
Creating a Certificate Authority
First, initialize the CA serial file and generate CA private and public keys:
We will use the ca.pem file to sign all of the other keys later.
$ echo 01 > ca.srl
$ openssl genrsa -des3 -out ca-key.pem 2048
$ openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem
Create and Sign the Server Key
Now that we have a CA, we can create a server key and certificate signing request. Make sure that CN matches the hostname you run the Docker daemon on:
$ openssl genrsa -des3 -out server-key.pem 2048
$ openssl req -subj '/CN=<Your Hostname Here>' -new -key server-key.pem -out server.csr
$ openssl rsa -in server-key.pem -out server-key.pem
Next we're going to sign the key with our CA:
$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
    -out server-cert.pem
Create and Sign the Client Key
$ openssl genrsa -des3 -out client-key.pem 2048
$ openssl req -subj '/CN=client' -new -key client-key.pem -out client.csr
$ openssl rsa -in client-key.pem -out client-key.pem
To make the key suitable for client authentication, create an extensions config file:
$ echo extendedKeyUsage = clientAuth > extfile.cnf
$ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
    -out client-cert.pem -extfile extfile.cnf
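Putting the CA, server, and client steps together, here is a self-contained sketch of the whole signing workflow. It deviates from the slides in one deliberate way: the -des3 passphrase protection is dropped so the script runs non-interactively (do passphrase-protect your real keys), and the hostname in the server CN is a placeholder value.

```shell
#!/bin/sh
# Sketch of the full CA / server / client key workflow from these slides,
# run non-interactively (no -des3, so the private keys are NOT
# passphrase-protected -- fine for a throwaway demo, not for production).
set -e
dir=$(mktemp -d)
cd "$dir"

echo 01 > ca.srl
# CA key and self-signed CA certificate (-subj avoids interactive prompts)
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem -subj '/CN=demo-ca'

# Server key, CSR, and CA-signed certificate (CN is a placeholder hostname)
openssl genrsa -out server-key.pem 2048
openssl req -subj '/CN=docker-host.example.com' -new -key server-key.pem -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
    -CAserial ca.srl -out server-cert.pem

# Client key, CSR, clientAuth extension, and CA-signed certificate
openssl genrsa -out client-key.pem 2048
openssl req -subj '/CN=client' -new -key client-key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile.cnf
openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
    -CAserial ca.srl -out client-cert.pem -extfile extfile.cnf

# Both certificates should verify against our CA
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem | tee verify.out
```

If everything worked, the last command reports both certificates as OK, and the resulting ca.pem, server-*.pem, and client-*.pem files are the ones the daemon and client configuration slides below expect.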
Configuring the Docker Daemon for TLS
• By default, Docker does not listen on the network at all.
• To enable remote connections, use the -H flag.
• The assigned port for Docker over TLS is 2376.
$ sudo docker -d --tlsverify \
    --tlscacert=ca.pem --tlscert=server-cert.pem \
    --tlskey=server-key.pem -H=0.0.0.0:2376
Configuring the Docker Client for TLS
If you want to secure your Docker client connections by default, you can move the key files to the .docker directory in your home directory. Set the DOCKER_HOST variable as well.
$ cp ca.pem ~/.docker/ca.pem
$ cp client-cert.pem ~/.docker/cert.pem
$ cp client-key.pem ~/.docker/key.pem
$ export DOCKER_HOST=tcp://<yourHostIP>:2376
Then you can run docker with the --tlsverify option.
$ docker --tlsverify ps
Docker Part 19 API, Dawood Sayyed/GLDS, February 10, 2016
Docker API
• Work with the Docker API.
• Create and manage containers with the Docker API.
• Manage images with the Docker API.
Introduction to the Docker API
So far we've used Docker's command line tools to interact with it. Docker also has a
fully fledged RESTful API you can work with.
The API allows you to:
• Build images.
• Run containers.
• Manage containers.
Docker API details
The Docker API is:
• Broadly RESTful with some commands hijacking the HTTP connection for STDIN, STDERR, and STDOUT.
• The API binds locally to unix:///var/run/docker.sock but can also be bound to a network interface.
• Not authenticated by default.
• Securable with certificates.
In the examples below, we will assume that Docker has been setup so that the API
listens on port 2375, because tools like curl can't talk to a local UNIX socket directly.
Testing the Docker API
Let's start by using the info endpoint to test the Docker API.
This endpoint returns basic information about our Docker host.
$ curl --silent -X GET http://localhost:2375/info \
    | python -mjson.tool
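The command above pipes the raw JSON through python -mjson.tool to make it readable. The same pretty-printing can be done in a few lines of stdlib Python; the sample payload below is illustrative, not taken from a real daemon:

```python
import json

# Pretty-print an /info-style response the way `python -mjson.tool` does.
# The field values here are made up for illustration.
sample_info = '{"Containers": 2, "Images": 7, "Driver": "aufs", "Name": "docker-host"}'
pretty = json.dumps(json.loads(sample_info), indent=4, sort_keys=True)
print(pretty)
```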
Doing docker run via the API
It is simple to do docker run with the CLI, but it is more complex with the API. It involves multiple calls.
We will focus on detached containers for now (i.e., running in the background).
Interactive containers involve hijacking the HTTP connection. This is easily handled with
Docker client libraries, but for now, we will use regular tools like curl.
Container lifecycle with the API
To run a container, you must:
• Create the container. It is then stopped, but ready to go.
• Start the container.
• Optionally, you can wait for the container to exit.
• You can also retrieve the container output (logs) with the API.
Each of those operations corresponds to a specific API call.
"Create" vs. "Start"
The create API call creates the container, and gives us the ID of the newly created container. The container does not run yet, though.
The start API call tells Docker to transition the container from "stopped" to "running".
Those are two different calls, so you can attach to the container before starting it, to make sure that you will not miss any output from the container, for instance.
Some parameters (e.g. which image to use, memory limits) must be specified with
create; others (e.g. port and volume mappings) must be specified with start.
To see the list of all parameters, check the API reference documentation.
Creating a new container via the API
Let's use curl to create a simple container.
• You can see the container ID returned by the API.
• The Cmd parameter has to be a list.
(If you put echo hello world it will try to execute a binary called echo
hello world.)
• You can add more parameters in the JSON structure.
• The only mandatory parameter is the Image to use.
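The rules above can be sketched as a small helper that builds the JSON body for the create call. The image name and command are illustrative; the field names ("Image", "Cmd") are the ones the classic Docker Remote API expects for POST /containers/create:

```python
import json

# Minimal sketch: build the JSON body for POST /containers/create.
# "Image" is the only mandatory field; "Cmd" must be a list of strings,
# NOT a single string (a single "echo hello world" string would make
# Docker look for a binary literally named "echo hello world").
def create_payload(image, cmd):
    return json.dumps({"Image": image, "Cmd": list(cmd)})

body = create_payload("ubuntu", ["echo", "hello", "world"])
print(body)
```

This body would then be POSTed with curl's -d flag and a Content-Type of application/json, as in the start call on the next slide.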
Starting our new container via the API
In the previous step, the API gave you a container ID.
You will have to copy-paste that ID.
$ curl -X POST -H 'Content-Type: application/json' \
    http://localhost:2375/containers/<yourContainerID>/start \
    -d '{}'
No output will be shown (unless an error happens).
Inspecting our launched container
We can also inspect our freshly launched container.
$ curl --silent \
http://localhost:2375/containers/<yourContainerID>/json |
python -mjson.tool
• It returns the same hash that the docker inspect command returns.
Waiting for our container to exit and checking its status code
Our test container will run and exit almost instantly.
But for containers running for a longer period of time, we can call the wait endpoint.
The wait endpoint also gives the exit status of the container.
$ curl --silent -X POST \
http://localhost:2375/containers/<yourContainerID>/wait
• Note that you have to use a POST method here.
• A StatusCode of 0 means that the process exited normally, without error.
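The wait endpoint returns a tiny JSON document; extracting the status code from it looks like this (the payload is a made-up sample, not a live API response):

```python
import json

# Parse the JSON body that /containers/<id>/wait returns, e.g. {"StatusCode": 0}.
# A StatusCode of 0 means the process exited normally.
sample_wait_response = '{"StatusCode": 0}'
status = json.loads(sample_wait_response)["StatusCode"]
print("exited cleanly" if status == 0 else "exited with error %d" % status)
```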
Viewing container output (logs)
Our container is supposed to echo hello world.
Let's verify that.
$ curl --silent \
http://localhost:2375/containers/<yourContainerID>/logs?stdout=1
• There are other options, to select which streams to see (stdout and/or stderr), whether or not to show timestamps, and to follow the logs (like tail -f does).
• Check the API reference documentation to see all available options.
Stopping a container
We can also stop a container using the API.
$ curl --silent -X POST \
http://localhost:2375/containers/<yourContainerID>/stop
• Note that you have to use a POST call here.
• If it succeeds it will return an HTTP 204 response code.
Working with images
$ curl -X GET http://localhost:2375/images/json?all=0
• Returns a hash of all images.
Searching the Docker Hub for an image
We can also search the Docker Hub for specific images.
$ curl -X GET http://localhost:2375/images/search?term=training
This returns a list of images and their metadata.
Creating an image
We can then add one of these images to our Docker host.
$ curl -i -v -X POST \
http://localhost:2375/images/create?fromImage=training/namer
{"status":"Pulling repository training/namer"}
This will pull down the training/namer image and add it to our Docker host.
Docker Part 20 OpenStack, Dawood Sayyed/GLDS, February 10, 2016
Docker Part 21 Monitoring, Dawood Sayyed/GLDS, February 10, 2016
Docker Part 22 Diego CF, Dawood Sayyed/GLDS, February 10, 2016
Containers on Openstack
Magnum is an OpenStack API service developed by the OpenStack Containers Team that makes container orchestration engines such as Docker and Kubernetes available as first-class resources in OpenStack. Magnum uses Heat to orchestrate an OS image that contains Docker and Kubernetes, and runs that image in either virtual machines or bare metal in a cluster configuration.
1) How is Magnum different from Nova?
Magnum provides a purpose built API to manage application containers, which have a distinctly different life cycle and operations than Nova (machine) Instances. We actually use Nova instances to run our application containers.
Containers on Openstack
2) How is Magnum different than Docker or Kubernetes?
Magnum offers an asynchronous API that's compatible with Keystone, and a complete multi-tenancy implementation. It does not perform orchestration internally, and instead relies on OpenStack Orchestration. Magnum does leverage both Kubernetes and Docker as components.
3) Is this the same thing as Nova-Docker?
No, Nova-Docker is a virt driver for Nova that allows containers to be created as Nova instances. This is suitable for use cases when you want to treat a container like a lightweight machine. Magnum provides container specific features that are beyond the scope of Nova's API, and implements its own API to surface these features in a way that is consistent with other OpenStack services. Containers started by Magnum are run on top of Nova instances that are created using Heat.
Containers on Openstack
4) Who is Magnum for?
Magnum is for OpenStack cloud operators (public or private) who want to offer a self-service solution to provide containers to their cloud users as a managed hosted service. Magnum simplifies the required integration with OpenStack, and allows for cloud users who can already launch cloud resources such as Nova Instances, Cinder Volumes, Trove Databases, etc. to also create application containers to run applications in an environment that provides advanced features that are beyond the scope of existing cloud resources. The same identity credentials used to create IaaS resources can be used to run containerized applications using Magnum. Some examples of advanced features available with Magnum are the ability to scale an application to a specified number of instances, to cause your application to automatically re-spawn an instance in the event of a failure, and to pack applications together more tightly than would be possible using Virtual Machines.
Containers on Openstack
5) Will I get the same thing if I use the Docker resource in Heat?
No, the Docker Heat resource does not provide a resource scheduler, or a choice of container technology used. It is specific to Docker, and uses Glance to store container images. It does not currently allow for layered image features, which can cause containers to take longer to start than if layered images are used with a locally cached base image. Magnum leverages all of the speed benefits that Docker offers.
Containers on Openstack
6) What does multi-tenancy mean in Magnum (Is Magnum Secure)?
Resources such as Containers, Services, Pods, Bays, etc. started by Magnum can only be viewed and accessed by users of the tenant that created them. Bays are not shared, meaning that containers will not run on the same kernel as neighboring tenants. This is a key security feature that allows containers belonging to the same tenant to be tightly packed within the same Pods and Bays, but runs separate kernels (in separate Nova Instances) between different tenants. This is different than using a system like Kubernetes without Magnum, which is intended to be used only by a single tenant, and leaves the security isolation design up to the implementer. Using Magnum provides the same level of security isolation as Nova provides when running Virtual Machines belonging to different tenants on the same compute nodes.
Beware... the next big thing is coming here for IoT, edge, and smart data.
"Good folks would have found you in Star Wars Episode 6 if you'd used lightweight Docker"... adage from Yoda to Luke Skywalker
Thank you
Contact information: Dawood Sayyed, Dawood.sayyed@sap.com
Agenda
• What is what in our use cases?
• Containers vs. VMs
• KVM vs. LXC
• LXC vs. Docker
• Why Docker?
• Docker architecture
• Installing Docker
• Use cases
• Introduction to SDN with Docker
• Hadoop use case with Docker
• Future topics!
Docker Part 23 HANA DB, Dawood Sayyed/GLDS, April 18, 2016
Prerequisite for HANA DB container
1) Access to JFrog Artifactory
2) A Monsoon instance with more than 25 GB
Docker file for HDB
$ mkdir HDB
$ cd HDB
$ vim Dockerfile
FROM docker.wdf.sap.corp:51002/centos
MAINTAINER Dawood Sayyed Dawood.sayyed@sap.com
ENV number=00 sid=LHA sapadm_password=Abcd1234 password=Abcd1234 system_user_password=Abcd1234 SOURCE=/mnt config=/hana_install.cfg
$ docker build -t hdb .
$ docker images
Docker file for HDB
$ cat dockerfile_hdb
FROM docker.wdf.sap.corp:51002/sles:11u3
MAINTAINER DPSO_GLDS dpso_glds@sap.com
RUN zypper --non-interactive install --auto-agree-with-licenses syslog-ng libaio libnuma1 liblt17
ENV number=00 sid=LHA sapadm_password=Abcd1234 password=Abcd1234 system_user_password=Abcd1234 SOURCE=/mnt config=/hana_install.cfg
COPY run_hana.sh /
COPY hana_install.cfg /
EXPOSE 30015 50013 1129 6379 30013
CMD /run_hana.sh
Docker Part 24 HDB/Kubernetes, Dawood Sayyed/GLDS, April 18, 2016