
Page 1: OpenStack: Inside Out


Copyright (C) 2014 National Institute of Informatics, All rights reserved.

OpenStack: Inside Out

Etsuji Nakai, Senior Solution Architect

Red Hat

ver1.0 2014/02/22

Page 2: OpenStack: Inside Out

2

Copyright (C) 2014 National Institute of Informatics, All rights reserved.

$ who am i

■ Etsuji Nakai
- Senior solution architect and cloud evangelist at Red Hat.
- The author of the "Professional Linux Systems" book series.
  ● Available in Japanese/Korean. Translation offers from publishers are welcome ;-)

Book titles:
- Self-study Linux: Deploy and Manage by Yourself
- Professional Linux Systems: Deployment and Management
- Professional Linux Systems: Network Management
- Professional Linux Systems: Technology for the Next Decade

Page 3: OpenStack: Inside Out


Contents

■ Overview of OpenStack

■ Major components of OpenStack

■ Internal architecture of Nova and Cinder

■ Architecture overview of Neutron

■ Internal architecture of LinuxBridge plugin

■ Internal architecture of Open vSwitch plugin

■ Configuration steps of virtual network

Note: Use of RDO (Grizzly) is assumed in this document.

Page 4: OpenStack: Inside Out


Overview of OpenStack

Page 5: OpenStack: Inside Out


Computing resources in OpenStack cloud

■ The end-users of OpenStack can create and configure the following computing resources in their private tenants through the web console and/or APIs.

- Virtual Network

- VM Instances

- Block volumes

[Diagram: an OpenStack user's project environment, containing VM instances (OS), block volumes holding user data, virtual switches, and a virtual router connected to the external network]

■ Each user belongs to one or more projects.

- Users in the same project share the common computing resources in their project environment.

- Each project owns (virtual) computing resources which are independent of other projects.

Page 6: OpenStack: Inside Out


Logical view of OpenStack virtual network

■ Each tenant has its own virtual router which works like "the broadband router in your home network."

- Tenant users add virtual switches behind the router and assign private subnet addresses to them. It's possible to use overlapping subnets with other tenants.

■ When launching an instance, the end-user selects virtual switches to connect it.

- The number of virtual NICs of the instance corresponds to the number of switches to connect. Private IPs are assigned via DHCP.

[Diagram: virtual routers for tenant A and tenant B connect the external network to per-tenant virtual switches (192.168.101.0/24 and 192.168.102.0/24)]

Page 7: OpenStack: Inside Out


Private IP and Floating IP

■ When accessing from the external network, a "Floating IP" is attached to the VM instance.

- A range of IP addresses of the external network which can be used as Floating IPs is pooled and distributed to each tenant in advance.

- A Floating IP is NAT-ed to the corresponding Private IP on the virtual router.

- Access from a VM instance to the external network is possible without assigning a Floating IP; the IP masquerade feature of the virtual router is used in this case.

[Diagram: clients connect from the external network to the web server via its Floating IP; the web server and DB server connect to each other via their Private IPs]
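The translation rules above can be sketched in a few lines. This is an illustrative Python model, not OpenStack code; the router's external address 172.16.1.1 is a hypothetical value, while the other addresses are taken from the examples later in this document.

```python
# Illustrative model of the virtual router's NAT behavior (not OpenStack code).
class VirtualRouter:
    def __init__(self, external_ip):
        self.external_ip = external_ip   # shared address used for IP masquerade
        self.floating_map = {}           # Floating IP -> Private IP (1:1 NAT)

    def associate(self, floating_ip, private_ip):
        self.floating_map[floating_ip] = private_ip

    def inbound(self, dst_ip):
        """DNAT: translate the destination of an inbound packet."""
        # Only instances with an associated Floating IP are reachable from outside.
        return self.floating_map.get(dst_ip)

    def outbound(self, src_ip):
        """SNAT: translate the source of an outbound packet."""
        for floating_ip, private_ip in self.floating_map.items():
            if private_ip == src_ip:
                return floating_ip       # 1:1 NAT for instances with a Floating IP
        return self.external_ip          # IP masquerade for everything else

router = VirtualRouter("172.16.1.1")                  # hypothetical router address
router.associate("172.16.1.101", "192.168.101.3")

print(router.inbound("172.16.1.101"))    # 192.168.101.3
print(router.outbound("192.168.101.4"))  # 172.16.1.1 (masqueraded)
```

An instance without a Floating IP is thus invisible from outside (inbound returns nothing) but can still reach the external network through the masqueraded address.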

Page 8: OpenStack: Inside Out


VM instance creation

■ When launching a new VM instance, the following options should be specified.

- Instance type (flavor)

- Template image

- Virtual network

- Security group

- Key pair

[Diagram: the template image (OS) is downloaded when the instance starts; the instance can connect to multiple networks, including the external network, and is protected by a security group]

Supported import image formats:

  Format        Description
  raw           Flat image file
  AMI/AKI/ARI   Used with Amazon EC2
  qcow2         Used with Linux KVM
  VDI           Used with VirtualBox
  VMDK          Used with VMware
  VHD           Used with Hyper-V

Page 9: OpenStack: Inside Out


Key pair authentication for SSH connection

■ A user registers his/her public key in advance. It's injected into the guest OS when launching a new instance.

- Key pairs are registered per user. They are not shared among multiple users.

[Diagram: key pair authentication flow]
(1) The user registers the public key in advance; it is stored in the user information database.
(2) The public key is injected into the guest OS of the new VM instance.
(3) The user authenticates with the secret key.
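What "injecting" in step (2) amounts to inside the guest is appending the public key to the login user's authorized_keys file with SSH-friendly permissions. The sketch below models only that final step (in practice it is typically done by cloud-init, which fetches the key from the metadata service); it runs against a temporary directory instead of a real home directory.

```python
# Sketch of step (2): appending the public key to authorized_keys with the
# permissions sshd requires. Simplified model of what cloud-init does.
import os
import tempfile

def inject_public_key(home_dir, public_key):
    ssh_dir = os.path.join(home_dir, ".ssh")
    os.makedirs(ssh_dir, exist_ok=True)
    os.chmod(ssh_dir, 0o700)                   # sshd refuses group/world access
    auth_keys = os.path.join(ssh_dir, "authorized_keys")
    with open(auth_keys, "a") as f:
        f.write(public_key.rstrip() + "\n")    # one key per line
    os.chmod(auth_keys, 0o600)
    return auth_keys

# Demonstrate against a temporary directory instead of a real home.
home = tempfile.mkdtemp()
path = inject_public_key(home, "ssh-rsa AAAAB3... user@example")
print(open(path).read())
```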

Page 10: OpenStack: Inside Out


Instance types and corresponding disk areas

■ The following is the list of instance types created by default.

- The root disk is extended to the specified size after being copied from the template image (except m1.tiny).

■ The admin users can define new instance types.

- The following is an example of using a temp disk and a swap disk.

- Since these disks are discarded when the instance is destroyed, persistent data should be stored in different places, typically in block volumes.

  Instance type (flavor)  vCPU  Memory  root disk  temp disk  swap disk
  m1.tiny                 1     512MB   0GB        0          0
  m1.small                1     2GB     20GB       0          0
  m1.medium               2     4GB     40GB       0          0
  m1.large                4     8GB     80GB       0          0
  m1.xlarge               8     16GB    160GB      0          0

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    252:0    0  20G  0 disk             <- root disk
└─vda1 252:1    0  20G  0 part /
vdb    252:16   0   5G  0 disk /mnt        <- temp disk
vdc    252:32   0   1G  0 disk [SWAP]      <- swap disk

Page 11: OpenStack: Inside Out


Snapshot of VM instances

■ By taking a snapshot of a running instance, you can copy the root disk and reuse it as a template image.

[Diagram: launch an instance from a template image; create an instance snapshot, which is a copy of the root disk; launch a new instance from the snapshot]

Page 12: OpenStack: Inside Out


Block volume as persistent data store

[Diagram: block volume life cycle]
(1) Create a new block volume.
(2) Attach it to a running instance to store user data.
(3) Create a snapshot.
(4) Create a new block volume from the snapshot. It can be re-attached to another instance.

■ Block volumes remain after a VM instance is destroyed, so they can be used as a persistent data store.

Page 13: OpenStack: Inside Out


Boot from block volume

[Diagram: create a bootable block volume by copying a template image, boot an instance directly from the volume, create a snapshot of it, and boot another instance from a volume created from the snapshot]

■ It's possible to copy a template image to a new block volume to create a bootable block volume.

- When booting from a block volume, the contents of the guest OS remain even after the instance is destroyed.

- You can create a snapshot of the bootable volume, and create a new bootable volume from it when launching a new instance.


Page 14: OpenStack: Inside Out


Major components of OpenStack

Page 15: OpenStack: Inside Out


Major components of OpenStack

■ OpenStack is a set of component modules for various services and functions.

- Swift : Object store

● Amazon S3-like object storage

- Nova : Virtual machine life cycle management

- Glance : Virtual machine image catalog

● Actual images are stored in the backend storage, typically in Swift.

- Cinder : Virtual disk volume

● Amazon EBS-like volume management

- Keystone : Centralized authentication and service catalog system

- Neutron : Virtual network management API (formerly known as Quantum)

● Actual network provisioning is delegated to external plugin modules.

- Horizon : Web based self-service portal

Page 16: OpenStack: Inside Out


Modules work together through REST API

[Diagram: Horizon, Nova Scheduler, Neutron, Glance, Cinder, Keystone and Swift on the management network, together with the network node and multiple Nova Compute nodes; QPID/MySQL serve as the message queue and backend RDB. Glance holds the VM template images as disk images, the compute nodes retrieve template images and start virtual machines, Cinder attaches virtual disk volumes (iSCSI), Keystone provides the authentication service, and a client PC on the public network requests creation of virtual machines and virtual networks.]

■ Modules work together through REST API calls and the message queue.

- Operations can be automated with external programs through REST API.
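As a sketch of such automation: a client first POSTs its credentials to Keystone's /tokens resource, then carries the returned token in the X-Auth-Token header of every subsequent API call. The code below only constructs the two HTTP requests (there is no live cloud to send them to); the Keystone URL is the one from this document's examples, and the Nova endpoint with its `<tenant_id>` part is a placeholder.

```python
# Sketch of automating OpenStack through REST: build a Keystone v2.0 token
# request and an authenticated Nova call. Requests are constructed but not sent.
import json
import urllib.request

def build_token_request(auth_url, username, password, tenant):
    # Keystone v2.0 password authentication payload.
    body = {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": username,
                                             "password": password}}}
    return urllib.request.Request(
        auth_url.rstrip("/") + "/tokens",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"})

def build_api_request(endpoint, path, token_id):
    # Every subsequent API call carries the token in the X-Auth-Token header.
    return urllib.request.Request(endpoint.rstrip("/") + path,
                                  headers={"X-Auth-Token": token_id})

req = build_token_request("http://172.16.1.11:5000/v2.0",
                          "demo_user", "passw0rd", "demo")
print(req.full_url)                    # http://172.16.1.11:5000/v2.0/tokens

api = build_api_request("http://172.16.1.11:8774/v2/<tenant_id>",
                        "/servers", "TOKEN")
print(api.get_header("X-auth-token"))  # TOKEN
```

In a real script, `urllib.request.urlopen(req)` would return the token (and the service catalog with the Nova endpoint URL), which is then fed into the second request.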

Page 17: OpenStack: Inside Out


API request call

■ There are two cases in which API requests are issued.

- When the end-user sends a request call directly or via the Horizon dashboard.

- When one component sends a request call to another component.

[Diagram: Horizon, Keystone, Nova, Neutron, Cinder and Glance call each other's APIs: Nova downloads template images from Glance, attaches block volumes through Cinder, and connects instances to virtual switches through Neutron. MySQL holds the infrastructure data and QPID delivers messages to the agents. End-users reach Horizon via web access.]

Page 18: OpenStack: Inside Out


User authentication for API requests

■ You need to be authenticated before sending requests to APIs.

- End-users/components obtain the "token" for the API operation from Keystone before sending requests to APIs. (Each component has its user ID representing it in Keystone.)

- When obtaining the token, URL for the target API is also retrieved from Keystone. End-users need to know only the URL for Keystone API in advance.

[Diagram: all components (Horizon, Nova, Neutron, Cinder, Glance) rely on Keystone for user authentication]

Page 19: OpenStack: Inside Out


Token mechanism of Keystone authentication

■ Since OpenStack clients make many API calls to various components, authenticating with ID/password for every call is undesirable in terms of security and performance.

■ Instead, the clients obtain a "token" as a "license" for API calls in advance, and send the token ID to the component they use.

- The component receiving the request validates the token ID with Keystone before accepting the request.

- The generated token is stored in Keystone for a defined period (default: 24 hours). Clients can reuse it until it expires, so they don't need to obtain a new token for each request call.

[Diagram: token flow between the client and the Keystone server]
(1) The client obtains a token, authenticating with ID/password; the generated token is stored in Keystone and the token ID is sent back to the client.
(2) The client sends a request with the token ID to the target component.
(3) The component validates the token ID with Keystone and checks the client's role.
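The client-side reuse described above can be sketched as a small token cache: authenticate once, keep the token ID, and only re-authenticate after the lifetime (24 hours by default) has passed. This is an illustrative model, not Keystone client code; the clock is injected so the expiry behavior can be demonstrated without waiting.

```python
# Sketch of client-side token reuse: one auth call, then cached until expiry.
import time
import uuid

TOKEN_LIFETIME = 24 * 3600   # seconds; Keystone's default token lifetime

class TokenCache:
    def __init__(self, authenticate, now=time.time):
        self.authenticate = authenticate   # callable doing the ID/password auth
        self.now = now                     # injectable clock for testing
        self.token_id = None
        self.expires_at = 0.0

    def get_token(self):
        if self.token_id is None or self.now() >= self.expires_at:
            self.token_id = self.authenticate()       # one real auth call
            self.expires_at = self.now() + TOKEN_LIFETIME
        return self.token_id                          # reused until expiry

# Fake authentication backend that counts how often it is called.
calls = []
def fake_auth():
    calls.append(1)
    return uuid.uuid4().hex

clock = [0.0]
cache = TokenCache(fake_auth, now=lambda: clock[0])
t1 = cache.get_token()
t2 = cache.get_token()          # same token ID, no new auth call
clock[0] = TOKEN_LIFETIME + 1   # move the clock past expiry
t3 = cache.get_token()          # re-authenticated: a fresh token ID
print(t1 == t2, t1 == t3, len(calls))   # True False 2
```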

Page 20: OpenStack: Inside Out


Command operations of Keystone (1)

■ When using the standard command line tools of OpenStack, you specify the user name, password, tenant and API URL with environment variables.

- The Keystone API has different URLs (port numbers) for admin users and general users.

- You can also specify them with command line options.

- The following is an example of a keystone operation using the default admin user "admin".

# cat keystonerc_admin      <- This file is generated by packstack under /root.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=714f1ab569a64a3b
export OS_AUTH_URL=http://172.16.1.11:35357/v2.0/   <- Port 35357 is used for admin users.
export PS1='[\u@\h \W(keystone_admin)]\$ '

# . keystonerc_admin
# keystone user-list
+----------------------------------+------------+---------+-------------------+
|                id                |    name    | enabled |       email       |
+----------------------------------+------------+---------+-------------------+
| 589a800d70534655bfade5504958afd6 | admin      | True    | [email protected]    |
| 3c45a1f5a88d4c1d8fb07b51ed72cd55 | cinder     | True    | cinder@localhost  |
| f23d88041e5245ee8cc8b0a5c3ec3f6c | demo_admin | True    |                   |
| 44be5165fdf64bd5907d07aa1aaa5dab | demo_user  | True    |                   |
| cd75770810634ed3a09d92b61aacf0a7 | glance     | True    | glance@localhost  |
| a38561ed906e48468cf1759918735c53 | nova       | True    | nova@localhost    |
| 157c8846521846e0abdd16895dc8f024 | quantum    | True    | quantum@localhost |
+----------------------------------+------------+---------+-------------------+

Page 21: OpenStack: Inside Out


Command operations of Keystone (2)

■ The following is an example of showing the registered API services and their URLs.

- Command line tools for other components internally use this mechanism to retrieve the API URL of the target component.

■ Each command line tool provides the "help" sub command to show the list of sub commands and their details.

# keystone service-list
+----------------------------------+----------+----------+----------------------------+
|                id                |   name   |   type   |        description         |
+----------------------------------+----------+----------+----------------------------+
| 5ea55cbee90546d1abace7f71808ad73 | cinder   | volume   | Cinder Service             |
| e92e73a765be4beca9f12f5f5d9943e0 | glance   | image    | Openstack Image Service    |
| 3631d835081344eb873f1d0d5057314d | keystone | identity | OpenStack Identity Service |
| 8db624ad713e440492aeccac6ab70a90 | nova     | compute  | Openstack Compute Service  |
| e9f02d3803ab44f1a369602010864a34 | nova_ec2 | ec2      | EC2 Service                |
| 5889a1e691584e539aa121ab31194cca | quantum  | network  | Quantum Networking Service |
+----------------------------------+----------+----------+----------------------------+

# keystone endpoint-list
+----------------------------------+-----------+------------------------------------------+----------------------------------+
|                id                |   region  |                 publicurl                |            service_id            |
+----------------------------------+-----------+------------------------------------------+----------------------------------+
| 0e96a30d9ce742ecb0bf123eebf84ac0 | RegionOne | http://172.16.1.11:8774/v2/%(tenant_id)s | 8db624ad713e440492aeccac6ab70a90 |
| 928a38f18cc54040a0aa53bd3da99390 | RegionOne | http://172.16.1.11:9696/                 | 5889a1e691584e539aa121ab31194cca |
| d46cebe4806b43c4b48499285713ac7a | RegionOne | http://172.16.1.11:9292                  | e92e73a765be4beca9f12f5f5d9943e0 |
| ebdd4e61571945b7801554908caf5bae | RegionOne | http://172.16.1.11:8776/v1/%(tenant_id)s | 5ea55cbee90546d1abace7f71808ad73 |
| ebec661dd65b4d4bb12fe67c25b2c77a | RegionOne | http://172.16.1.11:5000/v2.0             | 3631d835081344eb873f1d0d5057314d |
| f569475b6d364a04837af6d6a577befe | RegionOne | http://172.16.1.11:8773/services/Cloud   | e9f02d3803ab44f1a369602010864a34 |
+----------------------------------+-----------+------------------------------------------+----------------------------------+

# keystone help            <- Shows the list of all sub commands.
# keystone help user-list  <- Shows the details of the "user-list" sub command.

Page 22: OpenStack: Inside Out



Template image registration with Glance (1)

■ You can register new template images with Glance. The registered images become available from Nova.

Page 23: OpenStack: Inside Out


Template image registration with Glance (2)

■ The following is an example of registering a new template image as the general user "demo_user". The image is downloaded from the specified URL.

# cat keystonerc_demo_user      <- This file needs to be created manually.
export OS_USERNAME=demo_user
export OS_TENANT_NAME=demo
export OS_PASSWORD=passw0rd
export OS_AUTH_URL=http://172.16.1.11:5000/v2.0/    <- Port 5000 is used for general users.
export PS1='[\u@\h \W(keystone_demouser)]\$ '

# . keystonerc_demo_user
# glance image-create --name "Fedora19" \
    --disk-format qcow2 --container-format bare --is-public true \
    --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2

# glance image-list
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | qcow2       | bare             | 237371392 | active |
+--------------------------------------+----------+-------------+------------------+-----------+--------+

Page 24: OpenStack: Inside Out


Virtual network operations with Neutron

■ Through the Neutron API, end-users can create virtual networks dedicated to their own tenants.

- Details will be explained in "Configuration steps of virtual network."

# . keystonerc_demo_user
# quantum net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 843a1586-6082-4e9f-950f-d44daa83358c | private01   | 9888df89-a17d-4f4c-b427-f28cffe8fed2 192.168.101.0/24 |
| d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02   | 23b26d98-2277-4fb5-8895-3f42cde7e1fd 192.168.102.0/24 |
| d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | 1b8604a4-f39d-49de-a97c-3e40117a7516 192.168.199.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+

The command name "quantum" has been replaced with "neutron" in Havana release.


Page 25: OpenStack: Inside Out


VM instance creation with Nova

■ When Nova receives an instance creation request, it communicates with Glance and Neutron through their APIs.

- Through the Glance API, it downloads the template image to the compute node.

- Through the Neutron API, it attaches the launched instance to the virtual network.

[Diagram: Nova downloads template images through the Glance API and connects instances to virtual switches through the Neutron API; Keystone authenticates the requests]

Page 26: OpenStack: Inside Out


Command operations to launch an instance (1)

■ The following shows how the end-user checks the necessary information before launching an instance.

# . keystonerc_demo_user
# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 31:8c:0e:43:67:40:f6:17:a3:f8:3f:d5:73:8e:d0:30 |
+-------+-------------------------------------------------+

# nova image-list
+--------------------------------------+----------+--------+--------+
| ID                                   | Name     | Status | Server |
+--------------------------------------+----------+--------+--------+
| 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | ACTIVE |        |
+--------------------------------------+----------+--------+--------+

# nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+

# nova net-list
+--------------------------------------+-------------+------+
| ID                                   | Label       | CIDR |
+--------------------------------------+-------------+------+
| 843a1586-6082-4e9f-950f-d44daa83358c | private01   | None |
| d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02   | None |
| d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | None |
+--------------------------------------+-------------+------+

Note: nova retrieves the image list through the Glance API, and the network list through the Neutron API.

Page 27: OpenStack: Inside Out


Command operations to launch an instance (2)

■ The following launches an instance using the information from the previous page.

# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c vm01
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | BUILD                                |
| updated                     | 2013-11-22T06:22:52Z                 |
| OS-EXT-STS:task_state       | scheduling                           |
| key_name                    | mykey                                |
| image                       | Fedora19                             |
| hostId                      |                                      |
| OS-EXT-STS:vm_state         | building                             |
| flavor                      | m1.small                             |
| id                          | f40c9b76-3891-4a5f-a62c-87021ba277ce |
| security_groups             | [{u'name': u'default'}]              |
| user_id                     | 2e57cd295e3f4659b151dd80f3a73468     |
| name                        | vm01                                 |
| adminPass                   | 5sUFyKhgovV6                         |
| tenant_id                   | 555b49dc8b6e4d92aa74103bfb656e70     |
| created                     | 2013-11-22T06:22:51Z                 |
| OS-DCF:diskConfig           | MANUAL                               |
| metadata                    | {}                                   |
...snip...
+-----------------------------+--------------------------------------+

# nova list
+--------------------------------------+------+--------+-------------------------+
| ID                                   | Name | Status | Networks                |
+--------------------------------------+------+--------+-------------------------+
| f40c9b76-3891-4a5f-a62c-87021ba277ce | vm01 | ACTIVE | private01=192.168.101.3 |
+--------------------------------------+------+--------+-------------------------+

Page 28: OpenStack: Inside Out


Command operations to launch an instance (3)

■ You can specify a file with "--user-data" to use a customization script (user data).

- The following is an example of launching an instance with a customization script, and adding a floating IP.

# cat hello.txt
#!/bin/sh
echo 'Hello, World!' > /etc/motd

# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --user-data hello.txt vm01

# nova floating-ip-list
+--------------+-------------+----------+-------------+
| Ip           | Instance Id | Fixed Ip | Pool        |
+--------------+-------------+----------+-------------+
| 172.16.1.101 | None        | None     | ext-network |
| 172.16.1.102 | None        | None     | ext-network |
| 172.16.1.103 | None        | None     | ext-network |
| 172.16.1.104 | None        | None     | ext-network |
| 172.16.1.105 | None        | None     | ext-network |
+--------------+-------------+----------+-------------+

# nova add-floating-ip vm01 172.16.1.101

# ssh -i ~/mykey.pem fedora@172.16.1.101
The authenticity of host '172.16.1.101 (172.16.1.101)' can't be established.
RSA key fingerprint is b7:24:54:63:1f:02:33:4f:81:a7:47:90:c1:1b:78:5a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.1.101' (RSA) to the list of known hosts.
Hello, World!
[fedora@vm01 ~]$

Page 29: OpenStack: Inside Out


Floating IP association with Neutron API

■ When adding a floating IP to an instance with multiple NICs, you need to use the Neutron API to specify the NIC port to associate.

- After identifying the port ID which corresponds to the private IP, associate the floating IP with the port ID.

# nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default \
    --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --nic net-id=d3c763f0-ebf0-4717-b3fc-cda69bcd1957 \
    vm01

# nova list
+--------------------------------------+------+--------+--------------------------------------------------+
| ID                                   | Name | Status | Networks                                         |
+--------------------------------------+------+--------+--------------------------------------------------+
| e8d0fa19-130f-4502-acfe-132962134846 | vm01 | ACTIVE | private01=192.168.101.3; private02=192.168.102.3 |
+--------------------------------------+------+--------+--------------------------------------------------+

# quantum port-list
+--------------------------------------+------+-------------------+------------------------------------+
| id                                   | name | mac_address       | fixed_ips                          |
+--------------------------------------+------+-------------------+------------------------------------+
| 10c3cd17-78f5-443f-952e-1e3e427e477f |      | fa:16:3e:37:7b:a6 | ... "ip_address": "192.168.102.3"} |
| d0057651-e1e4-434c-a81d-c950b9c06333 |      | fa:16:3e:e6:d9:4c | ... "ip_address": "192.168.101.3"} |
+--------------------------------------+------+-------------------+------------------------------------+

# quantum floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 06d24f23-c2cc-471f-a4e6-59cf00578141 |                  | 172.16.1.101        |         |
| 89b49a78-8fd7-461b-8fe2-fba4a341c8a2 |                  | 172.16.1.102        |         |
+--------------------------------------+------------------+---------------------+---------+

# quantum floatingip-associate 06d24f23-c2cc-471f-a4e6-59cf00578141 d0057651-e1e4-434c-a81d-c950b9c06333

Page 30: OpenStack: Inside Out


Operations for key pairs and security groups

■ Security related operations such as creating/registering key pairs and defining security groups can be done through the Nova API.

- The following creates a new key pair "key01" and saves the private (secret) key in "~/.ssh/key01.pem".

- The following registers the public key of an existing key pair as "key02".

- The following creates a new security group "group01" and allows access to TCP port 22.

■ Note that since security groups are now under the control of Neutron, it is useful to also know the commands to configure them through the quantum (neutron) API.

# nova keypair-add key01 > ~/.ssh/key01.pem
# chmod 600 ~/.ssh/key01.pem

# nova keypair-add --pub-key ~/.ssh/id_rsa.pub key02

# nova secgroup-create group01 "My security group."
# nova secgroup-add-rule group01 tcp 22 22 0.0.0.0/0

# quantum security-group-create group01 --description "My security group."
# quantum security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --remote-ip-prefix "0.0.0.0/0" group01
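What a rule like "tcp 22 22 0.0.0.0/0" evaluates can be sketched as follows: an inbound packet is accepted only if some rule in the group matches its protocol, destination port, and source address. This is an illustrative model only; the actual enforcement is implemented with packet filter rules on the compute node.

```python
# Illustrative model of security group rule matching (not OpenStack code).
import ipaddress

class Rule:
    def __init__(self, protocol, port_min, port_max, remote_prefix):
        self.protocol = protocol
        self.port_min = port_min
        self.port_max = port_max
        self.network = ipaddress.ip_network(remote_prefix)

    def matches(self, protocol, port, src_ip):
        return (protocol == self.protocol
                and self.port_min <= port <= self.port_max
                and ipaddress.ip_address(src_ip) in self.network)

def allowed(rules, protocol, port, src_ip):
    # Accept the packet if any rule in the group matches it.
    return any(r.matches(protocol, port, src_ip) for r in rules)

group01 = [Rule("tcp", 22, 22, "0.0.0.0/0")]   # the rule created above

print(allowed(group01, "tcp", 22, "203.0.113.5"))   # True  (SSH permitted)
print(allowed(group01, "tcp", 80, "203.0.113.5"))   # False (no rule for HTTP)
```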

Page 31: OpenStack: Inside Out



Block volume creation with Cinder

■ Block volumes can be created/deleted/snapshotted through the Cinder API.

- When attaching/detaching block volumes to/from running instances, you need to send a request to the Nova API. Nova then works together with Cinder through API calls.

Page 32: OpenStack: Inside Out


Command operations for block volumes

■ The following is an example of creating a 5GB block volume and attaching/detaching it to/from a running instance.

# cinder create --display-name volume01 5

# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available | volume01     | 5    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

# nova volume-attach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |  <- The device name seen from the guest OS.
| serverId | f40c9b76-3891-4a5f-a62c-87021ba277ce |
| id       | 78b4d23b-3b57-4a38-9f6e-10e5048170ef |
| volumeId | 78b4d23b-3b57-4a38-9f6e-10e5048170ef |
+----------+--------------------------------------+

# nova volume-detach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef

Page 33: OpenStack: Inside Out


Creating bootable volumes

■ You can create a bootable block volume by creating a new volume from a template image.

- Using the bootable volume, you can boot an instance directly from the block volume.

- The following is an example of creating a bootable volume from an existing template image and launching an instance with it. (The "--image" option is ignored in the boot subcommand, but you need to specify one as a dummy entry.)

# cinder create --image-id 702d0c4e-b06c-4c15-85e5-9bb612eb6414 --display-name Fedora19-bootvol 5

# cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available | volume01         | 5    | None        | false    |             |
| bdde9405-8be7-48d5-a879-35e37c97512f | available | Fedora19-bootvol | 5    | None        | true     |             |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+

# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --block_device_mapping vda=bdde9405-8be7-48d5-a879-35e37c97512f:::0 vm02

# nova volume-list
+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
| ID                                   | Status    | Display Name     | Size | Volume Type | Attached to                          |
+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
| 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available | volume01         | 5    | None        |                                      |
| bdde9405-8be7-48d5-a879-35e37c97512f | in-use    | Fedora19-bootvol | 5    | None        | b4cb7edd-317f-44e9-97db-5a04c41a4510 |
+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+

Notes: "--image-id" specifies the template image ID. In "--block_device_mapping vda=<block volume ID>:::<flag>", the trailing flag indicates whether to delete the volume after destroying the instance (1=yes).

Page 34: OpenStack: Inside Out


Internal services of Nova and Cinder

Page 35: OpenStack: Inside Out


Internal services of Nova

[Diagram: internal services of Nova]

- Controller node: Nova API (provides the REST API), Nova Scheduler (chooses the compute node to launch the VM), Nova Conductor (proxy service for DB access, retrieving and updating resource information), and the database. The services communicate via the messaging server.
- Compute node: Nova Compute launches VMs through a compute driver for the specific hypervisor to be used (e.g. Libvirt).
- The template image downloaded from Glance is cached for a defined period as a qcow2 base image under "/var/lib/nova/instances/_base". Each VM instance uses a qcow2 overlay image under "/var/lib/nova/instances/<ID>" on top of the cached base image.
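The image-caching behavior above can be sketched as follows. This is an illustrative Python model, not Nova's actual code; the function names, image ID, and file layout are invented for the example.

```python
import os
import tempfile

def get_base_image(cache_dir, image_id, download):
    """Return the cached base image path, downloading from Glance only on a cache miss."""
    path = os.path.join(cache_dir, image_id)
    if not os.path.exists(path):          # cache miss: fetch the template image
        with open(path, "wb") as f:
            f.write(download(image_id))
    return path                           # per-instance overlays are created on top of this

# Simulated Glance download that records how often it is called.
calls = []
def fake_download(image_id):
    calls.append(image_id)
    return b"qcow2-template-data"

cache = tempfile.mkdtemp()                # stands in for /var/lib/nova/instances/_base
path1 = get_base_image(cache, "img-0001", fake_download)
path2 = get_base_image(cache, "img-0001", fake_download)   # served from the cache
```

The second call returns the same path without contacting Glance again, which is the point of the `_base` cache.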

Page 36: OpenStack: Inside Out


How the messaging server works

■ The internal services and agents of one component (such as Nova) communicate through the messaging server.

- The messaging server provides "topics" as channels of communication. A sender puts a message into a specific topic, and receivers pick up messages from the topics they have subscribed to.

- The messages in topics have a flag to specify the delivery model such as "all subscribers should receive" or "only one subscriber should receive."

- Since multiple senders can put messages into the same topic, this realizes M:N asynchronous communication.
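The two delivery models can be sketched with a toy in-memory bus. This is an illustrative model only, not the actual AMQP/oslo.messaging implementation; the class, topic, and worker names are invented.

```python
from collections import defaultdict, deque
from itertools import cycle

class MessageBus:
    """Toy topic bus: fanout delivers to all subscribers, otherwise to exactly one."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> [(name, inbox), ...]
        self._rr = {}                          # topic -> round-robin iterator

    def subscribe(self, topic, name):
        inbox = deque()
        self.subscribers[topic].append((name, inbox))
        self._rr[topic] = cycle(self.subscribers[topic])   # reset the rotation
        return inbox

    def publish(self, topic, msg, fanout=False):
        if fanout:                             # "all subscribers should receive"
            for _, inbox in self.subscribers[topic]:
                inbox.append(msg)
        else:                                  # "only one subscriber should receive"
            _, inbox = next(self._rr[topic])
            inbox.append(msg)

bus = MessageBus()
worker1 = bus.subscribe("compute", "compute-1")
worker2 = bus.subscribe("compute", "compute-2")
bus.publish("compute", "run_instance")                # goes to exactly one worker
bus.publish("compute", "reload_config", fanout=True)  # goes to every worker
```

Work items ("run_instance") land on a single worker, while broadcasts ("reload_config") reach all subscribers of the topic.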

[Diagram: services put messages into topics A, B, ... on the messaging server; the services which have subscribed to a topic receive its messages.]

Page 37: OpenStack: Inside Out


Features of the qcow2 disk image

■ qcow2 is a disk image format designed for virtual machines, with the following features.

■ Dynamic block allocation

- The real (physical) file size is smaller than its logical image size. The file grows as data is added. It's possible to extend the logical size, too.

■ Overlay mechanism

- You can add an overlay file on top of the backing image. The overlay file contains only the additional changes from the backing image.

- The backing image can be shared among multiple overlay files. This is useful to reduce the physical disk usage when a lot of virtual machines are launched from the same template image.

■ Multiple snapshots

- By taking snapshots of the image, you can reproduce the previous contents of the image, or create a new image from the snapshot.
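The overlay mechanism boils down to copy-on-write block lookup: reads fall through to the backing image unless the overlay has written that block. The sketch below is a conceptual model only, not the real qcow2 on-disk format.

```python
class Image:
    """Toy copy-on-write image: reads fall through to the backing image."""
    def __init__(self, backing=None):
        self.backing = backing
        self.blocks = {}                 # only locally written blocks are stored

    def write(self, block, data):
        self.blocks[block] = data        # writes never modify the backing image

    def read(self, block):
        if block in self.blocks:
            return self.blocks[block]
        if self.backing is not None:
            return self.backing.read(block)
        return b"\x00"                   # unallocated block reads as zeros

base = Image()                           # plays the role of the template image
base.write(0, b"template-os")
overlay = Image(backing=base)            # per-instance overlay, initially empty
overlay.write(1, b"instance-data")
```

The overlay stores only its own changes (one block here), while the template stays untouched — exactly why many instances can share one base image.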

Page 38: OpenStack: Inside Out


# qemu-img create -f qcow2 baseimage.qcow2 5G
Formatting 'baseimage.qcow2', fmt=qcow2 size=5368709120 encryption=off cluster_size=65536 lazy_refcounts=off

# qemu-img create -f qcow2 -b baseimage.qcow2 layerimage.qcow2
Formatting 'layerimage.qcow2', fmt=qcow2 size=5368709120 backing_file='baseimage.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off

# qemu-img info layerimage.qcow2
image: layerimage.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: baseimage.qcow2

# qemu-img snapshot -c snap01 layerimage.qcow2
# qemu-img snapshot -l layerimage.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap01                    0 2013-11-22 17:08:02   00:00:00.000

# qemu-img convert -f qcow2 -O qcow2 -s snap01 layerimage.qcow2 copiedimage.qcow2

Operations on qcow2 disk images

■ qemu-img is a command-line tool to manipulate qcow2 images.

Reference: https://access.redhat.com/site/documentation/ja-JP/Red_Hat_Enterprise_Linux/6/html-single/Virtualization_Administration_Guide/index.html#sect-Virtualization-Tips_and_tricks-Using_qemu_img

Creating an image with 5GB logical size.

Creating an overlay file with baseimage.qcow2 as a backing image.

Creating a snapshot.

Creating a new image from a snapshot.

Page 39: OpenStack: Inside Out


Public key injection mechanism

■ Nova Compute injects the public key into "/root/.ssh/authorized_keys" of the local disk image before launching the instance.

■ Cloud-Init can also be used to set up public key authentication at boot time, as it can retrieve the public key through the meta-data service(*).

- Because allowing root login is undesirable in many cases, it is better to configure Cloud-Init to create a general user and set up public key authentication for that user.

(*) In particular, when booting from a block volume, Nova Compute fails to inject the public key. Use of Cloud-Init is mandatory in this case.

$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5W2IynhVezp+DpN11xdsY/8NOqeF8r7eYqVteeWZSBfnYhKn8D85JmByBQnJ7HrJIrdMvfTYwWxi+swfFlryG3A+oSll0tT71FLAWnAYz26ML3HccyJ7E2bD66BSditbDITKH3V66oN9c3rIEXZYQ3A+GEiA1cFD++R0FNKxyBOkjduycvksB5Nl9xb3k6z4uoZ7JQD5J14qnooM55Blmn2CC2/2KlapxMi0tgSdkdfnSSxbYvlBztGiF3M4ey7kyuWwhE2iPBwkV/OhANl3nwHidcNdBrAGC3u78aTtUEwZtNUqrevVKM/yUfRRyPRNivuGOkvjTDUL/9BGquBX9Q== enakai@kakinoha

Retrieving the public key from meta-data.
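A cloud-config user-data document like the one built below creates a general user and installs the injected key, following the recommendation above. This is a sketch assuming cloud-init's `users:` / `ssh_authorized_keys` keys; "cloud-user" and the key string are placeholder example values.

```python
def make_user_data(username, public_key):
    """Build a #cloud-config document creating a sudo-capable general user."""
    return "\n".join([
        "#cloud-config",
        "users:",
        "  - name: %s" % username,
        "    sudo: ALL=(ALL) NOPASSWD:ALL",
        "    ssh_authorized_keys:",
        "      - %s" % public_key,
    ])

# "cloud-user" and the key below are placeholders for this example.
user_data = make_user_data("cloud-user", "ssh-rsa AAAAB3Nza-example-key user@host")
```

The resulting text would be passed as user-data at boot (e.g. via "nova boot --user-data").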

Page 40: OpenStack: Inside Out


Block volume use cases and corresponding APIs

[Diagram: block volume use cases]

(1) Create a new block volume (optionally from a template image).
(2) Attach it to a running instance to store user data. It can later be re-attached to another instance.
(3) Create a snapshot of the volume.
(4) Create a new block volume from the snapshot.

■ Cinder API
- volume create/delete/list/show (create from snapshot or image)
- snapshot create/delete/list/show

■ Nova API
- volume attach/detach

Page 41: OpenStack: Inside Out


How Nova and Cinder work together

[Diagram: Cinder creates LUNs on the storage box; Nova Compute attaches them via the iSCSI software initiator and the iSCSI target, and Linux KVM presents them to the VM instance as virtual disks (e.g. /dev/vdb backed by /dev/sdX on the host).]

■ In a typical configuration, block volumes are created as LUNs in iSCSI storage boxes. Cinder operates on the management interface of the storage through the corresponding driver.

■ Nova Compute attaches the LUN to the host Linux using the software initiator; it is then attached to the VM instance through the KVM hypervisor.

Page 42: OpenStack: Inside Out


Internal services of Cinder

■ Volume drivers handle the management interface of the corresponding storage.

- When using multiple types of storage, Cinder Scheduler chooses the driver to be used based on the requested storage type.
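The scheduling decision can be sketched as a filter over candidate backends. This is an illustrative model only — the backend names and the capacity-based tie-break are invented for the example and do not reproduce Cinder Scheduler's actual filter chain.

```python
# Hypothetical backend list; a real deployment reports this via the drivers.
backends = [
    {"name": "lvm-1", "type": "lvm", "free_gb": 120},
    {"name": "lvm-2", "type": "lvm", "free_gb": 300},
    {"name": "nfs-1", "type": "nfs", "free_gb": 500},
]

def schedule(volume_type, size_gb):
    """Pick the backend of the requested type with the most free capacity."""
    candidates = [b for b in backends
                  if b["type"] == volume_type and b["free_gb"] >= size_gb]
    return max(candidates, key=lambda b: b["free_gb"])["name"]
```

A request for a 100GB "lvm" volume would land on the LVM backend with the most free space.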

[Diagram: on the controller node, Cinder API provides the REST API, Cinder Scheduler chooses an appropriate volume driver, and Cinder-Volume creates LUNs on the storage box through the driver for the specific type of storage. Volume information is stored in the database, Nova Compute connects to the LUNs over iSCSI, and the services communicate via the messaging server.]

Page 43: OpenStack: Inside Out


Using the LVM driver

■ Cinder provides the LVM driver as a reference implementation which uses Linux LVM instead of external storage boxes.

- Logical volumes are created in the volume group "cinder-volumes" and exported as iSCSI LUNs through the software iSCSI target (tgtd). Nova Compute attaches them with the software iSCSI initiator (/dev/sdX) and presents them to VM instances (e.g. /dev/vdb) via Linux KVM.

- The snapshot feature is implemented with LVM snapshots, where the delta volume has the same size as the base volume.

Page 44: OpenStack: Inside Out


Using the NFS driver

■ Cinder also provides the NFS driver, which uses an NFS server as the storage backend.

- The driver simply mounts the NFS-exported directory and creates disk image files in it. Compute nodes access the image files through their own NFS mounts.

[Diagram: Cinder and each compute node NFS-mount the same exported directory; virtual disks are image files on the NFS server, attached to VM instances (e.g. /dev/vdb) via Linux KVM.]

Page 45: OpenStack: Inside Out


Using the GlusterFS driver

■ There is also a driver for the GlusterFS distributed filesystem.

- Currently it uses the FUSE mount mechanism. This will be replaced with a more optimized mechanism (libgfapi) which bypasses the FUSE layer.

[Diagram: Cinder and each compute node FUSE-mount the GlusterFS cluster; virtual disks are image files on the cluster, attached to VM instances (e.g. /dev/vdb) via Linux KVM.]

Page 46: OpenStack: Inside Out


Architecture overview of Neutron

Page 47: OpenStack: Inside Out


Logical view of Neutron's virtual network

■ Each tenant has its own virtual router which works like "the broadband router in your home network."

- Tenant users add virtual switches behind the router and assign private subnet addresses to them. It's possible to use overlapping subnets with other tenants.

■ When launching an instance, the end-user selects virtual switches to connect it.

- The number of virtual NICs of the instance corresponds to the number of switches it connects to. Private IPs are assigned via DHCP.

[Diagram: virtual routers for tenants A and B connect the external network to per-tenant virtual switches (192.168.101.0/24 and 192.168.102.0/24).]

Page 48: OpenStack: Inside Out


Plugin architecture of Neutron

■ The actual work of creating the virtual network is done by plugin agents.

- There are various plugins for Neutron, including commercial products from third-party vendors.

- OpenStack provides the "LinuxBridge plugin" and the "Open vSwitch plugin" as standard/reference implementations.

[Diagram: the Neutron service on the controller node provides the REST API. On the network controller, the L3 Agent creates virtual routers, the DHCP Agent assigns private IP addresses, and an L2 Agent creates virtual L2 switches; each compute node also runs an L2 Agent creating virtual L2 switches. All communication goes via the messaging server.]

Page 49: OpenStack: Inside Out


Network configuration with the standard plugins

■ The following shows the typical configuration using the LinuxBridge plugin or the Open vSwitch plugin.

- The L3 Agent on the network node provides the virtual router function connecting the private and public networks. ("eth0" of each node is used for accessing the host Linux, not for VM instance communication.)

- Multiple network nodes are not currently supported; a scalable network feature is under development.

[Diagram: each compute node (eth0, eth1) runs an L2 Agent that creates virtual L2 switches for its VMs. The network node (eth0, eth1, eth2) runs the L3 Agent (virtual router function), an L2 Agent (creating virtual L2 switches), and the DHCP Agent (DHCP function for the private networks). eth1 carries the private network, and the network node additionally connects to the public network.]

Page 50: OpenStack: Inside Out


Internal architecture of LinuxBridge plugin

Page 51: OpenStack: Inside Out


Internal architecture of the LinuxBridge plugin

■ This section describes how the LinuxBridge plugin implements the virtual network in the drawing below as a concrete example.

[Diagram: a virtual router connects the external network to two virtual L2 switches, private01 and private02, with instances vm01, vm02, and vm03 attached.]

Page 52: OpenStack: Inside Out


Configuration inside the compute node

■ A Linux bridge is created for each virtual switch. Outside the compute node, the network traffic of each switch is separated with VLANs.

[Diagram: the L2 Agent creates one Linux bridge per virtual L2 switch (brqxxxx for private01, brqyyyy for private02) and a VLAN sub-interface on eth1 for each (eth1.101, eth1.102), so VLAN101 and VLAN102 separate the traffic on the physical L2 switch for the private network. Nova Compute connects the VM NICs (vm01 eth0, vm02 eth0/eth1, vm03 eth0) to the bridges; their IP addresses are assigned from dnsmasq on the network node.]

Page 53: OpenStack: Inside Out


Configuration inside the network node

[Diagram: the L2 Agent creates the same per-VLAN bridges (brqxxxx/brqyyyy with eth1.101/eth1.102 on eth1) on the network node. The DHCP Agent attaches one dnsmasq per subnet through the ns-XXX/ns-ZZZ ports. The L3 Agent creates the internal gateway ports (qr-YYY, qr-WWW) and the external gateway port (qg-VVV) on the bridge connected to eth2 toward the public network; conceptually, the virtual router exists here, and NAT and filtering are done by iptables.]

■ The virtual router is implemented with Linux's packet forwarding feature.

■ dnsmasq is used as a DHCP server providing private IP addresses; one dnsmasq is started for each subnet.

- Each IP address is assigned according to the MAC address of the virtual NIC.
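Conceptually, the DHCP Agent maintains a MAC-to-IP table and renders it into dnsmasq host entries (the `--dhcp-host=<MAC>,<IP>` format). The MAC and IP values below are made up for illustration.

```python
# Made-up ports: each virtual NIC contributes one MAC-to-IP lease entry.
leases = {
    "fa:16:3e:11:22:33": "192.168.101.3",
    "fa:16:3e:44:55:66": "192.168.101.4",
}

def dnsmasq_host_lines(leases):
    """Render entries in dnsmasq's --dhcp-host=<MAC>,<IP> format."""
    return ["%s,%s" % (mac, ip) for mac, ip in sorted(leases.items())]
```

Because the lease table is keyed by MAC address, every instance always receives the same fixed private IP from dnsmasq.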


Page 54: OpenStack: Inside Out


Internal architecture of Open vSwitch plugin

Page 55: OpenStack: Inside Out


What is Open vSwitch?

■ Open vSwitch is software to create virtual L2 switches on top of Linux. It supports many features comparable to physical L2 switch products.

- In particular, since it supports the OpenFlow protocol, which provides fine-grained packet control, Open vSwitch is widely used for virtual network applications.

● Visibility into inter-VM communication via NetFlow, sFlow(R), IPFIX, SPAN, RSPAN, and GRE-tunneled mirrors

● LACP (IEEE 802.1AX-2008)
● Standard 802.1Q VLAN model with trunking
● BFD and 802.1ag link monitoring
● STP (IEEE 802.1D-1998)
● Fine-grained QoS control
● Support for HFSC qdisc
● Per VM interface traffic policing
● NIC bonding with source-MAC load balancing, active backup, and L4 hashing
● OpenFlow protocol support (including many extensions for virtualization)
● IPv6 support
● Multiple tunneling protocols (GRE, VXLAN, IPsec, GRE and VXLAN over IPsec)
● Remote configuration protocol with C and Python bindings
● Kernel and user-space forwarding engine options
● Multi-table forwarding pipeline with flow-caching engine
● Forwarding layer abstraction to ease porting to new software and hardware platforms

Supported features of Open vSwitch (http://openvswitch.org/features/)

Page 56: OpenStack: Inside Out


What is OpenFlow?

■ OpenFlow is a protocol that lets an external controller provide fine-grained control of packet forwarding.

- OpenFlow switches query the external controller about how received packets should be handled.

- Since the programmability of the controller software gives flexibility over packet operations, it is well suited to creating multi-tenant virtual networks. For example, it can decide the forwarding port according to source/destination MAC addresses, modify the VLAN tag in the header, etc.
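A flow table can be sketched as an ordered list of match/action rules. This is a toy model of the idea, not the OpenFlow wire protocol; the field and action names below loosely follow Open vSwitch conventions and are illustrative only.

```python
# Ordered rule list: the first matching entry wins; the empty match is the table-miss.
flows = [
    {"match": {"dl_dst": "fa:16:3e:aa:bb:cc"},
     "actions": ["mod_vlan:101", "output:1"]},
    {"match": {}, "actions": ["drop"]},
]

def apply_flows(packet):
    """Return the actions of the first flow whose match fields all agree."""
    for flow in flows:
        if all(packet.get(field) == value
               for field, value in flow["match"].items()):
            return flow["actions"]
```

A packet to the known MAC gets its VLAN tag rewritten and is forwarded; anything else hits the table-miss rule.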

[Diagram: an OpenFlow controller instructs multiple OpenFlow switches how packets should be handled, through the OpenFlow protocol.]

Page 57: OpenStack: Inside Out


Internal architecture of the Open vSwitch plugin

■ This section describes how the Open vSwitch plugin implements the virtual network in the drawing below as a concrete example.

[Diagram: tenant A's virtual router connects the external network to the virtual L2 switch "projectA", and tenant B's virtual router to the virtual L2 switch "projectB"; instances vm01-vm04 attach to these switches.]

Page 58: OpenStack: Inside Out


Configuration inside the compute node (1)

■ See the next page for explanation.

[Diagram: the VM NICs (vm01-vm04 eth0) are connected by Nova Compute to the integration switch "br-int" through the ports qvoXXX/qvoYYY (port VLAN tag 1) and qvoZZZ/qvoWWW (port VLAN tag 2); an "internal VLAN" is assigned to each virtual L2 switch. The L2 Agent links br-int to the private switch "br-priv" via the int-br-priv/phy-br-priv ports, and br-priv connects to eth1, where the traffic appears as external VLAN101/VLAN102. The translation between internal VLANs (1, 2) and external VLANs (101, 102) is done with OpenFlow.]

Page 59: OpenStack: Inside Out


Configuration inside the compute node (2)

■ Virtual NICs of VM instances are connected to the common "integration switch (br-int)".

- An internal VLAN is assigned to the connected port according to the (logical) virtual L2 switch to be connected.

■ Connection to the physical L2 switch for the private network is done through the "Private switch (br-priv)".

- External VLANs are assigned on the physical switch according to the (logical) virtual L2 switch. The translation between Internal and External VLAN is done with OpenFlow.

■ In addition to VLAN, other separation mechanisms such as GRE tunneling can be used over the physical network connection.

- In the case of GRE tunneling, the translation between "Internal VLAN" and "GRE tunnel ID" is done with OpenFlow.
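The internal/external VLAN translation described above amounts to a pair of lookup tables applied on egress and ingress — sketched here in Python rather than as actual OpenFlow rules; the VLAN IDs are taken from the example.

```python
# Internal <-> external VLAN mapping the L2 Agent programs via OpenFlow.
internal_to_external = {1: 101, 2: 102}
external_to_internal = {ext: internal
                        for internal, ext in internal_to_external.items()}

def egress(tag):
    """Packet leaving br-priv toward the physical switch."""
    return internal_to_external[tag]

def ingress(tag):
    """Packet arriving from the physical switch."""
    return external_to_internal[tag]
```

With GRE tunneling, the same table would map internal VLANs to GRE tunnel IDs instead of external VLAN IDs.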

Page 60: OpenStack: Inside Out


Configuration inside the network node

■ Since two virtual routers are configured, there are two paths of packet forwarding.

[Diagram: on br-int, the DHCP Agent attaches one dnsmasq per subnet (tapXXX, tapAAA), and the L3 Agent creates the internal gateway ports qr-YYY (port VLAN tag 1) and qr-BBB (port VLAN tag 2). The L2 Agent links br-int to br-priv toward the private network, translating internal VLAN 1/2 to external VLAN 101/102 with OpenFlow. The external gateway ports qg-VVV and qg-CCC are created on br-ex toward the public network, and NAT and filtering are done by iptables.]

Page 61: OpenStack: Inside Out


Overlapping subnets with network namespaces

■ When using multiple virtual routers, the network node needs independent NAT/filtering configurations for each virtual router to allow the use of overlapping subnets among multiple tenants. This is done with Linux's network namespace feature, which allows Linux to have multiple independent network configurations.

■ The following are the steps to use a network namespace.

- Create a new namespace.

- Allocate network ports inside the namespace. (Both physical and logical ports can be used.)

- Configure networking (port configuration, iptables configuration, etc.) inside the namespace.

- Then the configuration is applied to network packets which go through the network port inside this namespace.

■ The L3 Agent of the LinuxBridge / Open vSwitch plugin uses network namespaces.

- It can be configured not to use namespaces, but the use of overlapping subnets must be disabled in that case.
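The effect of per-namespace configuration can be modeled as separate routing tables keyed by namespace — the same subnet exists once per tenant without conflict. The namespace and port names below are made up for illustration.

```python
# Each namespace holds an independent routing/NAT configuration, so the
# identical subnet can appear in several tenants at the same time.
routes = {
    "qrouter-tenantA": {"192.168.1.0/24": "qr-AAA"},
    "qrouter-tenantB": {"192.168.1.0/24": "qr-BBB"},
}

def lookup(namespace, subnet):
    """Resolve a subnet to a router port within one namespace only."""
    return routes[namespace][subnet]
```

Without namespaces there would be a single table, and the second 192.168.1.0/24 entry would clash with the first — which is exactly why disabling namespaces also disables overlapping subnets.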

Page 62: OpenStack: Inside Out


The overall picture of the Open vSwitch plugin (1)

■ See the next page for details.

[Diagram: on each compute node, VMs connect to br-int, and br-priv carries the VLAN-trunked traffic to the network node, with the VLAN ID mapping for virtual L2 switches done with OpenFlow. On the network node, the per-subnet dnsmasq processes and the virtual router live in network namespaces: the router's gateway IP on the private network side sits on br-int, its gateway IP on the external network side sits on br-ex (eth1 toward the external network), and the NAT connection is done by iptables.]

Page 63: OpenStack: Inside Out


The overall picture of the Open vSwitch plugin (2)

■ While an end-user defines virtual network components such as virtual L2 switches and virtual routers, the agents work in the following way.

- When a virtual L2 switch is defined, the L2 Agent configures the VLAN ID mapping on "br-int" and "br-priv" so that compute nodes are connected to each other via VLAN. At the same time, the DHCP Agent starts a new dnsmasq which provides the DHCP function for the corresponding VLAN.

- When a virtual router is defined and connected to the external network, the L3 Agent creates a port on "br-ex" which works as the external gateway of the virtual router.

- When a virtual L2 switch is connected to the virtual router, the L3 Agent creates a port on "br-int" which works as an internal gateway of the virtual router. It also configures iptables to start NAT between the public and private networks.

■ In addition to the agents explained so far, there is a "Metadata Proxy Agent" which helps the metadata mechanism work.

- iptables on the network node is configured so that packets to "169.254.169.254:80" are redirected to the Metadata Proxy Agent. The agent determines which instance sent the packet from its source IP address, and sends back the corresponding response including the requested metadata.
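The agent's lookup logic can be sketched as two table lookups — an illustrative model with made-up data, not the actual agent code (the real agent resolves the source IP via Neutron's port database and forwards the request to the Nova metadata service).

```python
# Hypothetical data for the example.
port_db = {"192.168.101.5": "instance-0001"}                 # source IP -> instance
metadata_db = {
    "instance-0001": {"public-keys/0/openssh-key": "ssh-rsa AAAA-example-key"},
}

def handle_metadata_request(source_ip, path):
    instance_id = port_db[source_ip]        # identify the caller by source IP
    return metadata_db[instance_id][path]   # return the requested metadata item
```

This is why the redirect must preserve the packet's source address: it is the only key identifying which instance is asking.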

Page 64: OpenStack: Inside Out


Packet redirection to the Metadata Proxy Agent

■ The following commands show the iptables configuration within the namespace which contains the virtual router. There is a redirection entry where packets to "169.254.169.254:80" are redirected to the Metadata Proxy Agent on the same node.

■ Note that "NOZEROCONF=yes" should be set in "/etc/sysconfig/network" of the guest OS when using the metadata mechanism.

- Without it, packets to "169.254.0.0/16" are not routed outside the guest OS due to the APIPA specification.

# ip netns list
qrouter-b35f6433-c3e7-489a-b505-c3be5606a643
qdhcp-1a4f4b41-3fbb-48a6-bb12-9621077a4f92
qrouter-86654720-d4ff-41eb-89db-aaabd4b13a35
qdhcp-f8422fc9-dbf8-4606-b798-af10bb389708

# ip netns exec qrouter-b35f6433-c3e7-489a-b505-c3be5606a643 iptables -t nat -L
...
Chain quantum-l3-agent-PREROUTING (1 references)
target     prot opt source     destination
REDIRECT   tcp  --  anywhere   169.254.169.254   tcp dpt:http redir ports 9697
...
# ps -ef | grep 9697
root  63055  1  0 Jul09 ?  00:00:00 python /bin/quantum-ns-metadata-proxy
    --pid_file=/var/lib/quantum/external/pids/b35f6433-c3e7-489a-b505-c3be5606a643.pid
    --router_id=b35f6433-c3e7-489a-b505-c3be5606a643 --state_path=/var/lib/quantum
    --metadata_port=9697 --verbose
    --log-file=quantum-ns-metadata-proxyb35f6433-c3e7-489a-b505-c3be5606a643.log
    --log-dir=/var/log/quantum

Namespace containing the virtual router

Page 65: OpenStack: Inside Out


Configuration steps of virtual network

Page 66: OpenStack: Inside Out


Configuration steps of virtual network (1)

■ The following are the steps for configuring the virtual network with the quantum command.

- We use the following environment variables as parameters specific to each setup.

- Define an external network "ext-network".

● Since the external network is shared by multiple tenants, the owner tenant (--tenant-id) is "services" (a general tenant for shared services), and the "--shared" option is added.
● As we suppose there are no VLANs in the external network, network_type is "flat".
● In the plugin configuration file (plugin.ini), the Open vSwitch bridge for the external network connection (br-ex) has an alias "physnet1", which is specified as physical_network here.
● "--router:external=True" is specified to allow the network to be a default gateway of virtual routers.

tenant=$(keystone tenant-list | awk '/ services / {print $2}')
quantum net-create \
    --tenant-id $tenant ext-network --shared \
    --provider:network_type flat --provider:physical_network physnet1 \
    --router:external=True

public="192.168.199.0/24"
gateway="192.168.199.1"
nameserver="192.168.199.1"
pool=("192.168.199.100" "192.168.199.199")

Page 67: OpenStack: Inside Out


Configuration steps of virtual network (2)

- Define a subnet of the external network.

● "--allocation-pool" specifies the IP address pool (the range of IP addresses which can be used by OpenStack as router ports and floating IP, etc.)

- Define a virtual router "demo_router" for the tenant "demo", and attach it to the external network.

● The owner tenant (--tenant-id) is "demo".

tenant=$(keystone tenant-list | awk '/ demo / {print $2}')
quantum router-create --tenant-id $tenant demo_router
quantum router-gateway-set demo_router ext-network

quantum subnet-create \
    --tenant-id $tenant --gateway ${gateway} --disable-dhcp \
    --allocation-pool start=${pool[0]},end=${pool[1]} \
    ext-network ${public}

bridge_mappings=physnet1:br-ex,physnet2:br-priv
tenant_network_type=vlan
network_vlan_ranges=physnet1,physnet2:100:199

Alias settings for Open vSwitch in the plugin configuration file (/etc/quantum/plugin.ini): "bridge_mappings" maps each alias to the actual Open vSwitch bridge name, and "network_vlan_ranges" defines the VLAN ID range for each alias (VLAN is not used for physnet1).

Page 68: OpenStack: Inside Out


Configuration steps of virtual network (3)

- Define a virtual L2 switch "private01".

● Since VLAN is used as the separation mechanism for private networks, "vlan" is specified for network_type, and the VLAN ID is specified with segmentation_id.

● In the plugin configuration file (plugin.ini), the Open vSwitch bridge for the private network connection (br-priv) has an alias "physnet2", which is specified as physical_network here.

- Define a subnet of "private01", and connect it to the virtual router.

● "192.168.1.101/24" is specified for the subnet as an example here.

quantum net-create \
    --tenant-id $tenant private01 \
    --provider:network_type vlan \
    --provider:physical_network physnet2 \
    --provider:segmentation_id 101

quantum subnet-create \
    --tenant-id $tenant --name private01-subnet \
    --dns-nameserver ${nameserver} private01 192.168.1.101/24
quantum router-interface-add demo_router private01-subnet

Page 69: OpenStack: Inside Out
