
SUSE Cloud 4 Administration - Manual

SUSE Cloud 4 Administration
Section 1: Understand the SUSE Cloud 4 Architecture

Notes:

Objectives

Understand the Components of SUSE Cloud 4

Understand the Role of Admin, Controller, and Compute Nodes

Understand the Components of
SUSE Cloud 4

Notes:

Cloud Computing: What is it?

Cloud Computing:

The IT Buzzword for some years

While a pretty nebulous notion in the beginning, the concept of Cloud Computing has become much better defined over the past years

A useful and comprehensive definition was published by NIST (National Institute of Standards and Technology in the USA)

Notes:

Cloud Computing: NIST Definition

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf

Notes:

Cloud Computing is Not Just Virtualization

On-demand access to a shared pool of computing resources or services that can be rapidly provisioned and released.

Essential Characteristics, according to NIST:

On-demand self-service

Broad network access

Resource pooling

Rapid elasticity

Measured service

Source: http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf

NIST Definition:
Service Models

Software as a Service (SaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

This is what SUSE Cloud provides

Notes:

NIST Definition:
Deployment Models

Private Cloud

Community Cloud

Public Cloud

Hybrid Cloud

Notes:

Promise of Private Cloud Computing for Enterprises

Lower costs

Reduce upfront capital expense

Automation to reduce ongoing administration costs

Increased agility

Dynamic configuration of IT resources

Respond quickly to business demands

Self-service provisioning

Greater control and security

Data remains inside the firewall

Standard enterprise security

By deploying a private cloud you can meet the needs of the business, while lowering costs. But, you can achieve this without losing control of your data, or opening your enterprise up to security risks.

As with all technology innovation, Private Cloud lowers costs. Through automation and orchestration, Private Cloud promises to improve server utilization rates. When automation is combined with the self-service features of cloud computing, which let private cloud users provision and configure servers themselves, your IT department becomes more efficient and can free up staff for higher-value activities even as you scale out your cloud environment.

But, the real benefit of the private cloud is to be able to respond to the needs of the business more rapidly using your enterprise's infrastructure, security measures, and compliance protocols.

Why Cloud?

Pros:

Flexibility: Administrators set up the basic infrastructure; users provision the resources they require when they need them

Speed of provisioning: Users set up the specific virtual machines they need when they need them

Better uptime / availability of services

Cons:

VM sprawl

Concern in the IT department about losing control over resources

VM sprawl: Because it is so easy to create a VM, more might be created than are actually needed. They may use resources unnecessarily and may create licensing costs. They might remain running despite the fact that they are no longer needed.

Why Would You Want to Run Your Own Cloud?

Compliance concerns

Regulations can mandate that you know where your data is physically located - with public clouds you often don't know where your data is actually stored

Privacy laws

Different countries have different privacy laws and with public cloud providers located in other countries you may not be able to comply with the laws of your own country

There are cloud providers that allow you to rent computing resources according to your needs (public cloud), but to some extent you lose control over your data.

Why Would You Want to Run Your Own Cloud?

Security concerns

You might want to tightly control who has access to your hardware and to your data

Better use of existing hardware

Your own cloud might allow you to utilize your own hardware more effectively or more flexibly

Notes:

Why SUSE Cloud?

Easy and fast setup

Sophisticated tools to set up and run a cloud infrastructure that integrates the various software components

Support

Notes:

SUSE Cloud

SUSE Cloud is an open source software solution based on the OpenStack and Crowbar projects that provides the fundamental capabilities for enterprises to deploy an Infrastructure-as-a-Service Private Cloud

[Architecture diagram: End users access a self-service portal, an image repository, and APIs; automated configuration and optimized deployment manage a pool of virtualized servers (compute and storage nodes).]

SUSE Cloud is our OpenStack distribution and uses Crowbar as part of the install framework to help simplify the installation and ongoing management of your private cloud environment.

SUSE Cloud delivers on the promise of a private cloud by allowing you to deploy a secure, compliant, fully supported cloud environment within your firewalls.

By using SUSE Cloud, IT and the business can work more closely together. They can react quickly to changing business demands and reduce costs by automating the creation of virtual servers and deploying workloads on them.

Ultimately, SUSE Cloud will allow you to be the cloud service provider of choice for your enterprise.

SUSE Cloud 4

[Architecture diagram, built up across several slides. OpenStack Icehouse provides the cloud services: Orchestration (Heat), Dashboard (Horizon), Cloud APIs (OpenStack and EC2), Compute (Nova), Images (Glance), AUTH (Keystone), Object (Swift), Network (Neutron), Block (Cinder), and Telemetry (Ceilometer), plus required services (message queue, database) and an install framework, running on an operating system and hypervisor (Xen, KVM, VMware, Hyper-V) on physical infrastructure: x86-64, switches, storage. Ceph (tech preview) provides Rados, RBD, and RadosGW. Management tools (Billing, Portal, VM Management/Image Tool, App Monitoring, Security & Performance, Cloud Management) sit on top.]

[SUSE Cloud supplies: the required services (RabbitMQ, PostgreSQL), the install framework (Crowbar, Chef, TFTP, DNS, DHCP), SUSE Manager and SUSE Studio as management tools, and SUSE Linux Enterprise Server 11 SP3 with the Xen and KVM hypervisors. Partner solutions cover Billing, Portal, App Monitoring, Security & Performance, the VMware and Hyper-V hypervisors, and the storage adapters.]

Notes:

OpenStack

Key component of SUSE Cloud

OpenStack was founded by Rackspace Hosting and NASA in 2010

Evolved into a global software community of developers collaborating on a standard and massively scalable open source cloud operating system.

More than 180 Companies involved

Notes:

OpenStack

Very active Community

OpenStack Summit every 6 months

OpenStack is freely available under the Apache 2.0 license

Framework of various components that communicate with each other through various APIs

It is pretty complex!

SUSE Cloud helps you to deal with that complexity

Notes:

OpenStack

Compute (Nova): manages instances

Images (Glance): provides images

Object Storage (Swift): key:value stores

Identity (Keystone): security, groups, users

Dashboard (Horizon): Web UI

Network (Neutron): manages networks (IP, VLAN, firewall)

Block Storage (Cinder): volumes

Telemetry (Ceilometer): metrics, measurement

Orchestration (Heat): aggregates instances

Notes:

OpenStack Conceptual Architecture

Source of this illustration: http://docs.openstack.org/training-guides/content/associate-getting-started.html

From the same source: The conceptual diagram is a stylized and simplified view of the architecture. It assumes that the implementer uses all services in the most common configuration. It also shows only the operator side of the cloud; it does not show how consumers might use the cloud. For example, many users directly and heavily access object storage.

OpenStack Logical Architecture

Apropos complexity! Source of this illustration: http://docs.openstack.org/training-guides/content/developer-getting-started.html

OpenStack Logical Architecture

Source of this illustration: http://docs.openstack.org/training-guides/content/developer-getting-started.html

SUSE Cloud Software Components

SUSE Cloud is based on SUSE Linux Enterprise Server, OpenStack, Crowbar, and Chef

OpenStack, the cloud management layer, works as the "Cloud Operating System"

Crowbar is used to automatically deploy the OpenStack nodes from a central Administration Server

Chef is used to configure the OpenStack nodes from a central Administration Server

SUSE Cloud integrates OpenStack and other components into a comprehensive whole that makes creating and running your own cloud environment (relatively) easy

Understand the Role of Admin, Controller, and Compute Nodes

Notes:

Types of Nodes

SUSE Cloud is deployed to four different types of machines (nodes):

One Administration Server for node deployment and management

One or more Control Nodes hosting the cloud management services

Several Compute Nodes on which the instances are started

Several Storage Nodes for block and object storage

SUSE Linux Enterprise Server is used as the underlying operating system for all cloud infrastructure machines

Compute Nodes running Microsoft Hyper-V Server or Windows Server 2012 require the Administration Server to be configured to provide the netboot environment for node installation.

Note: The Web UI for the Admin node is different from the Web UI of the Cloud (Horizon), see next slide.

Types of Nodes

The minimum number of machines for SUSE Cloud 4 is three: One Admin node, one Control node, one Compute node. Further control, compute, and storage nodes can be added as needed.

The graphic above lists the services running on the different kind of nodes.

SUSE Cloud Structure

Admin Node

SLES

Chef server

Crowbar

Software mirror

TFTP

PXE Server

Compute /
Storage Node

SLES

Xen or KVM

nova-compute

Swift-proxy

SLES

Database

Message queue

Nova-scheduler

Dashboard

Keystone

Image Repo

Control Node

Crowbar + PXE Boot

Cloud Control

Notes:

SUSE Cloud Admin Server

The Admin Server (Crowbar) is the first machine you install:

You install SLES 11 SP3

You install SUSE Cloud as an add-on product

You create a network installation repository

You bring your installation up-to-date using an SMT repository

Admin Node

Compute /
Storage Node

Control Node

Crowbar + PXE Boot

Step 1: Install Admin Node

Notes:

SUSE Cloud
Control and Compute Nodes

The Control and Compute nodes come next

The basic installation is the same for both and is done with PXE booting and automated installation via the Network from the Admin node

Control nodes differ from the compute nodes only by the services that run on them

Admin Node

Compute /
Storage Node

Control Node

Crowbar + PXE Boot

Step 2: Install the other nodes

Step 3: Deploy Services

Notes:

SUSE Cloud
Control and Compute Nodes

The virtual machines that constitute your cloud run on the compute nodes

VM templates can be created quite easily with SUSE Studio and are then integrated into the SUSE Cloud OpenStack Image Server (Glance)

Notes:

SUSE Cloud Roles

The Operator

Installs SUSE Cloud on the physical hardware

The Admin

Configures projects, users, images, resources

The User

Creates, changes, and destroys VM instances according to his computing needs

In SUSE Cloud, there are several high-level user roles (or viewpoints) that we need to distinguish:

SUSE Cloud Operator: Installs and deploys SUSE Cloud, starting with bare metal, then installing the operating system and the OpenStack components. For detailed information about the operator's tasks and how to solve them, refer to the SUSE Cloud Deployment Guide.

SUSE Cloud Administrator: Manages projects, users, images, flavors, and quotas within SUSE Cloud. For detailed information about the administrator's tasks and how to solve them, refer to the SUSE Cloud Admin User Guide.

SUSE Cloud User: End user who launches and manages instances, can create snapshots, and use volumes for persistent storage within SUSE Cloud. For detailed information about the user's tasks and how to solve them, refer to the SUSE Cloud End User Guide.

SUSE Cloud 4 Administration
Section 2: Node Deployment

Notes:

Objectives

Install the Admin Node

Install Cloud Nodes

Set Up Cloud Services on Controller and Compute Nodes

Notes:

Install the Admin Node

Notes:

Install the Admin Node

The Admin node is the first node you install

All other nodes are deployed by the Admin node

Notes:

Install the Admin Node

SLES 11 SP3

SUSE Cloud 4 (Add-on Product)

Provide update repositories (SUSE Manager, SMT, local)

Update the Admin node

Notes:

Prepare the Network

After running install-suse-cloud the network setup cannot be changed!

Regarding the network configuration of the Cloud, everything needs to be prepared before running install-suse-cloud!

Notes:

Default Network Setup: Overview

[Network diagram: the default SUSE Cloud networks]

Admin network: connects the Admin server and the nodes

Fixed network: connects the virtual machines

Storage network: separate network for storage nodes

Public network: access to cloud services (such as the Dashboard), .2 - .127

Floating network: access to cloud instances, .129 - .254

In the above diagram the Admin Network is physically separate (dual mode), but the network ranges are the same when there is only one NIC per node (single mode).

Storage network and storage nodes are not covered in this course.

Default Network Setup: Detail

Network technologies used to provide the needed functionality for the above include:

Network namespaces

Open vSwitch

GRE tunneling

Bridging

Some of these you may not be too familiar with, and therefore debugging networking issues may seem difficult.

These will be covered to some extent in the last section.

Initial Network Setup: YaST

The YaST Crowbar module allows you to change some network settings (before running install-suse-cloud)

Notes:

Initial Network Setup: /etc/crowbar/network.json

Instead of using YaST it is possible to change settings in /etc/crowbar/network.json directly, such as

Interface maps

Network modes

Network definitions

Notes:

network.json: General Structure

{
  "attributes" : {
    "network" : {
      "mode" : "value",
      "start_up_delay" : value,
      "teaming" : { "mode": value },
      "interface_map" : [

      ],
      "conduit_map" : [

      ],
      "networks" : {
      }
    }
  }
}

General attributes: mode, start_up_delay, teaming, networks

interface_map: Defines the order in which the physical network interfaces are to be used.

conduit_map: Defines the network modes and the network interface usage.

networks: Network definition.

Note: The order in which the entries in the network.json file appear may differ from the one listed above. Use your editor's search function to find certain entries.
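After editing network.json by hand, a quick syntax check can save a failed installation run. A minimal sketch using Python's json.tool module (available in the Python installation that ships with SLES):

python -m json.tool /etc/crowbar/network.json > /dev/null && echo "JSON OK"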

network.json: Global Attributes

"mode" : "single",
"start_up_delay" : 30,
"teaming" : { "mode": 5 },mode: Network layout one NIC per node, two nics per node, bonded interfaces

start_up_delay: Defines the length of time until a timeout

teaming: Linux bonding mode

The most important options to define in the global attributes section are the default values for the network and bonding modes.

start_up_delay: Time (in seconds) the Chef client waits for the network interfaces to come online before running into a time-out.

mode: Network mode. Defines the configuration name (or name space) to be used from the conduit_map. This allows you to define multiple configurations (single, dual, and team are pre-configured) and switch between them by changing this parameter.

teaming: Default bonding mode. See https://www.kernel.org/doc/Documentation/networking/bonding.txt for a list of available modes.

Bonding mode 6 (balance-alb, adaptive load balancing) is not supported because of problems with bridges and Open vSwitch.

network.json: Interface Map

"interface_map" : [
{"pattern" : "PowerEdge R610","serial_number" : "0x02159F8E","bus_order" : ["0000:00/0000:00:01","0000:00/0000:00:03"]}
],Defines the sequence of interfaces on the nodes (eth0, eth1, etc.)

PXE boot interface must be listed first

Interface Map changes are allowed after having run the SUSE Cloud installation script, but you then have to make them in the Crowbar Web interface.

By default physical network interfaces are used in the order they appear under /sys/class/net/. In case you want to apply a different order, you need to create an interface map where you can specify a custom order of the bus IDs. Interface maps are created for specific hardware configurations and are applied to all machines matching this configuration.

pattern: Hardware specific identifier. This identifier can be obtained by running the command dmidecode -s system-product-name on the machine you want to identify. You can log in to a node during the hardware discovery phase (when booting the SLEShammer image) via the Administration Server.

serial_number: Additional hardware specific identifier. This identifier can be used in case two machines have the same value for pattern, but different interface maps are needed. Specifying this parameter is optional (it is not included in the default network.json file). The serial number of a machine can be obtained by running the command dmidecode -s system-serial-number on the machine you want to identify.

bus_order: Bus IDs of the interfaces. The order in which they are listed here defines the order in which Chef addresses the interfaces. The IDs can be obtained by listing the contents of /sys/class/net/.

[Notes only slide]

Interface Map Example

Changing the Network Interface Order on a Machine with four NICs

Get the machine identifier by running the following command on the machine to which the map should be applied:

~ # dmidecode -s system-product-name
AS 2003R

The resulting string needs to be entered on the pattern line of the map. It is interpreted as a Ruby regular expression (see http://www.ruby-doc.org/core-2.0/Regexp.html for a reference). Unless the pattern starts with ^ and ends with $ a substring match is performed against the name returned from the above command.

List the interface devices in /sys/class/net to get the current order and the bus ID of each interface:

~ # ls -lgG /sys/class/net/ | grep eth
lrwxrwxrwx 1 0 Jun 19 08:43 eth0 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.0/net/eth0
lrwxrwxrwx 1 0 Jun 19 08:43 eth1 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.1/net/eth1
lrwxrwxrwx 1 0 Jun 19 08:43 eth2 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.2/net/eth2
lrwxrwxrwx 1 0 Jun 19 08:43 eth3 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.3/net/eth3

The bus ID is included in the path of the link target; it is the following string: ../../devices/pci<BUS ID>/net/eth0

Create an interface map with the bus ID listed in the order the interfaces should be used. Keep in mind that the interface from which the node is booted using PXE must be listed first. In the following example the default interface order has been changed to eth0, eth2, eth1 and eth3.

{"pattern" : "AS 2003R","bus_order" : ["0000:00/0000:00:1c.0/0000:09:00.0","0000:00/0000:00:1c.0/0000:09:00.2","0000:00/0000:00:1c.0/0000:09:00.1","0000:00/0000:00:1c.0/0000:09:00.3"]}

network.json: Network Modes

Pre-defined network modes:

single: Only use the first interface for all networks. VLANs will be added on top of this single interface.

dual: Use the first interface as the admin interface and the second one for all other networks. VLANs will be added on top of the second interface.

team: Bond the first two interfaces. VLANs will be added on top of the bond.

Apart from these modes a fallback mode ".*/.*/.*" is also pre-defined; it is applied in case no other mode matches the one specified in the global attributes section. These modes can be adjusted according to your needs. It is also possible to define a custom mode.

The mode name that is specified with mode in the global attributes section is deployed on all nodes in SUSE Cloud. It is not possible to use a different mode for a certain node. However, you can define "sub" modes with the same name that only match machines with a certain number of physical network interfaces or machines with certain roles (all Compute Nodes for example).

network.json: Logical Interfaces

"conduit_map" : [

{
"pattern" : "team/.*/.*",
"conduit_list" : {
"intf2" : {
"if_list" : ["1g1","1g2"],
"team_mode" : 5
},
"intf1" : {
"if_list" : ["1g1","1g2"],
"team_mode" : 5
},
"intf0" : {
"if_list" : ["1g1","1g2"],
"team_mode" : 5
}
} Network conduits define mappings for logical interfaces

Network conduits define mappings for logical interfaces: one or more physical interfaces bonded together. Each conduit can be identified by a unique name, the pattern. This pattern is also referred to as "Network Mode" (see previous slide) in the SUSE Cloud documentation.

The above example concerns the team mode, and specifies that the intf2 bond consists of the first and second 1G interfaces, teamed in mode 5 (balance-tlb, adaptive transmit load balancing).

network.json: Network Definitions

"admin" : {
"conduit" : "intf0",
"add_bridge" : false,
"use_vlan" : false,
"vlan" : 100,
"router_pref" : 10,
"subnet" : "192.168.124.0",
"netmask" : "255.255.255.0",
"router" : "192.168.124.1",
"broadcast" : "192.168.124.255",
"ranges" : {
"admin" : { "start" : "192.168.124.10",
"end" : "192.168.124.11" },
"switch" : { "start" : "192.168.124.241",
"end" : "192.168.124.250" },
"dhcp" : { "start" : "192.168.124.21",
"end" : "192.168.124.80" },
"host" : { "start" : "192.168.124.81",
"end" : "192.168.124.160" }
}
},

The network definitions contain IP address assignments, the bridge and VLAN setup and settings for the router preference. Each network is also assigned to a logical interface defined in the network conduit section. In the following the network definition is explained using the above example of the admin network definition:

conduit: Logical interface assignment. The interface must be defined in the network conduit section and must be part of the active network mode.

add_bridge: Bridge setup. Do not touch. Should be false for all networks.

use_vlan: Create a VLAN for this network. Changing this setting is not recommended.

vlan: ID of the VLAN. Change this to the VLAN ID you intend to use for the specific network if required. This setting can also be changed using the YaST Crowbar interface. The VLAN ID for the nova-floating network must always match the one for the public network.

router_pref: Router preference, used to set the default route. On nodes hosting multiple networks, the router with the lowest router_pref becomes the default gateway. Changing this setting is not recommended.

ranges: Network address assignments. These values can also be changed by using the YaST Crowbar interface.

Note: The deployment Guide has additional information on the VLAN settings under the Network Definitions heading

Run install-suse-cloud

The final step of the Admin node installation is to run screen install-suse-cloud

View the log file as needed in a second terminal window with
tail -f /var/log/crowbar/install.log

Except for the interface map, the network configuration cannot be changed after having run the script

If you need to change the interface map after having run install-suse-cloud, you need to do it in the admin web interface, not in the file directly

Notes:

SUSE Cloud 4 Admin Appliance

The SUSE Cloud 4 Admin Appliance used in the course consists of

SLES 11 SP3

SUSE Cloud 4 added as an add-on product

Repositories at their proper place within the file system, including installation repositories and update repositories (as of October 2014)

You do not have to go through the installation of the above as you have to when setting up a SUSE Cloud 4 Admin server on physical hardware

Notes:

Lab 2.1: Set Up Your Student Machine for the Course

Summary: In this exercise, you will prepare your machine, pre-installed with SLES 11 SP3, to run SUSE Cloud 4 in a virtual environment, and set up the Admin node appliance.

Special Instructions: (none)

Duration: 30 min.

[Lab setup diagram: admin, controller, and comp-node VMs, indicating which must be running]

Lab Notes:

Install Cloud Nodes

Notes:

Crowbar Web Interface

Unless you changed the IP numbering, the Web interface can be accessed at http://192.168.124.10:3000

Username: crowbar

Password: crowbar (default, unless changed)

The nodes appear in the Admin Web interface with a name based on their MAC address. Therefore you need to know which MAC address belongs to which node to make sure that you later deploy the services to the intended nodes.

In the Web interface, you can define an alias for the hostname so it is easier to deploy the OpenStack services to the correct node.

After the initial boot and before allocating a node it is possible to log in to the node with ssh. This might be needed if you need to establish the sequence the NICs appear in /sys/class/net to modify /etc/crowbar/network.json.

Initial Boot

Set the BIOS of the Cloud nodes to boot from the network (PXE) as the primary boot option

Boot

The node fetches an initial image (called SLEShammer) via TFTP from the Admin node and boots it

Nodes appear in the Crowbar Web Interface as discovered (yellow bullet) with a name based on their MAC addresses

Notes:

Discovered Nodes

Discovered nodes in the Crowbar Web interface:

Although this step is optional, it is recommended to properly group your nodes at this stage, since it lets you clearly arrange all nodes. Grouping the nodes by role would be one option, for example control, compute, object storage (Swift), and block storage (Ceph).

Enter the name of a new group into the New Group text box and click Add Group.

Drag and drop a node onto the title of the newly created group. Repeat this step for each node you want to put into the group.

To view details about a node, set an alias and allocate it, click on its entry. The dialog shown on the next slide appears.

Edit Node Details

We recommend setting an alias for nodes so you can recognize them more easily than by their MAC address alone.

Setting an Intended Role will also facilitate service deployment, as the nodes with suitable roles will be suggested in the service configuration dialogs.

When done with your changes, click Save. If you want to save the settings and also allocate the node, click Allocate. Allocating means that the node will be installed with SLES 11 SP3 and becomes part of your cloud resources.

During the installation, in the Crowbar Dashboard the button in front of the node name will appear yellow/green, and turn into green (ready) once the installation is complete.

Bulk Edit

You can also change some of the settings for several nodes at once and allocate them by clicking Nodes > Bulk Edit. The dialog that appears is shown above.

Deploying a large number of nodes in bulk mode will cause heavy load on the Administration Server. The subsequent concurrent Chef client runs triggered by the nodes will require a lot of RAM on the Administration Server.

Therefore it is recommended to limit the number of concurrent "Allocations" in bulk mode. The maximum number depends on the amount of RAM on the Administration Server; limiting concurrent deployments to five to ten is recommended.

Installation

After allocation, the node reboots and gets installed using AutoYaST

Note:
The /var/adm/autoinstall/init.d/crowbar_join script that runs at the end of the installation can take quite a bit of time to complete

See the FAQs in the Deployment Guide for more troubleshooting information

Installed nodes can be accessed from the Admin node without a password using ssh

All nodes are installed using AutoYaST with the same configuration located at /opt/dell/chef/cookbooks/provisioner/templates/default/autoyast.xml.erb.

If this configuration does not match your needs (for example if you need special third party drivers) you need to make adjustments to this file.

An AutoYaST manual can be found at http://www.suse.com/documentation/sles11/book_autoyast/data/book_autoyast.html. Having changed the AutoYaST configuration file, you need to re-upload it to Chef, using the following command:

knife cookbook upload -o /opt/dell/chef/cookbooks/ provisioner

Install Cloud Nodes

After the installation, the Cloud Node appears in the Web interface as ready

You can now deploy the services to the nodes based on their intended roles using so-called Barclamps (sets of recipes, templates, and installation instructions)

These Barclamps have to be deployed in a specific sequence

Notes:

Changes to Allocated Nodes

Reinstall

Deallocate

Forget

Reboot

Shutdown

Warning: When deallocating nodes that provide essential services, the complete cloud will become unusable.

All nodes that have been allocated can be decommissioned or re-installed. Click a node's name in the Node Dashboard to open a screen with the node details. The following options are available:

Reinstall: Triggers a reinstallation. The machine stays allocated.

Deallocate: Temporarily removes the node from the pool of nodes. After you reallocate the node it will take its former role. Useful for adding additional machines in times of high load or for decommissioning machines in times of low load.

Forget: Deletes a node from the pool. If you want to re-use this node again, it needs to be reallocated and re-installed from scratch.

Reboot: Reboots the node.

Shutdown: Shuts the node down.

When deallocating nodes that provide essential services, the complete cloud will become unusable. While it is uncritical to disable single storage nodes (provided you have not disabled redundancy) or single compute nodes, disabling Control Node(s) will cause major problems. It will either "kill" certain services (for example Swift) or, at worst (when deallocating the Control Node hosting Neutron) the complete cloud. You should also not disable nodes providing Ceph monitoring services or the nodes providing swift ring and proxy services.

Lab 2.2: Deploy Cloud Nodes

Summary: In this exercise, you will deploy two cloud nodes.

Special Instructions: (none)

Duration: 60 min.

[Lab setup diagram: the admin, controller, and comp-node VMs must be running]

Lab Notes:

Set Up Cloud Services

Notes:

Barclamps

A Barclamp is a set of recipes, templates, and installation instructions

You can reach them from the Crowbar Web interface: Barclamps > All Barclamps

Create a proposal and set the values.

Apply the proposal (or save it and apply it later)

Wait for one Barclamp deployment to succeed before applying the next

Notes:

Service Configuration Sequence

The Services have to be set up in a predefined sequence

Database

RabbitMQ

Keystone

Ceph, Swift (optional)

Glance

Cinder

Neutron

Nova

Horizon

If you deploy an HA setup, you would start with the HA configuration.

Each of the above is done by configuring and applying an OpenStack Barclamp.

All Barclamps

This screenshot was taken after the deployment of several OpenStack services, as indicated by the green bullets under Status.

Database

First service to deploy

The Database service is used by all other services

PostgreSQL

Deployed to a control node

Proposed node for deployment based on the intended role set for nodes during deployment

To change the suggestion, delete the node from the right list and drag and drop another node from the available nodes list to the database-server list

The very first service that needs to be deployed is the Database. The database service uses PostgreSQL and is used by all other services. It must be installed on a Control Node. The Database can be made highly available by deploying it on a cluster.

The only attribute you may change is the maximum number of database connections (Global Connection Limit). The default value should work in most cases; only change it for large deployments in case the log files show database connection failures.

Database

RabbitMQ

Enables services to communicate with the other parts of the same service on the same or other nodes via Advanced Message Queue Protocol (AMQP).

Deployed to a control node

Deploying RabbitMQ

The RabbitMQ messaging system enables services to communicate with the other parts of the same service on the same or other nodes via Advanced Message Queue Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. It is recommended not to change the default values of the proposal's attributes.

Virtual Host: Name of the default virtual host to be created and used by the RabbitMQ server (default_vhost configuration option in rabbitmq.config).

Port: Port the RabbitMQ server listens on (tcp_listeners configuration option in rabbitmq.config).

User: RabbitMQ default user (default_user configuration option in rabbitmq.config).

Note: The communication between services uses HTTP and REST APIs.

RabbitMQ

Notes:

Keystone

Provides authentication and authorization services

Used by all other OpenStack services

Deployed to a control node

Deploying Keystone

Keystone is another core component that is used by all other OpenStack services. It provides authentication and authorization services. Keystone needs to be installed on a Control Node. Keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:

Algorithm for Token Generation: Set the algorithm used by Keystone to generate the tokens. It is strongly recommended to use PKI, since it will reduce network traffic.

Default Credentials: Default Tenant: Tenant for the users. Do not change the default value of openstack.

Default Credentials: Regular User/Administrator User Name/Password: User name and password for the regular user and the administrator. Both accounts can be used to log in to the SUSE Cloud Dashboard to manage Keystone users and access.

Keystone

Notes:

Ceph, Swift (Optional)

Ceph adds a redundant block storage service to SUSE Cloud

Swift adds an object storage service to SUSE Cloud that lets you store single files such as images or snapshots.

Ceph and Swift are not covered in this course

Notes:

Glance

Glance provides discovery, registration, and delivery services for virtual disk images

An image is needed to start an instance; it contains the pre-installed root partition

All images used in the Cloud are provided by Glance

Deployed on a control node

Deploying Glance

Glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance; it is its pre-installed root partition. All images you want to use in your cloud to boot instances from are provided by Glance. Glance must be deployed onto a Control Node. Glance can be made highly available by deploying it on a cluster.

There are a lot of options to configure Glance. The most important ones are explained below; for a complete reference refer to http://github.com/crowbar/crowbar/wiki/Glance--barclamp.

Image Storage: Default Storage Backend

Choose whether to use Swift or Ceph (Rados) to store the images. If you have deployed neither of these services, the images can alternatively be stored in an image file on the Control Node (File). If you have deployed Swift or Ceph, it is recommended to use it for Glance as well.

If using VMware as a hypervisor, it is recommended to use it for storing images, too (VMware). This will make starting VMware instances much faster.

Depending on the storage back-end, there are additional configuration options available:

File: Image Storage: Image Store Directory: Specify the directory to host the image file.

[Notes only slide]

Swift: Image Storage: Swift Container: Set the name of the container to use for the images in Swift.

Rados (Ceph): RADOS User for CephX Authentication: If using a SUSE Cloud internal Ceph setup, the user you specify here is created in case it does not exist. If using an external Ceph cluster, specify the user you have set up for Glance. RADOS Pool for Glance images: If using a SUSE Cloud internal Ceph setup, the pool you specify here is created in case it does not exist. If using an external Ceph cluster, specify the pool you have set up for Glance.

VMware: vCenter IP Address: IP address of the vCenter server. vCenter User name / vCenter Password: vCenter login credentials. Datastore Name: The name of the data store on the vCenter server.

SSL Support: Protocol: Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

API: Bind to All Addresses: Set this option to true to enable users to upload images to Glance. If unset, only the operator will be able to upload images.

Caching: Enable and configure image caching in this section. By default, image caching is disabled. Learn more about Glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.

Database: SQL Idle Timeout: Time after which idle database connections will be dropped.

Logging: Verbose Logging: Shows debugging output in the log files when set to true.

Glance

Notes:

Cinder

Provides (persistent) volume block storage

It adds persistent storage to an instance that will persist until deleted (independent from instances: deleting an instance does not delete the volumes attached to it)

Deploying Cinder

Cinder, the successor of Nova Volume, provides volume block storage. It adds persistent storage to an instance that will persist until deleted (contrary to ephemeral volumes that will only persist while the instance is running).

Cinder can provide volume storage by using a local file, one or more local disks, Ceph (RADOS) or network storage solutions from NetApp, EMC, or EqualLogic. Using a local file is not recommended for production systems for performance reasons.

The attributes that can be set to configure Cinder depend on the Type of Volume. The only general option is SSL Support: Protocol.

Cinder

Notes:

Neutron

Neutron provides network connectivity between interface devices managed by other OpenStack services

The default plug-in is Open vSwitch with GRE
(see the deployment guide for other options)

Deployed on a control node

Deploying Neutron

Neutron provides network connectivity between interface devices managed by other OpenStack services (most likely Nova). The service works by enabling users to create their own networks and then attach interfaces to them.

Neutron must be deployed on a Control Node.

GRE: Generic Routing Encapsulation, a tunneling protocol developed by Cisco Systems to encapsulate a wide variety of network protocols

Neutron

Notes:

Nova

Key SUSE Cloud management service

Nova is the service that determines which instance will be started where in the Cloud, based on Hypervisor needed and available resources

The Nova service consists of six different roles:

nova-multi-controller: Deployed on a control node

nova-multi-compute-hyperv / nova-multi-compute-kvm / nova-multi-compute-qemu / nova-multi-compute-vmware / nova-multi-compute-xen: Provide the hypervisors (Hyper-V, KVM, QEMU, VMware vSphere and Xen) and tools needed to manage the instances: Deployed on compute nodes

Deploying Nova

Nova provides key services for managing the SUSE Cloud and sets up the Compute Nodes. SUSE Cloud currently supports KVM, Xen, Microsoft Hyper-V, and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for Nova:

Scheduler Options: Virtual RAM to Physical RAM allocation ratio: Set the "overcommit ratio" for RAM for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment. Changing this value is not recommended.

Scheduler Options: Virtual CPU to Physical CPU allocation ratio: Set the "overcommit ratio" for CPUs for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.

Live Migration Support: Enable Libvirt Migration: Allows moving KVM and Xen instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). Useful when a Compute Node needs to be shut down or rebooted for maintenance or when the load of the Compute Node is very high. Instances can be moved while running (Live Migration).

Nova

Notes:

Horizon (OpenStack Dashboard)

The Web interface for the Cloud administrator and Cloud users

Cloud administrators and users interact with the Cloud using this Web interface provided by Horizon for tasks including

Creating projects and users

Administering images

Starting and stopping instances

Deployed on a control node

Deploying Horizon (OpenStack Dashboard)

The last service that needs to be deployed is Horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. Horizon should be installed on a Control Node. To make Horizon highly available, deploy it on a cluster.

Horizon (OpenStack Dashboard)

Notes:

Lab 2.3: Deploy Cloud Services

Summary: In this exercise, you will deploy the Cloud services to your Cloud nodes

Special Instructions: (none)

Duration: 45 min.

[Lab setup diagram: the admin, controller, and comp-node VMs must be running]

Lab Notes:

SUSE Cloud 4 Administration
Section 3: Administer and Use SUSE Cloud 4

Notes:

Objectives

Create Virtual Machine Images

Import Images into the SUSE Cloud Environment

Administer and Use SUSE Cloud

Notes:

Create Virtual Machine Images

Notes:

Create a Virtual Machine Image

Use SUSE Studio

Build a KVM image on a local machine

Images are used as a template from which a virtual machine can be started. In other words, changes within the running virtual machine do not change the image.

SUSE Studio (1/9)

susestudio.com

Log in using one of several accounts, such as Novell, Google, Twitter

Initial screen:

Notes:

SUSE Studio (2/9)

Click Create new appliance to get started, then choose a template

Notes:

SUSE Studio (3/9)

Work through the tabs from left to right

Notes:

SUSE Studio (4/9)

Add software as needed

The software you add depends on what you want to use your virtual machine instances for.

A recommendation would be to add the iproute2 package, as the tools in it (such as ip) help to debug networking issues.

SUSE Studio (5/9)

Add users and configure different aspects of your virtual machine

Notes:

SUSE Studio (6/9)

Configure RAM, disk image size, as needed

Notes:

SUSE Studio (7/9)

Add a line to avoid a dialog when the system boots

In SUSE Cloud, you cannot interact with the virtual machine during the boot process; however, images from SUSE Studio include a dialog that you would need to confirm to complete the boot process. The line shown above avoids this.

SUSE Studio (8/9)

Make sure to select qcow2 as the image format

Notes:

SUSE Studio (9/9)

Click Build to create your image, then download it to your local drive

The download link appears as soon as the build is complete.

Images are deleted after one week, but you can rebuild them again any time.

Build a KVM Image on a Local Machine

On a SLES/SLED 11/12 or openSUSE machine, install KVM and any needed tools

Use vm-install to install your virtual machine according to your needs

Make sure the image format is qcow2

Make sure networking is set to DHCP

Notes:
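If your tooling produced the disk in another format, qemu-img can inspect and convert it. A minimal sketch (file names are examples):

qemu-img info sles11.raw
qemu-img convert -O qcow2 sles11.raw sles11.qcow2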

Import the Image into the Cloud Environment

Notes:

Import the Image into Glance

Copy the qcow2 image to the node running Glance

Source the .openrc file:

source /root/.openrc

Enter a command similar to the following to import the image and to make it available for use:

glance image-create --name="SLES12" \
  --progress --is-public=True \
  --container-format=bare \
  --property architecture=x86_64 \
  --property hypervisor_type=kvm \
  --disk-format=qcow2 < myimage.qcow2

Images can also be uploaded via the SUSE Cloud dashboard. But as the Dashboard comes with some limitations with regards to image upload and modification of properties, it is recommended to use the glance command line tool.

The SUSE Cloud Admin User Guide lists the commonly used glance options, glance --help lists them all.

It is not possible to update the image contents; to update the contents you have to delete the image and upload the changed version.

It is, however, possible to make changes to an instance and then take a snapshot of that instance. The snapshot can then be used as an image, and if the original image is no longer needed, you can delete it.
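A sketch of this snapshot-based workflow with the command line clients (instance and image names are examples):

nova image-create myinstance myinstance-snap   # take a snapshot of the running instance
glance image-list                              # the snapshot appears as a new image
glance image-delete <old-image-id>             # remove the original image if no longer needed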

SUSE Cloud Dashboard

Access the SUSE Cloud Dashboard on the machine running Horizon (nova_dashboard-server role)

Default user name: admin; default password: crowbar

Notes:

SUSE Cloud Web Interface: Images

Images appear in the Web interface under Project > Compute > Images

Click Launch to create an instance from an image

Notes:

Update Images

Images cannot be updated directly

Two approaches:

Modify the disk image you imported as an image, delete the image in SUSE Cloud, import the modified image

Launch an instance from an image, make your modifications in the instance, take a snapshot of the instance, create an image from the snapshot

Notes:

SUSE Cloud Web Interface:
Launching an Instance

Work through the tabs from left to right

Make sure that the Flavor suits your image: the root disk size cannot be smaller than the size of your image. If you select a Flavor that is too small, you get an error message when starting the machine (and unfortunately the error message does not make clear that the size of the Flavor does not fit your image).

Note: Be careful not to inadvertently click outside the white dialog in the browser; doing so closes the dialog without saving any settings.

SUSE Cloud Web Interface:
Launching an Instance

The Access & Security tab allows you to select firewall rules and a key pair for ssh

Creating the key pair and setting the firewall rules will be covered in the next objective.

You can also create and assign firewall rules later, when the instance is already running. You will have to do that to access the instances with ssh, for instance, as network access to instances is blocked by default.

SUSE Cloud Web Interface:
Launching an Instance

You need to add the fixed network; floating IPs are associated with the instance later

Click Launch to actually start the instance. It will take a moment to copy the image to the compute node. The instances dialog appears (see next slide).

On a compute node with KVM hypervisor, the disk images are located in /var/lib/nova/instances/instance_ID/.

SUSE Cloud Web Interface: Instances

Your instance appears in the list of instances with the name you gave it in the Launch Instance dialog

In the More drop-down menu, you can perform various actions, such as associate a Floating IP, reboot or shutdown the instance, etc.

Terminating an instance means it is shut down and its disk files are deleted from the compute node it runs on, including any ephemeral disks.

SUSE Cloud Web Interface: Instances

Click on an Instance Name to view more information on the instance

Notes:

SUSE Cloud Web Interface: Instances

The Console tab allows access to the virtual machine console

This way of access to a virtual machine is similar to accessing a virtual machine via virt-manager in Linux or the vCenter console in VMware.

Access the Instance from the Controller

To access the virtual machine from the Controller (or, more exactly, the machine running Neutron) with ssh, assign it a floating IP:

Instances > More > Associate Floating IP

The Floating IP is the address that allows access to the virtual machine instance via the network, for instance from the controller.
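The association can also be sketched with the nova command line client (instance name and IP are examples):

nova floating-ip-create                            # allocate a floating IP from the pool
nova add-floating-ip myinstance 192.168.126.130    # associate it with the instance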

Lab 3.1: Deploy and Use a VM Image Created in SUSE Studio

Summary: In this exercise, you will have a look at https://susestudio.com, import a virtual machine into Glance, and launch it.

Special Instructions: (none)

Duration: 20 min.

[Lab setup diagram: the admin, controller, and comp-node VMs must be running]

Lab Notes:

Administer and Use SUSE Cloud

Notes:

Manage Projects

Projects can be managed independently from each other

Users can be assigned to one or several projects

The SUSE Cloud administrator creates projects and user accounts

Notes:
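Projects and users can also be created with the keystone command line client. A sketch (names and password are examples; exact flags may differ by client version):

keystone tenant-create --name myproject
keystone user-create --name geeko --pass secret
keystone user-role-add --user geeko --role Member --tenant myproject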

Manage Projects

Create a project

Notes:

Manage Projects

Create a project

Projects can be temporarily disabled (and later enabled again) or completely deleted.

By selecting More > Edit Project for a specific project, you can change its properties, add users to or remove users from the project, or set quotas for various items, such as VCPUs or instances.

You can also change the role of a user. However, if you change the role of a user to Admin, he will have Admin permissions across the Cloud, not just the current project.

Manage Users

Create a user account

If you assign a user the Admin role, he is admin across all projects.

Manage Users

Create a user account

User accounts can be temporarily disabled (and later re-enabled) or deleted completely by selecting the respective entries from the More drop-down menu.

Manage Users

Add an existing user to a project

In this dialog, you can also remove users from a project by clicking the icon on the right of a Project Member.

Manage Flavors

Flavors define the compute, memory, and storage capacity of nova computing instances

Think of it as the hardware configuration of a VM

Admin > System Panel > Flavors

Notes:
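Flavors can also be listed and created with the nova command line client. A sketch (the name and sizes are examples):

nova flavor-list
nova flavor-create m1.custom auto 1024 10 2   # name, ID (auto), RAM in MB, disk in GB, VCPUs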

Manage Flavors

Click Edit Flavor to change settings
(In this screenshot, m1.small was changed from the default - less RAM, ephemeral disk, ID)

Ephemeral disks are deleted when the instance is terminated.

Manage Access & Security

The Access & Security dialog lets you

Manage security groups (configuration of packet filter rules)

Manage key pairs (needed to access instances via ssh when they have no root password set)

Allocate or release floating IPs

Access the API

Download RC files

Notes:
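As a sketch, a packet filter rule that allows ssh can also be added with the nova command line client (the security group name and CIDR are examples):

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0   # allow ssh from anywhere
nova secgroup-list-rules default                     # verify the rule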

Manage Access & Security

The Access & Security dialog lets you manage security groups, key pairs, allocate or release floating IPs, access the API, and download RC files

Notes:

Manage Access & Security

Manage rules

Notes:

Manage Access & Security

Manage rules

The parameters that can be set differ depending on your selection from the drop-down menu.

Manage Access & Security

Manage rules

Notes:

Manage Access & Security

Manage keys

Notes:

Manage Access & Security

Manage keys

Notes:
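Key pairs can also be handled on the command line. A sketch (the key name is an example):

nova keypair-add mykey > mykey.pem    # create the key pair and save the private key
chmod 600 mykey.pem
ssh -i mykey.pem root@<floating-ip>   # log in to an instance booted with this key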

Manage Access & Security

API access: OpenStack RC file

You can save the RC file and copy it to a node as needed. By sourcing it in a terminal various variables are set and you are prompted for the Cloud administrator password. This is needed to upload images to Glance, for instance.

On the controller it is available by default as /root/.openrc.

The information on this page tells you how to access the various APIs.
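The RC file essentially exports environment variables that the OpenStack command line clients read. A sketch of typical contents (values are examples):

export OS_USERNAME=admin
export OS_PASSWORD=crowbar
export OS_TENANT_NAME=openstack
export OS_AUTH_URL=http://<controller>:5000/v2.0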

Manage Volumes

Project > Compute > Volumes

Notes:

Manage Volumes

Volumes can be attached to instances

Notes:

Manage Volumes

Volumes can be attached to instances

From within the instance, you can now partition the volume, create filesystems within the partitions, and mount them.

The volume remains until it is specifically deleted. When an instance is terminated, its disks get deleted, but the volume remains. You can attach it to another instance as needed.
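A sketch of the same steps with the command line clients (names and device are examples):

cinder create --display-name myvolume 1              # create a 1 GB volume
nova volume-attach myinstance <volume-id> /dev/vdb
nova volume-detach myinstance <volume-id>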

Manage Snapshots

You can take snapshots of instances

Notes:

Lab 3.2: Create a VM Image Locally, then Deploy and Use It

Summary: In this exercise, you will create a virtual machine image on your virtualization host, import it into Glance, create a project and a user, and launch the image.

Special Instructions: (none)

Duration: 10 min.

[Lab setup diagram: the admin, controller, and comp-node VMs must be running]

Lab Notes:

SUSE Cloud 4 Administration
Section 4: Understand Cloud Networking

Notes:

Objectives

Understand Software Defined Networking (SDN)

Understand Network Namespaces

Understand Open vSwitch

Understand GRE

Understand Bridging

Understand iptables

SUSE Cloud networking is the result of a complex interaction of tools and technologies.

It is not possible to cover all the above in depth in this course.

The purpose of this section is to give you a basic understanding of SUSE Cloud (or OpenStack, for that matter) networking and to resolve some of the networking mysteries you might encounter when beginning to work with SUSE Cloud, such as: Why am I able to ping the floating IP of a VM without being able to see the IP in any output of the ip command, inside or outside of any VM?

Understand Software Defined Networking

Notes:

Software Defined Networking (SDN)

Networking in SUSE Cloud is highly dynamic and needs to be managed with software

Software-Defined Networking (SDN) Definition: "The physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices." (Source: www.opennetworking.org)

Notes:

SUSE Cloud and SDN

SUSE Cloud / OpenStack employs and combines several networking technologies available in Linux to meet the needs of a Cloud environment:

Nodes being added or removed

Virtual machines being added, started and shut down

As a matter of fact, networking within the Cloud is pretty complex!

The purpose of this Section is to help you understand what is going on

In normal operation, you don't need to type the commands shown in this section; SUSE Cloud 4 does the setup for you

Notes:

Understand Network Namespaces

Notes:

Network Namespace

A network namespace is logically another copy of the network stack, with its own routes, firewall rules, and network devices.

Notes:

Network Namespace

Without network namespaces, all processes see the same network information

[Diagram: Process A and Process B share the same network stack, with interface eth0 and IP 10.11.12.13]

Notes:

Network Namespace

With network namespaces, processes can be limited to separate IP addresses, routing tables, etc, independent from others

[Diagram: Process A and Process B each run in their own namespace (Namespace 1 with IP 10.12.13.14, Namespace 2 with IP 10.14.15.16), while Process C runs in the default namespace; the namespaces connect through the bridge br0 to eth0]

The connection from the namespace to the physical interface is accomplished with veth pairs.

ip netns commands

View existing namespaces:
ip netns

Create a new namespace:
ip netns add name

Execute commands within the context of a namespace:
ip netns exec name command
(See the notes for examples)

Example:

server1:~ # ip netns add namespace1
server1:~ # ip netns exec namespace1 ip a
1: lo: mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
server1:~ # ip netns exec namespace1 ip link set lo up
server1:~ # ip netns exec namespace1 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

ip netns commands

To link the namespaces, you string a virtual network cable by creating a pair of virtual interfaces (veth). Syntax: ip link add veth-name1 type veth peer name veth-name2

Assign one side to a namespace: ip link set veth-name2 netns ns-name (See the notes for examples)

Example:

server1:~ # ip link add veth-defaultns type veth peer name veth-namespace1
server1:~ # ip a
...
10: veth-namespace1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 9a:a2:8b:48:b2:3d brd ff:ff:ff:ff:ff:ff
11: veth-defaultns: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
server1:~ # ip link set veth-namespace1 netns namespace1
server1:~ # ip a
...
11: veth-defaultns: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

The interface disappears from the default namespace and appears in the namespace1 namespace:

server1:~ # ip netns exec namespace1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
...
10: veth-namespace1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 9a:a2:8b:48:b2:3d brd ff:ff:ff:ff:ff:ff

ip netns commands

Assign IP addresses to the interfaces:
ip address add ... dev veth-name1
ip netns exec ns-name ip address add ... dev veth-name2

Set the interface state to up:
ip link set veth-name1 up
ip netns exec ns-name ip link set veth-name2 up

The veth-x interfaces are now able to reach each other (See the notes for examples)

Example:

server1:~ # ip addr add 10.0.0.1/24 brd + dev veth-defaultns
server1:~ # ip netns exec namespace1 ip addr add 10.0.0.2/24 brd + dev veth-namespace1
server1:~ # ip link set dev veth-defaultns up
server1:~ # ip netns exec namespace1 ip link set dev veth-namespace1 up
server1:~ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms
server1:~ # ip netns exec namespace1 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms

ip netns commands

To reach other networks:
Turn on routing: echo 1 > /proc/sys/net/ipv4/ip_forward

Set a default route in the namespace (pointing to the veth-peer IP address): ip netns exec ns-name ip route add default via defaultgwip

Add an iptables ruleiptables -t nat -A POSTROUTING -o interface -j MASQUERADE

You could also create a bridge and add the physical and the virtual interface to it to connect the namespace to the physical network

Example:

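A minimal sketch, continuing the previous veth example (the external interface name eth0 and the external host IP 192.168.1.254 are assumptions):

server1:~ # echo 1 > /proc/sys/net/ipv4/ip_forward
server1:~ # ip netns exec namespace1 ip route add default via 10.0.0.1
server1:~ # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
server1:~ # ip netns exec namespace1 ping 192.168.1.254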

Note: You could also assign a physical interface to a specific namespace, but then it would no longer be available in the default namespace.

Lab 4.1: View Network Namespaces on the Controller Node

Summary: In this exercise, you will have a look at how namespaces are used in SUSE Cloud.

Special Instructions: (none)

Duration: 10 min.

comp-node: Must be running

controller: Must be running

admin: Must be running

Lab Notes:

View the namespaces

View the host IP configuration

View the IP configuration within the namespaces

Relate the IP configuration to the SUSE Cloud networks

Understand Open vSwitch

Notes:

Open vSwitch

What is Open vSwitch?

Open vSwitch is a multilayer software switch licensed under the open source Apache 2 license. Our goal is to implement a production quality switch platform that supports standard management interfaces and opens the forwarding functions to programmatic extension and control.(Source: Open vSwitch documentation)

It supports OpenFlow and can be used together with hypervisors to connect virtual machines, on the same or on different hosts, with other machines or the outside world across a network

OpenFlow: OpenFlow allows remote administration of a switch's packet forwarding tables, by adding, modifying and removing packet matching rules and actions. This way, routing decisions can be made periodically or ad hoc by the controller and translated into rules and actions with a configurable lifespan, which are then deployed to a switch's flow table, leaving the actual forwarding of matched packets to the switch at wire speed for the duration of those rules. (Source: wikipedia.org)

Open vSwitch

Core components

ovs-vswitchd

ovsdb-server

ovs kernel module

Tools:

ovs-dpctl

ovs-vsctl

ovs-appctl

ovs-ofctl

The main components of this distribution are: ovs-vswitchd, a daemon that implements the switch, along with a companion Linux kernel module for flow-based switching.

ovsdb-server, a lightweight database server that ovs-vswitchd queries to obtain its configuration.

ovs-dpctl, a tool for configuring the switch kernel module.

ovs-vsctl, a utility for querying and updating the configuration of ovs-vswitchd.

ovs-appctl, a utility that sends commands to running Open vSwitch daemons.

Open vSwitch also provides some tools: ovs-ofctl, a utility for querying and controlling OpenFlow switches and controllers.

ovs-pki, a utility for creating and managing the public-key infrastructure for OpenFlow switches.

ovs-testcontroller, a simple OpenFlow controller that may be useful for testing (though not for production).

A patch to tcpdump that enables it to parse OpenFlow messages.

(Source: Open vSwitch documentation)

ovs-vsctl

ovs-vsctl: ovs-vswitchd management utility. Commands (enter ovs-vsctl -h for a complete list):

Create an Open vSwitch bridge: ovs-vsctl add-br bridgename
Example: ovs-vsctl add-br ovs-br1

Show existing bridges: ovs-vsctl show

Delete an Open vSwitch bridge: ovs-vsctl del-br bridgename

[Diagram: OVS bridge ovs-br1]

Notes:

ovs-vsctl

Open vSwitch control: Add an interface to the bridge: ovs-vsctl add-port bridgename interfacename

Example: ovs-vsctl add-port ovs-br1 if1

Remove IP address from physical interfaces connected to the vSwitch bridge, and add IP address to the vSwitch bridge, just as you would do with Linux bridges

Show information from the database: ovs-vsctl list databasetable (See the notes for an example)

[Diagram: interface if1 attached to OVS bridge ovs-br1]

Notes:
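Example (a sketch; the interface name eth1 and the IP address are assumptions):

server1:~ # ovs-vsctl add-br ovs-br1
server1:~ # ovs-vsctl add-port ovs-br1 eth1
server1:~ # ip addr flush dev eth1
server1:~ # ip addr add 192.168.1.10/24 brd + dev ovs-br1
server1:~ # ip link set ovs-br1 up
server1:~ # ovs-vsctl show
server1:~ # ovs-vsctl list bridge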

ovs-ofctl

Open vSwitch control: View OpenFlow flows: ovs-ofctl dump-flows bridgename (see the example below)

The default flow entry makes Open vSwitch act like a normal layer 2 switch, forwarding traffic between its ports. Using the ovs-ofctl command, you can add, modify, and remove OpenFlow entries
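Example (a sketch; the bridge name ovs-br1 and the flow match are assumptions):

server1:~ # ovs-ofctl dump-flows ovs-br1
server1:~ # ovs-ofctl add-flow ovs-br1 "priority=100,in_port=1,actions=drop"
server1:~ # ovs-ofctl del-flows ovs-br1 "in_port=1"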

Understand GRE

Notes:

GRE

Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco Systems that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links over an Internet Protocol internetwork.

As an example, GRE tunnels can be set up manually with the ip tunnel command (see the notes for a sketch)

Notes:
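Example (a sketch of a plain GRE tunnel between two hosts; all IP addresses are assumptions, and the mirror-image commands must be run on the remote host):

server1:~ # ip tunnel add gre1 mode gre local 192.168.1.10 remote 192.168.1.20
server1:~ # ip link set gre1 up
server1:~ # ip addr add 10.99.0.1/30 dev gre1
(on the remote host: the same commands with local and remote swapped, and 10.99.0.2/30)
server1:~ # ping 10.99.0.2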

GRE

In SUSE Cloud / OpenStack, GRE is used in conjunction with Open vSwitch to connect the virtual machines

This separates the traffic between VMs from the management traffic

[Diagram: Controller Node and Compute Node, each with an OVS bridge br-tunnel (ports patch-int and gre...), connected to each other via the GRE tunnel over eth0.400 (IPs .130.10 and .130.11)]

Note: A graphic on a later slide will show the relationship to other networking components

In SUSE Cloud, when using Open vSwitch with GRE, the Nova Public/Nova Floating IP packets are carried within GRE packets and thus are not directly visible on the wire. However, unlike in a typical VPN, GRE does not encrypt the packets.

SUSE Cloud / OpenStack does that for you, but to give you an idea, in Open vSwitch, you would first create a bridge (on both machines hosting the GRE tunnel endpoints):

ovs-vsctl add-br bridgename

Then add the GRE interfaces (again on both sides):

ovs-vsctl add-port bridgename grename -- set interface grename type=gre options:remote_ip=remote-ip

To test, start VMs on each hypervisor, attach their interfaces to the bridge, assign them IP addresses inside the VMs different from the IPs used outside, and find out whether the VMs can ping each other.

Understand Bridging

Notes:

Linux Bridge

A Linux bridge acts as a layer 2 switch

It differs from Open vSwitch bridges in the following ways:

No support for OpenFlow

Interfaces and the bridge itself can be controlled with iptables packet filter rules

Traffic through the bridge can be controlled with ebtables rules

Notes:

The brctl command

Create a bridge: brctl addbr bridgename
Example: brctl addbr br0

Add an interface: brctl addif bridgename interfacename
Example: brctl addif br0 if0

[Diagram: Linux bridge br0; after addif, interface if0 is attached to br0]

Notes:

The brctl command

Bring the bridge up: ip link set bridgename up

Add an IP Address:

ip addr add IP brd + dev bridgename

View bridges and their interfaces: brctl show

Remove an interface: brctl delif bridgename interfacename

Remove the bridge: brctl delbr bridgename

(See the notes for an example)

Notes:
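Example (a sketch; the interface name eth1 and the IP address are assumptions):

server1:~ # brctl addbr br0
server1:~ # brctl addif br0 eth1
server1:~ # ip link set br0 up
server1:~ # ip addr add 192.168.1.10/24 brd + dev br0
server1:~ # brctl show
server1:~ # brctl delif br0 eth1
server1:~ # ip link set br0 down
server1:~ # brctl delbr br0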

Understand iptables

Notes:

iptables

Packet filtering is done by the kernel. Filtering rules are set with the iptables command (see the notes for examples)

Filter rules can be based on

Source and destination IP address

Source and destination port

Protocol

TCP flags

Connection state

Notes:
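For illustration, rules based on these criteria might look like this (a sketch; the addresses and ports are assumptions):

server1:~ # iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 22 -j ACCEPT
server1:~ # iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
server1:~ # iptables -A INPUT -p tcp --syn -j DROP
server1:~ # iptables -L -n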

iptables

In addition to filtering packets, the iptables command is used to

Change source or destination IP addresses or ports (network address translation, NAT)

This is used in SUSE Cloud to connect the floating IPs to the virtual machines (see the notes for a simplified example)

Notes:
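As a simplified illustration (the addresses match the ping example later in this section; the actual rules set up by Neutron live in the router namespace and use additional chains):

server1:~ # iptables -t nat -A PREROUTING -d 192.168.126.130 -j DNAT --to-destination 192.168.123.7
server1:~ # iptables -t nat -A POSTROUTING -s 192.168.123.7 -j SNAT --to-source 192.168.126.130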

Understand SUSE Cloud Networking

Notes:

Default Network Setup: Overview

[Diagram: default network setup; the following points describe its networks]

Admin Network: connects the Admin Server and the nodes

Connects the virtual machines

Separate network for storage nodes

Access to cloud services (such as the Dashboard), .2 - .127

Access to cloud instances (floating IPs), .129 - .254

In the above diagram the Admin Network is physically separate (dual mode), but the network ranges are the same when there is only one NIC per node (single mode).

Storage network and storage nodes are not covered in this course.

SUSE Cloud Networking

[Diagram: Controller Node (interfaces eth0, eth0.300, eth0.400) and Compute Node (eth0, eth0.400) connected via the Management Network (.124.81/.124.82 on eth0) and the tunnel network (.130.10/.130.11 on eth0.400). On the Controller Node, the OVS bridge br-public (.126.2) connects to the router namespace via the qg... port (.126.129/24 plus floating IPs .126.130/32 and .126.131/32; DNAT iptables rules map 126.130 to 123.7); the qr... port (.123.1) connects the namespace to the OVS integration bridge br-int, which is linked via the patch-tun/patch-int ports to the OVS bridge br-tunnel with its gre... port. On the Compute Node, br-tunnel and br-int (both OVS bridges) connect via qvo.../qvb... veth pairs to per-VM Linux bridges qbr..., whose tap... interfaces attach to VM1 (eth0 .123.7) and VM2 (eth0 .123.8).]

The above shows the network components involved in the connection between a controller node (the node running Neutron) and a VM on a compute node.

Network mode: Single

Networking Barclamp deployed on the Controller Node

SUSE Cloud Networking

[Diagram: the same Controller Node and Compute Node networking components as in the previous graphic, this time tracing ping 192.168.126.130 (a ping to a floating IP) through the stack]

Ping to floating IP 192.168.126.130 from a terminal on the controller node:

br-public has the IP 192.168.126.2, and thus there is a route to the 192.168.126.0/24 network via this bridge.

The port qg... is located in the router namespace and has the destination IP (.130), but the destination IP address of the packet gets changed to 192.168.123.7 by an iptables DNAT rule.

The routing entry within the router namespace sends 192.168.123.0/24 packets via the qr... port with IP 192.168.123.1.

The packet traverses the integration bridge br-int, enters the br-tunnel bridge, gets encapsulated in 192.168.130.0/24 packets, and leaves the host via the VLAN with ID 400.

Encapsulated in these packets, it reaches the destination compute node, where it gets unpacked.

On the compute node, it traverses the br-tunnel bridge and the br-int integration bridge (both Open vSwitch bridges), leaves br-int via a qvo... port, and enters a qbr... Linux bridge via the matching qvb... port.

It leaves the Linux bridge via a tap interface and reaches its destination VM on its eth0 interface.

A Linux bridge is created for each VM, with additional qvo... ports attached to br-int.

Note: The ... (ellipses) in the above graphics represent unique IDs in the actual system.

Note: There is a qdhcp namespace on the Controller Node which is not included in the above graphic.
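To inspect this on a running system, you can look into the router namespace (a sketch; as above, the ... stands for the unique ID in the actual namespace name):

controller:~ # ip netns
qrouter-...
qdhcp-...
controller:~ # ip netns exec qrouter-... ip a
controller:~ # ip netns exec qrouter-... iptables -t nat -L -n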

Default Network Setup: Detail

Notes:

Lab 4.2: Explore the SUSE Cloud Network Configuration

Summary: In this exercise, you will view the current Open vSwitch and Linux bridge configuration on the Controller and Compute Nodes.

Special Instructions: (none)

Duration: 10 min.

comp-node: Must be running

controller: Must be running

admin: Must be running

Lab Notes:

SUSE Cloud 4: There is More to It

Notes:

SUSE Cloud 4: There is More to Learn

Command-line tools such as nova, neutron, and cinder

HA

SUSE Manager integration

Heat

Ceilometer

Network nodes

Storage nodes: Ceph, Swift

Some of these topics are covered in the SUSE Cloud 4 Advanced Administration course; see https://www.suse.com/training/ for details.

Unpublished Work of SUSE. All Rights Reserved.This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE.
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE.
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.

General DisclaimerThis document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.
