
Cumulus VX for a POC in pre-sales

Using Cumulus VX to create a virtual POC environment.

Contents

Cumulus VX in pre-sales engagement

Introduction

Cumulus VX in a POC

Intended Audience

Installation and getting started

Reference topology

Prerequisites

Using the environment

Building an IP-Fabric

Topology diagram

Assignment 1: Backbone Interface configuration

Assignment 2: BGP Configuration

Assignment 3: Server access

Assignment 4: Automation

Additional information

Complex configuration and topology

Documentation

Training

Version 1.0.0

August 20, 2016

About Cumulus Networks

Unleash the power of Open Networking with Cumulus Networks. Founded by veteran networking engineers from Cisco and VMware, Cumulus Networks makes the first Linux operating system for networking hardware and fills a critical gap in realizing the true promise of the software-defined data center. Just as Linux completely transformed the economics and innovation on the server side of the data center, Cumulus Linux is doing the same for the network. It is radically reducing the costs and complexities of operating modern data center networks for service providers and businesses of all sizes. Cumulus Networks has received venture funding from Andreessen Horowitz, Battery Ventures, Sequoia Capital, Peter Wagner and four of the original VMware founders. For more information visit cumulusnetworks.com or @cumulusnetworks.

©2016 Cumulus Networks. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the “Marks”) are trademarks and service marks of

Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior written consent of Cumulus Networks. The registered trademark Linux® is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis. All other marks are used under fair use or license from their respective owners.


Cumulus VX in pre-sales engagement

Introduction

Cumulus Networks has released a virtual machine version of its network operating system. Customers and other interested people can use this virtual machine to test the OS. Since the release of the VM, the community has been using it for experimentation and for developing tools and automation.

With the release of Cumulus Linux 3.0, Cumulus VX has become a separate platform with the same features as the NOS running on an Open Networking switch. The VM is still community supported, but the feature parity makes it a platform for tests that behave the same as on a hardware switch (with the exception of hardware forwarding).

Cumulus VX in a POC

For a proof of concept, demo hardware is in most cases shipped to a customer and the necessary tests are done before the products are ordered. While a single VM doesn't provide much additional value, a complete topology can be built with Cumulus VX. Using a topology of multiple Cumulus Linux VMs and servers, customers and SEs can demo and test the environment without being limited by physical hardware.

In this session we will explain, hands-on, how to use a Cumulus VX topology in a demo or POC situation.

Intended Audience

This hand-out is developed for Cumulus partner engineers who are involved in pre-sales.


Installation and getting started

Reference topology

Typical datacenter networks are designed around spine/leaf topologies. These topologies scale horizontally and allow for minimal oversubscription on the backbone links. Traffic patterns in datacenters are mostly east/west, and low oversubscription is beneficial for these patterns.

The reference topology shown above has been designed specifically for these types of demo or POC situations. It can be used to test both L2 and L3 configurations, as well as overlay networks. By adding servers to the topology, interaction with applications can also be tested.

Building such a design by hand from separate VMs can be challenging in any environment. With Vagrant, a topology like the one above can be created automatically. This is what Cumulus has done, and that setup will be used for this session.

Prerequisites

The Vagrant topology should be cloned from the Cumulus Git repository: https://github.com/CumulusNetworks/cldemo-vagrant

git clone https://github.com/cumulusnetworks/cldemo-vagrant

It can then be run on a Linux, OS X or Windows system with sufficient memory (8GB minimum, 16GB preferred). To run the environment, Git, Vagrant and VirtualBox need to be installed. Detailed documentation can be found in the aforementioned Git repository, including other ways of running the topology, such as on libvirt/KVM.
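As an example, on an Ubuntu workstation the prerequisites can be installed from the distribution repositories (note that the packaged Vagrant and VirtualBox versions may be older than what the cldemo-vagrant README recommends; in that case install them from the upstream download pages instead):

sudo apt-get update
sudo apt-get install git virtualbox vagrant

You can verify the installation afterwards with "git --version", "vagrant --version" and "vboxmanage --version".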


Using the environment

Once the topology is cloned, it can be started with the following commands:

cd cldemo-vagrant
vagrant up

You should then see the environment booting and provisioning. In the default VirtualBox environment this is done serially, so it can take a while. When the boot is done, you can check the status of the environment as follows:

demouser@cumulus-vx:~/cumulus/cldemo-vagrant$ vagrant status
Current machine states:

oob-mgmt-server           running (libvirt)
oob-mgmt-switch           running (libvirt)
exit02                    running (libvirt)
exit01                    running (libvirt)
spine02                   running (libvirt)
spine01                   running (libvirt)
leaf04                    running (libvirt)
leaf02                    running (libvirt)
leaf03                    running (libvirt)
leaf01                    running (libvirt)
edge01                    running (libvirt)
server01                  running (libvirt)
server03                  running (libvirt)
server02                  running (libvirt)
server04                  running (libvirt)
internet                  running (libvirt)

The virtual topology is managed entirely from an "out-of-band" management server. All switches and servers can be reached from this machine. The first step is to set up an SSH session to it and change to the cumulus user:

vagrant ssh oob-mgmt-server
sudo su - cumulus

You can then check the connectivity to a leaf switch:

cumulus@oob-mgmt-server:~$ ssh leaf01

<motd>

Last login: Tue Aug 23 08:25:00 2016 from 192.168.0.254

cumulus@leaf01:~$


Building an IP-Fabric

Topology diagram

When logged in to the oob-mgmt-server you have access to all the hosts shown in the following diagram. The switches and servers have been provisioned with multiple interfaces so that different logical configurations can be built. Since this is a virtual environment, only the interfaces that are actually used have been provisioned. As a result, a leaf switch has, for example, 10 ports whose names match their use in the topology.

As explained in the previous chapter, you will be working from the oob-mgmt-server. This is a virtual Ubuntu machine with the necessary tools installed to function as a management server. The SSH key of the cumulus user has been provisioned on all switches and servers to allow password-less login.

The servers are Ubuntu machines that can run applications or function as test hosts in the network. The "edge01" server is connected so that it can be used for load-balancer or firewall functionality and forward traffic to the internal network.


Assignment 1: Backbone Interface configuration

We will build an IP fabric based on BGP unnumbered. The first step is configuring the backbone interfaces of spine01, leaf01 and leaf02.

Log in to these switches over SSH to change the interfaces configuration file. If you open the interfaces configuration (/etc/network/interfaces) in your favorite editor (don't forget sudo), you will see an almost empty configuration.

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

For each relevant backbone interface (refer to the diagram), add two lines, and set an IP address on the loopback interface that is unique across all hosts.

auto lo
iface lo inet loopback
    address 10.0.0.1/32

auto swpX
iface swpX
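For example, on leaf01, whose backbone uplinks towards the spines are swp51 and swp52 (as also visible in the interface output below), the completed file could look like this (the loopback address is illustrative; pick a unique one per switch):

auto lo
iface lo inet loopback
    address 10.0.0.1/32

auto swp51
iface swp51

auto swp52
iface swp52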

Save the file, exit the editor and reload the interface configuration:

sudo ifreload -a

When you have reloaded the interfaces, you should see the newly added interfaces as UP with an IPv6 link-local address:

cumulus@leaf01:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.0.1/32 scope global lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: swp51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 44:38:39:00:00:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4638:39ff:fe00:5b/64 scope link
       valid_lft forever preferred_lft forever
13: swp52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 44:38:39:00:00:2a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4638:39ff:fe00:2a/64 scope link
       valid_lft forever preferred_lft forever

Using the IPv6 link-local addresses, you can now test the connectivity between two of the switches to verify that it is as expected.

cumulus@spine01:~$ ping6 -I swp1 fe80::4638:39ff:fe00:5b -c 4
PING fe80::4638:39ff:fe00:5b(fe80::4638:39ff:fe00:5b) from fe80::4638:39ff:fe00:5c swp1: 56 data bytes
64 bytes from fe80::4638:39ff:fe00:5b: icmp_seq=1 ttl=64 time=1.40 ms
64 bytes from fe80::4638:39ff:fe00:5b: icmp_seq=2 ttl=64 time=0.739 ms
64 bytes from fe80::4638:39ff:fe00:5b: icmp_seq=3 ttl=64 time=0.713 ms
64 bytes from fe80::4638:39ff:fe00:5b: icmp_seq=4 ttl=64 time=0.976 ms

--- fe80::4638:39ff:fe00:5b ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3009ms
rtt min/avg/max/mdev = 0.713/0.958/1.407/0.280 ms

When leaf01, leaf02 and spine01 are configured according to the above steps, we can move on to the next assignment.


Assignment 2: BGP Configuration

Routing in Cumulus Linux is done using the Quagga routing suite. With the interface configuration from the previous assignment in place, we will now configure BGP unnumbered on the leaf01, leaf02 and spine01 switches.

The first step is to enable the daemons by editing the file /etc/quagga/daemons. Set the zebra and bgpd daemons to "yes".

zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no

Save and close the file, then start the daemons:

cumulus@leaf01:~$ sudo service quagga start

You can check if the daemons are started correctly:

cumulus@leaf01:~$ sudo service quagga status

quagga.service - Cumulus Linux Quagga
   Loaded: loaded (/lib/systemd/system/quagga.service; enabled)
   Active: active (running) since Tue 2016-08-23 20:20:34 UTC; 13s ago
  Process: 792 ExecStop=/usr/lib/quagga/quagga stop (code=exited, status=0/SUCCESS)
  Process: 2138 ExecStart=/usr/lib/quagga/quagga start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/quagga.service
           ├─2154 /usr/lib/quagga/zebra -s 90000000 --daemon -A 127.0.0.1
           ├─2161 /usr/lib/quagga/bgpd --daemon -A 127.0.0.1
           └─2167 /usr/lib/quagga/watchquagga -adz -r /usr/sbin/service quagga restart %s -s /usr/sbin/service quagga start %s -k /usr/sbin/service quagga stop %s -b  -t 30 zebra bgpd

Aug 23 20:20:34 leaf01 watchquagga[2167]: watchquagga 0.99.23.1+cl3u2 watching [zebra bgpd], mode [phased zebra restart]
Aug 23 20:20:35 leaf01 watchquagga[2167]: bgpd state -> up : connect succeeded
Aug 23 20:20:35 leaf01 watchquagga[2167]: zebra state -> up : connect succeeded


Once the Quagga daemon has been started on the switches, you can access the Quagga modal interface. This is a Cisco-like CLI in which you can manually configure the routing protocols on the system.

cumulus@leaf01:~$ sudo vtysh

Hello, this is Quagga (version 0.99.23.1+cl3u2).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

leaf01#

On each configured backbone interface, set the router advertisement (RA) interval to 3 seconds:

leaf01# conf t
leaf01(config)# int swpX
leaf01(config-if)# ipv6 nd ra-interval 3

Configure BGP unnumbered on each switch and enable it on all backbone interfaces:

leaf01(config)# router bgp 65002
leaf01(config-router)# bgp router-id 10.0.0.2
leaf01(config-router)# redistribute connected route-map redistribute-connected
leaf01(config-router)# neighbor swpX interface
leaf01(config-router)# neighbor swpX remote-as external
leaf01(config-router)# neighbor swpX capability extended-nexthop

Add the route map to prevent the management network from being advertised:

leaf01(config)# route-map redistribute-connected deny 100
leaf01(config-route-map)# match interface eth0
leaf01(config-route-map)# route-map redistribute-connected permit 1000
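Putting the pieces together, the resulting Quagga configuration for leaf01 should look roughly like the sketch below. The interface names swp51/swp52 and the addresses follow the earlier examples and are assumptions that may differ in your topology; from vtysh, "write memory" can be used to persist the configuration.

interface swp51
 ipv6 nd ra-interval 3
!
interface swp52
 ipv6 nd ra-interval 3
!
router bgp 65002
 bgp router-id 10.0.0.2
 neighbor swp51 interface
 neighbor swp51 remote-as external
 neighbor swp51 capability extended-nexthop
 neighbor swp52 interface
 neighbor swp52 remote-as external
 neighbor swp52 capability extended-nexthop
 redistribute connected route-map redistribute-connected
!
route-map redistribute-connected deny 100
 match interface eth0
!
route-map redistribute-connected permit 1000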


When leaf01, leaf02 and spine01 are configured according to the previous instructions, you should see that the BGP sessions are established and that routes to all neighbor loopback interfaces are available. There should also be reachability between leaf01 and leaf02 over the spine switch.

spine01# sh ip bgp sum
BGP router identifier 10.0.0.31, local AS number 65001 vrf-id 0
BGP table version 3
RIB entries 5, using 600 bytes of memory
Peers 2, using 32 KiB of memory

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
leaf01(swp1)    4 65002       9      11        0    0    0 00:00:10            1
leaf02(swp2)    4 65003       8       9        0    0    0 00:00:10            1

Total number of neighbors 2

spine01# sh ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, T - Table,
       > - selected route, * - FIB route

K>* 0.0.0.0/0 via 192.168.0.254, eth0
C>* 10.0.0.1/32 is directly connected, lo
B>* 10.0.0.2/32 [20/0] via fe80::4638:39ff:fe00:5b, swp1, 00:00:18
B>* 10.0.0.3/32 [20/0] via fe80::4638:39ff:fe00:2e, swp2, 00:00:19
C>* 192.168.0.0/24 is directly connected, eth0

cumulus@leaf01:~$ ip route show
default via 192.168.0.254 dev eth0
10.0.0.1 via 169.254.0.1 dev swp51 proto zebra metric 20 onlink
10.0.0.3 via 169.254.0.1 dev swp51 proto zebra metric 20 onlink
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.11

cumulus@leaf01:~$ traceroute 10.0.0.3
traceroute to 10.0.0.3 (10.0.0.3), 30 hops max, 60 byte packets
 1  10.0.0.1 (10.0.0.1)  0.835 ms  0.862 ms  0.861 ms
 2  10.0.0.3 (10.0.0.3)  2.186 ms  2.122 ms  2.093 ms


Assignment 3: Server access

To provide access to server01 and server02 we have to configure a bridge on both leaf01 and leaf02. This bridge will have an IP address that the servers will use as their default gateway.

As in Assignment 1, log in to the leaf switches and edit the interfaces configuration file. Add a standard bridge configuration, as shown in the example, with a unique /24 subnet per leaf. The bridge has one member port towards the server, which also needs to be activated.

auto bridge
iface bridge
    bridge-ports swp1
    address 172.30.1.1/24

auto swp1
iface swp1

Don't forget to run "sudo ifreload -a" after making the changes. Once both leaf switches have been changed, you will see a route to the other leaf's prefix:

172.30.1.0/24 via 169.254.0.1 dev swp51 proto zebra metric 20 onlink

Note the 169.254.0.1 next-hop address, which is a result of the BGP unnumbered configuration.
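If you want to see the same information from the routing protocol's point of view, you can query Quagga directly from the shell, for example (the prefix shown is the one used in this example):

cumulus@leaf01:~$ sudo vtysh -c "show ip bgp summary"
cumulus@leaf01:~$ sudo vtysh -c "show ip route 172.30.2.0/24"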

Next, log in to the servers and configure an IP address on the interfaces connected to swp1 on the leaf switches. Since these are Ubuntu machines, the configuration is similar to a Cumulus Linux system; both Linux distributions are based on Debian. To configure the network interface, edit /etc/network/interfaces and add the interface configuration. Keep in mind that server02 is connected to leaf02 using interface eth2.

auto ethX
iface ethX inet static
    address 172.30.1.10/24
    gateway 172.30.1.1
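On server02, for example, which connects to leaf02 via eth2, and assuming leaf02's bridge was given 172.30.2.1/24 as its unique subnet, the stanza could look like this:

auto eth2
iface eth2 inet static
    address 172.30.2.10/24
    gateway 172.30.2.1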

After the configuration is added, restart networking. Because of the virtual environment, it might be necessary to reboot the virtual server ("sudo reboot") after the configuration changes if there is no connectivity.

sudo /etc/init.d/networking restart


When the host is configured correctly, you should see its MAC address being learned on the connected swp port of the leaf switch.

cumulus@leaf01:~$ brctl showmacs bridge
port name   mac addr            vlan  is local?  ageing timer
swp1        44:38:39:00:00:03      0  no                19.75
swp1        44:38:39:00:00:04      0  yes                0.00

There should also be reachability between server01 and server02:

cumulus@server01:~$ ping 172.30.2.10
PING 172.30.2.10 (172.30.2.10) 56(84) bytes of data.
64 bytes from 172.30.2.10: icmp_seq=1 ttl=61 time=2.87 ms
64 bytes from 172.30.2.10: icmp_seq=2 ttl=61 time=2.28 ms
64 bytes from 172.30.2.10: icmp_seq=3 ttl=61 time=1.87 ms
64 bytes from 172.30.2.10: icmp_seq=4 ttl=61 time=1.89 ms

--- 172.30.2.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 1.873/2.232/2.870/0.406 ms


Assignment 4: Automation

In the previous assignments you configured reachability between server01 and server02 over a BGP unnumbered backbone. As you have seen, this is done by editing two configurations: the interfaces configuration and the Quagga configuration. While the configuration itself is relatively simple thanks to the improvements Cumulus has made, configuring and maintaining the complete reference topology, or a large real-life topology, by hand can be time consuming.

Because Cumulus Linux is a standard Linux distribution, management can also be done using automation tools like Puppet, Chef or Ansible. Detailed examples are available on the Cumulus website and GitHub repository.
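As a rough illustration of the idea (this is not the playbook from the repository used below, just a minimal sketch with hypothetical file and group names), an Ansible play could template the interfaces file onto the leaf switches and reload it:

---
# Minimal sketch: push /etc/network/interfaces to the leaf switches and reload it.
- hosts: leaves                  # hypothetical inventory group
  become: true
  tasks:
    - name: Render /etc/network/interfaces from a Jinja2 template
      template:
        src: interfaces.j2       # hypothetical template in the playbook directory
        dest: /etc/network/interfaces
      notify: reload interfaces

  handlers:
    - name: reload interfaces
      command: ifreload -a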

As a last step in this training we will use a demo Ansible IP fabric setup. First, clone the playbook from GitHub on the oob-mgmt-server:

cumulus@oob-mgmt-server:~$ git clone https://github.com/CumulusNetworks/cldemo-automation-ansible.git
Cloning into 'cldemo-automation-ansible'...
remote: Counting objects: 853, done.
remote: Total 853 (delta 0), reused 0 (delta 0), pack-reused 853
Receiving objects: 100% (853/853), 76.46 KiB | 0 bytes/s, done.
Resolving deltas: 100% (436/436), done.
Checking connectivity... done.

Then run the Ansible playbook:

cumulus@oob-mgmt-server:~$ cd cldemo-automation-ansible/
cumulus@oob-mgmt-server:~/cldemo-automation-ansible$ ansible-playbook run-demo.yml

If everything is successful, you will see the playbook make several changes:

PLAY RECAP *********************************************************************
leaf01                     : ok=8    changed=6    unreachable=0    failed=0
leaf02                     : ok=8    changed=6    unreachable=0    failed=0
server01                   : ok=3    changed=2    unreachable=0    failed=0
server02                   : ok=6    changed=3    unreachable=0    failed=0
spine01                    : ok=8    changed=6    unreachable=0    failed=0
spine02                    : ok=8    changed=7    unreachable=0    failed=0


By running the playbook you have configured the complete topology with an IP fabric and connectivity between all servers in about 10 seconds. When you log in to spine02, a previously unconfigured switch, you will see that Quagga is configured and has routes to all other networks:

spine02# sh ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, T - Table,
       > - selected route, * - FIB route

K>* 0.0.0.0/0 via 192.168.0.254, eth0
B>* 10.0.0.11/32 [20/0] via fe80::4638:39ff:fe00:2a, swp1, 00:06:13
B>* 10.0.0.12/32 [20/0] via fe80::4638:39ff:fe00:67, swp2, 00:06:13
C>* 10.0.0.22/32 is directly connected, lo
B>* 172.16.1.0/24 [20/0] via fe80::4638:39ff:fe00:2a, swp1, 00:06:13
B>* 172.16.2.0/24 [20/0] via fe80::4638:39ff:fe00:67, swp2, 00:06:13
C>* 192.168.0.0/24 is directly connected, eth0


Additional information

Complex configuration and topology

In this session you have seen a quick introduction to the possibilities of Cumulus VX. The configurations shown are minimal viable configurations. Proofs of concept usually need a more elaborate configuration, which was outside the scope of this session.

If the standard topology isn't sufficient, Cumulus Networks has a tool called the Topology Converter that can be used to generate a customer-specific topology. The generator can be downloaded from the GitHub repository: https://github.com/cumulusnetworks/topology_converter
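As a rough sketch of how the tool is used (the exact node attributes and invocation are documented in the repository's README and may differ from this illustration), a custom topology is described in a graphviz-style .dot file, for example:

graph demo {
  "spine01" [function="spine" os="CumulusCommunity/cumulus-vx"]
  "leaf01"  [function="leaf"  os="CumulusCommunity/cumulus-vx"]
  "spine01":"swp1" -- "leaf01":"swp51"
}

The converter then generates a Vagrantfile from this description, after which the topology is booted with Vagrant as before:

python topology_converter.py mytopology.dot
vagrant up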

For these complex engagements, contact the Cumulus SE in your territory.

Documentation

All operating system documentation and configuration examples can be found on our website in the documentation section: https://docs.cumulusnetworks.com/

During pre-sales engagements it is beneficial to look at the design of the network. The following design guide contains valuable information: go.cumulusnetworks.com/scalable-dcnetworks

Training

Cumulus Networks offers 8-hour (remote) bootcamps that will get you started with the operating system and its configuration. These bootcamps are offered across all time zones and are available to partners at a discount: https://cumulusnetworks.com/education/instructor-led-training/.
