UNIVERSIDAD POLITÉCNICA DE MADRID

ESCUELA TÉCNICA SUPERIOR DE INGENIEROS DE TELECOMUNICACIÓN

Máster Universitario en Ingeniería de Telecomunicación

TRABAJO FIN DE MÁSTER

DEVELOPMENT, DEPLOYMENT AND ANALYSIS OF A SOFTWARE DEFINED NETWORKING TEST ENVIRONMENT FOR NETWORK TRAFFIC MONITORING

Ignacio Domínguez Martínez-Casanueva

2018

UNIVERSIDAD POLITÉCNICA DE MADRID

ESCUELA TÉCNICA SUPERIOR DE INGENIEROS DE TELECOMUNICACIÓN

DEVELOPMENT, DEPLOYMENT AND ANALYSIS OF A SOFTWARE DEFINED NETWORKING TEST ENVIRONMENT FOR NETWORK TRAFFIC MONITORING

AUTHOR: D. Ignacio Domínguez Martínez-Casanueva

TUTOR: Dr. David Fernández Cambronero

DEPARTMENT: Ingeniería de Sistemas Telemáticos

Resumen

Software Defined Networking (SDN) is a new network paradigm in which the control and data planes are separated. This innovative architecture creates an abstraction layer over the existing infrastructure that provides high interoperability with external applications and services, while the control functions are extracted from the network devices and moved to a central controller, allowing network operators to program the network's intelligence from a global view.

Network traffic monitoring is a key element for network operators to manage networks efficiently. The monitoring system provides critical information to the network intelligence in order to ensure the stability and security of network services and applications. This information can be used by different network management systems, such as the Traffic Engineering (TE) System, the Quality of Service (QoS) Provisioning System, the Anomaly Detection System, etc. The SDN paradigm improves network traffic monitoring by providing a complete view of the network while allowing all transmitted flows to be inspected in detail.

In this Master's thesis, an SDN environment has been developed and deployed using the virtualization tool Virtual Networks over linuX (VNX), with the goal of testing SDN monitoring implementations in large-scale networks. VNX allows the creation of large and complex virtual networks in a flexible way, overcoming physical resource limitations thanks to virtualization techniques such as KVM and LXC. A performance analysis of different SDN-related programs using these virtualization techniques has been carried out. This environment allows users to quickly run and test their own deployments of SDN and NFV projects, as is the case of the SDN-Mon framework. The SDN-Mon framework is an SDN project that provides flexible and accurate network traffic monitoring for controller applications. SDN-Mon achieves an efficient SDN solution for monitoring large-scale networks, overcoming challenges such as the overhead present in the communication between the controller and a network switch, or the proper use of the memory and computing resources of the network switches. Thanks to the proposed SDN environment, the SDN-Mon framework is integrated with the goal of analyzing its performance in large-scale networks with different topologies and network configurations.

Finally, this project goes further by extending the NorthBound Interface (NBI) of the SDN controller and then presenting new external applications that rely on the information provided by the SDN-Mon framework for security purposes. Thus, this Master's thesis is not limited to proposing an SDN environment, but also starts the creation of a complete SDN solution. In this way, this work aims to serve as a basis for future works in which the monitoring information obtained thanks to the SDN paradigm can be used in more complex applications with different goals, which are suggested to the readers.

Keywords: Software Defined Networking, SDN, Monitoring, Virtualization, OpenFlow, Ryu, Lagopus, KVM, LXC, DPDK, VNX, REST API

Abstract

Software Defined Networking (SDN) is a new network paradigm that decouples the network control plane from the forwarding plane. This innovative architecture creates an abstraction layer over the underlying infrastructure that provides high interoperability with external applications and services, while the extraction of the control functions from network devices to a logically centralized controller allows network operators to directly program the network's intelligence with a global view.

Network traffic monitoring is a key element for network operators to manage networks efficiently. It provides critical information to the network intelligence to ensure the stability and security of network services and applications. This information can be used by many different network management systems, such as the Traffic Engineering System, the QoS Provisioning System, or Anomaly Detection Systems. The SDN paradigm enhances network traffic monitoring by providing a global view of the network while inspecting all transmitted flows in a fine-grained way.

In this Master's thesis, an SDN-enabled environment is developed and deployed using the virtualization tool Virtual Networks over linuX (VNX) to test network-wide SDN monitoring implementations. The VNX tool supports the creation of large and complex virtual networks in a flexible way, reducing the requirements on physical resources thanks to virtualization techniques such as KVM and LXC. A performance analysis of different SDN-related software using these virtualization techniques is performed. This environment allows users to quickly run and test their own SDN and NFV related developments, such as the SDN-Mon framework: a scalable framework that provides flexible and fine-grained network traffic monitoring in SDN for controller applications. This framework achieves a flexible and efficient network-wide monitoring solution in SDN, overcoming challenges such as the switch-controller communication overhead or the usage of the memory and computing resources of the network's switches. Thanks to the introduced SDN-enabled virtual environment, the aforementioned monitoring framework, taken as a use case, is integrated and tested to analyze its benefits and drawbacks on large-scale networks with different topologies and deployment configurations.

Finally, this project goes further by extending the controller's NorthBound Interface and then introducing new simple external applications that rely on the information provided by the SDN-Mon framework for security purposes. Therefore, this Master's thesis not only proposes an SDN-enabled environment but also starts building a complete SDN solution, aiming to become a base for future related works where SDN-provided monitoring information can be used for more complex multiple-purpose applications that are proposed to the readers.

Keywords: Software Defined Networking, SDN, Monitoring, Virtualization, OpenFlow, Ryu, Lagopus, KVM, LXC, DPDK, VNX, REST API

Contents

Resumen
Abstract
1 Introduction
2 Network monitoring with SDN
   2.1 Software Defined Networking (SDN)
       2.1.1 SouthBound Interface APIs: OpenFlow
   2.2 Network Traffic Monitoring: Key aspects
   2.3 SDN-Mon Framework
       2.3.1 Load Balancing adaptive mechanism
3 Software for SDN
   3.1 Virtual Networks over linuX (VNX)
       3.1.1 LinuX Containers (LXC)
       3.1.2 Kernel-based Virtual Machines (KVM)
           Virtio
           Vhost-net
   3.2 SDN controllers
       3.2.1 POX
       3.2.2 Ryu
   3.3 Virtual switches
       3.3.1 Open vSwitch (OVS)
       3.3.2 Lagopus
   3.4 Data Plane Development Kit (DPDK)
       3.4.1 Poll Mode Driver for virtio NIC
       3.4.2 Poll mode driver for emulated device e1000
4 SDN Traffic Monitoring Environment
   4.1 Lagopus switch integration
       4.1.1 DSL configuration
       4.1.2 Raw-socket mode (KVM/LXC)
       4.1.3 DPDK mode (KVM)
   4.2 Ryu SDN controller installation
   4.3 Automation for environment's deployment
   4.4 Scenarios
       4.4.1 Virtual scenarios with VNX
       4.4.2 Standalone scenarios
           Simple scenario
           Double scenario
           Cascade-Triple scenario
       4.4.3 Distributed scenarios
           Double scenario
       4.4.4 Modified scenario for paper contribution
5 NorthBound Interface (NBI)
   5.1 Ryu REST NBI
   5.2 Extension for SDN-Mon
   5.3 Sample external applications
       5.3.1 Top-talkers applications
       5.3.2 Topology discovery
           Discovery mechanism
           Implementation in VNX
           Alternative Python implementation
   5.4 SDN-Mon + Topology combo
   5.5 Active flows discovery
6 Conclusion and Future Lines
   6.1 Conclusions
   6.2 Future Lines
       6.2.1 Anomaly detection
       6.2.2 Traffic Engineering (TE)
       6.2.3 Network traffic measurement
           Throughput
           Latency
           Jitter
       6.2.4 Quality of Service (QoS)
       6.2.5 Network traffic visualization
   6.3 Packet generator. Enhanced performance tests
   6.4 Real use case: datacenter networks
7 Appendix A: Lagopus configuration
   7.1 start-lagopus-raw.sh
   7.2 start-lagopus-dpdk.sh
8 Appendix B: Root Filesystem generators
   8.1 create-rootfs_sdn
   8.2 create-rootfs_lxc
   8.3 create-rootfs_lagopus_raw_kvm
   8.4 create-rootfs_lagopus_dpdk_kvm
9 Appendix C: Virtual scenarios
   9.1 simple_lagopus_ryu_dpdk.xml
   9.2 double_lagopus_ryu_dpdk.xml
   9.3 cascade_lagopus_ryu_dpdk.xml
   9.4 cascade_lagopus_ryu_dpdk_external_busy.xml
   9.5 cascade-sw12-cnt.xml
   9.6 cascade-sw34.xml
   9.7 cascade-sw34-pkt.xml
10 Appendix D: Distributed setup
   10.1 create-tunnel
   10.2 cleanup-tunnel
11 Appendix E: User applications
   11.1 sdnmon_rest.py
   11.2 user_app_top_five.py
   11.3 topo_app.py
   11.4 active_flows.py
Bibliography

List of Figures

2.1 Software Defined Networking architecture
2.2 OpenFlow architecture [3]
2.3 OpenFlow flow table [2]
2.4 SDN-Mon architecture [5]
2.5 SDN-Mon entry structure
2.6 SDN-Mon global data [1]
3.1 Virtual Networks over linuX [7]
3.2 Containers
3.3 KVM architecture [11]
3.4 Virtio architecture
3.5 Vhost architecture
3.6 POX controller [16]
3.7 Ryu controller [17]
3.8 Open vSwitch [19]
3.9 Lagopus switch [6]
3.10 DPDK architecture
3.11 DPDK's PMD for Virtio NIC
3.12 DPDK's PMD for Vhost NIC
3.13 DPDK's PMD for emulated device e1000
4.1 Low-level view of a virtual scenario
4.2 Network topology of simple scenario
4.3 Network topology of double scenario
4.4 Network topology of cascade-triple scenario
4.5 GRE tunnel endpoint configuration using OVS fake bridges
4.6 Network topology for distributed double scenario
4.7 Low-level view of modified scenario with an external Ryu controller
5.1 Composition of Ryu modules
5.2 External applications talking to SDN-Mon
5.3 Top-talkers display
5.4 Big router approach
5.5 Flows movement detection
6.1 QoS system usage with more applications
6.2 ONOS GUI topology viewer
6.3 Current implemented scenario with Pktgen
6.4 Proposed future scenario with Pktgen
6.5 Comparison between Lagopus as virtual switch and as physical switch
6.6 Attacker inside a datacenter network

List of Tables

4.1 Network performance metrics for simple scenario
4.2 Network performance metrics for double scenario
4.3 Network performance metrics for cascade-triple scenario
4.4 Network performance metrics for distributed double scenario

List of Abbreviations

API Application Programming Interface
ARP Address Resolution Protocol
CLI Command-Line Interface
CPU Central Processing Unit
DDoS Distributed Denial-of-Service
DPDK Data Plane Development Kit
DSL Domain-Specific Language
EAL Environment Abstraction Layer
EVPN Ethernet Virtual Private Network
GMT Global Monitoring Table
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HTTP Hypertext Transfer Protocol
ICMP Internet Control Message Protocol
IDS Intrusion Detection System
IP Internet Protocol
JSON JavaScript Object Notation
KVM Kernel-based Virtual Machines
LAN Local Area Network
LLDP Link Layer Discovery Protocol
LLDPDU Link Layer Discovery Protocol Data Unit
LXC LinuX Containers
MAC Media Access Control
MPLS Multiprotocol Label Switching
NAPT Network Address Port Translation
NAT Network Address Translation
NBI NorthBound Interface
NFV Network Function Virtualization
NIC Network Interface Controller
NTT Nippon Telegraph and Telephone Corporation
OF OpenFlow
ONF Open Networking Foundation
OS Operating System
OVS Open vSwitch
OVSDB Open vSwitch Database
PBB Provider-Backbone-Bridge
PMD Poll Mode Driver
QCOW QEMU Copy On Write
QoS Quality of Service
RAM Random Access Memory
REST Representational State Transfer
RSVP Resource Reservation Protocol
RTT Round-Trip Time
SBI SouthBound Interface
SDN Software Defined Networking
SNAT Source Network Address Translation
TCP Transmission Control Protocol
TE Traffic Engineering
TTL Time To Live
UDP User Datagram Protocol
URL Uniform Resource Locator
VLAN Virtual Local Area Network
VM Virtual Machine
VNX Virtual Networks over linuX
VXLAN Virtual eXtensible Local Area Network
WAN Wide Area Network
WSGI Web Server Gateway Interface
XML eXtensible Markup Language

Chapter 1

Introduction

In January 2017 I was granted an internship at the National Institute of Informatics (NII) in Tokyo, Japan, to collaborate on ongoing research into the new network paradigm of Software Defined Networking (SDN). This research project was carried out by the Fukuda Laboratory, directed by Professor Kensuke Fukuda and assisted by PhD candidate Phan Xuan Thien, with whom I worked full-time for three months.

The topic of this research project was Software Defined Networking (SDN) for network traffic monitoring. The goal of this research is to explore a new network paradigm that decouples the network control and forwarding planes, providing flexible manageability for various network services. By introducing an efficient monitoring framework that relies on SDN (the SDN-Mon framework), users are able to obtain and analyze critical network-related information in a fine-grained way, enhancing their ability to manage resources efficiently and opening a new world of innovative network applications.

At the time of my arrival, the Fukuda Lab had only tested SDN-Mon in single-switch network scenarios; although they wanted to test it in multiple-switch scenarios, they lacked the physical resources to do so. My contribution was the proposal and subsequent use of an SDN-enabled virtual environment that allows deploying SDN-Mon in larger-scale virtualized networks. This made possible the analysis of the newest features introduced by this monitoring framework in multiple-switch network topologies.

Using the Virtual Networks over linuX (VNX) tool, multiple virtual scenarios have been deployed for different purposes, ranging from a single-switch topology for testing SDN communications, to a multiple-switch topology for performance and load-balancing testing, to virtual scenarios distributed over several physical servers for scalability testing. These scenarios have been a good contribution to the latest SDN-Mon publication, as they have allowed experimenting with and evaluating SDN-Mon's newest load-balancing feature.

This work has greatly enriched the contents of the latest publication on the SDN-Mon framework [1], and I have been honored to be named a collaborator on it, opening the door to new collaborative articles about this research project in the future.

Chapter 2

Network monitoring with SDN

2.1 Software Defined Networking (SDN)

Software Defined Networking (SDN) is a new network paradigm that decouples the control plane from the data plane [2]. The logic of the control plane is extracted from the network devices and centralized into an element called the controller. This centralization of the network's logic allows network operators to program new functionalities for the network devices from a global view of the network.

The separation of the control plane from the data plane introduces an abstraction layer that results in a new network architecture built around the SDN controller. This architecture comprises three layers: the application layer, the control layer, and the infrastructure layer.

Figure 2.1: Software Defined Networking architecture

The SDN controller resides in the so-called control layer, acting as the central element of the architecture, as depicted in Figure 2.1. Below the controller lies the infrastructure layer, which is managed by the controller through the standardized SouthBound Interface (SBI). The infrastructure layer is comprised of network devices, typically referred to as switches, whose functionalities are programmatically configured using standard protocols supported by the SDN controller. Among these protocols the main reference is OpenFlow, which is explained later in this section. On the other side of the architecture, the application layer resides on top of the controller. This layer is the aggregation of external user applications that communicate with the SDN controller through the NorthBound Interface (NBI) for many different network-related purposes. A common example is a network operator with an external application that talks to the SDN controller to retrieve real-time information from the managed network devices; a second external application then exploits that information and, using the SDN controller's capabilities, automatically programs certain network devices as needed.

As depicted in Figure 2.1, the SDN architecture relies on two separate APIs that achieve a full workflow from the external users down to the programmable devices of the managed network. In other words, these APIs work as the glue that connects the three layers of the architecture.

• SouthBound Interface (SBI): Located under the controller, this API should employ standardized protocols for the SDN controller to establish secure and efficient channels with the managed network devices. This channel is used by the SDN controller to program new functionalities on the network devices. The most widely used SBI is OpenFlow.

• NorthBound Interface (NBI): The purpose of this API is to provide a flexible interface through which external users can operate the network independently of the underlying technologies. This way, users are abstracted from the SDN infrastructure, allowing them to develop innovative applications that orchestrate network operations efficiently. Due to their flexibility and simplicity, the most common NBI implementations use REST, as the sketch below illustrates.
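As a minimal illustration of how an external application consumes a REST NBI, the following sketch queries the flow statistics endpoint exposed by Ryu's ofctl_rest application; the controller address and datapath ID are assumptions for the example.

```python
import requests

# Assumed setup: a Ryu controller running ryu.app.ofctl_rest on its
# default REST port (8080), managing a switch with datapath ID 1.
CONTROLLER = "http://127.0.0.1:8080"
DPID = 1

# GET /stats/flow/<dpid> returns the flow entries installed on the switch.
resp = requests.get("{}/stats/flow/{}".format(CONTROLLER, DPID))
resp.raise_for_status()

for flow in resp.json()[str(DPID)]:
    print(flow["match"], flow["packet_count"], flow["byte_count"])
```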

The new flexible network architecture that the SDN paradigm proposes presents many advantages worth noting, such as:

• Innovation: SDN introduces an abstraction layer over the underlying infrastructure. This abstraction allows operators to build new network applications and services, as they can treat network resources as virtual entities from a global point of view.

• Simplicity: SDN greatly simplifies the network devices themselves, since they no longer need to understand and process different protocol standards but merely accept instructions from the SDN controller.

• Agility: Network operators can quickly introduce new features in network devices, since they can develop these programs themselves rather than wait for features to be embedded in vendors' proprietary, closed software.

• Automation: Operators can use network applications that modify the network's behavior in real time depending on their needs, creating an intelligent network that adjusts itself to new changes.

• Control: SDN provides granular control over network resources, which allows network operators to implement features such as traffic isolation or security.

• OPEX: Operation costs are greatly reduced. When a new device connects to the network, complex manual configuration is no longer needed: with SDN the device just needs a connection to the SDN controller, which configures it accordingly. The same applies to maintenance operations, which can be handled programmatically.

In summary, SDN introduces a new philosophy of networking that greatly reduces the cost of network operations, while also bringing new features that solve traditional problems in the management of networks.

2.1.1 SouthBound Interface APIs: OpenFlow

OpenFlow (OF) [3] is considered one of the first standard communication interfaces defined between the control and forwarding layers of the SDN architecture. The concept of OpenFlow began at Stanford University in 2008, and version 1.0 of the OpenFlow switch specification was released by the end of 2009. Currently, the Open Networking Foundation (ONF) manages the definition of OpenFlow. The ONF is a user-driven organization dedicated to open standards and SDN adoption [4].

Figure 2.2: OpenFlow architecture [3]

OpenFlow originally defined the communication protocol in SDN environments that enables the SDN controller to directly interact with the forwarding plane of network devices such as switches and routers, both physical and virtual (i.e. running inside hypervisors). OpenFlow simplifies access to network devices through a standard interface, allowing network operators to easily program an intelligent control layer from a central point of view and enhancing the network with a rich set of new features.

Figure 2.2 shows the architecture proposed for OpenFlow. As we can see, it matches the infrastructure layer of the SDN architecture explained above. In OpenFlow, an SSL channel is established between the OpenFlow device and the SDN controller in order to program new functionalities in the managed device. OpenFlow devices contain a “Flow Table” which keeps a record of flow-based entries that define the behavior of the device. OpenFlow can be interpreted as the instruction set of a CPU: the protocol provides primitives that program the forwarding plane of a network device, just like the instruction set of a CPU programs a computer system.

Figure 2.3: OpenFlow flow table [2]

OpenFlow uses the concept of flows to identify network traffic based on match rules that can be statically or dynamically programmed by the SDN control software. As depicted in Figure 2.3, OpenFlow identifies network flows using common header fields found by packet inspection, such as MAC addresses, IP addresses or transport protocol ports. These entries are configured with an associated action, which could be: forwarding the matched packet through a certain port, forwarding the packet to the SDN controller, or even dropping the packet. For readers familiar with security applications, OpenFlow's Flow Table can be compared to a firewall table. OpenFlow defines how traffic should flow through network devices based on parameters such as usage patterns, applications, and cloud resources. Since OpenFlow allows the network to be programmed on a per-flow basis, an OpenFlow-based SDN architecture provides extremely granular control, enabling the network to respond to real-time changes at the application, user, and session levels.
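To make this per-flow model concrete, the sketch below uses the Ryu controller (introduced in Chapter 3) to install a flow entry that matches one TCP flow by its addresses and ports and forwards it through a given port; the addresses and port numbers are illustrative values, not taken from this work's scenarios.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofproto = dp.ofproto

        # Match a single TCP flow using the same kind of header fields
        # shown in Figure 2.3 (illustrative values).
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_src='10.0.0.1', ipv4_dst='10.0.0.2',
                                tcp_src=12345, tcp_dst=80)

        # The action associated with the entry: output through port 2.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```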

2.2 Network Traffic Monitoring: Key aspects

Network traffic monitoring is a fundamental function that can be used to operate and manage networks in a stable and efficient way. A good use of monitoring tools can lead to a great improvement in a network's performance.

Many network management systems, such as the Traffic Engineering System, the Intrusion Detection System, the QoS Provisioning System, or the Billing System, require accurate, real-time information on the status of network resources. In order to obtain this kind of information, a network traffic monitoring solution that is able to determine the operated network's health status in real time is strictly necessary. For instance, in the case of a Traffic Engineering System, network monitoring should keep updated measurements of relevant network parameters such as the currently utilized bandwidth or the latency between nodes. Accurate, up-to-date measurement of such parameters is key to the successful functioning of the Traffic Engineering System, which in turn is necessary for an efficient management of the network.

As detailed in the previous section, the SDN paradigm introduces new ways of monitoring networks. The granular control that OpenFlow provides allows network managers to implement monitoring applications that benefit from a fine-grained per-flow monitoring mechanism in the network devices. In the case of OpenFlow-enabled network devices, network managers can specify new flow entries for the network flows to be monitored. When a packet matches one of these monitoring flow entries, the entry's associated counters are updated. In addition, such an entry could perform particular actions such as forwarding the packet to a mirror device, although this is not the idea behind monitoring with SDN.

In a nutshell, the OpenFlow network device keeps in its Flow Table a record of entries for each network flow to be monitored. This introduces a new take on OpenFlow's purpose: the idea of OpenFlow is to program the forwarding plane of network devices, but here the OpenFlow entries are used “just” for monitoring purposes, disregarding the forwarding behavior of the matched flows. In large-scale networks with high traffic loads, this approach can become a serious problem, as the controller-switch overhead increases and the size of the flow table is limited.

Therefore, the SDN paradigm with OpenFlow provides a fine-grained network traffic monitoring mechanism that has to be used wisely, since depending on the network scale, SDN monitoring becomes a trade-off between accuracy and efficiency.

2.3 SDN-Mon Framework

SDN-Mon is an SDN monitoring framework developed at the Fukuda Lab of the National Institute of Informatics (NII) in Tokyo, Japan. SDN-Mon is a software project that shows the benefits of using the SDN paradigm to improve existing network traffic monitoring applications [5]. SDN-Mon not only provides an SDN framework to monitor traffic, but also includes new mechanisms to address certain drawbacks present in the SDN paradigm:

• The high frequency of queries to retrieve flow statistics introduces a large overhead in the communication channel between the controller and the OpenFlow switches of the network.

• Inflexibility when monitoring is bound to forwarding in the same flow tables, since the packet features useful for monitoring purposes do not always match those used for forwarding.

• A lack of scalability, as the capacity of existing hardware switches is very limited when it comes to handling the large numbers of monitored flow entries found in a large network.

Figure 2.4: SDN-Mon architecture [5]

SDN-Mon is a scalable framework that solves the drawbacks listed above, allowing users to build applications on top of it that benefit from a fine-grained and flexible monitoring framework. SDN-Mon decouples the monitoring functionality from the forwarding functionality, allowing monitoring to be processed independently. In addition, since SDN-Mon is focused on large-scale networks, it includes an efficient sampling mechanism for the monitoring entries. This mechanism allows SDN-Mon to adapt to larger networks by not monitoring all the flows present in the network.

Figure 2.4 shows the architecture proposed by the SDN-Mon framework. SDN-Mon resides in both the SDN controller and the switch. On the controller side there is an SDN-Mon module which, on the one hand, provides APIs for other SDN controller applications to access the monitored data, and on the other hand, contains a specific SDN-Mon handler which takes care of the SDN-Mon related communication with switches, including encapsulating and decapsulating the monitoring messages.

On the switch side, SDN-Mon modifies the Lagopus software switch [6] to introduce a new module which, in a nutshell, is in charge of the monitoring functionality that has been decoupled from the forwarding functionality of the switch. This module listens on the OpenFlow channel in order to handle the SDN-Mon related requests coming from the controller, and maintains a local monitoring database of the network flows handled by the switch. When the sampling mechanism is used, the module decides whether to monitor a new flow according to a user-specified likelihood value between 0 and 1.

The SDN-Mon implementation leverages the OpenFlow protocol through the Experimenter extensions that the protocol provides. These extensions allow SDN-Mon to enhance the standard OpenFlow messages, carrying an OpenFlow header, a specific ID for each SDN-Mon module, a specific ID for the SDN-Mon experimenter, and a message content.

Since the OpenFlow protocol is the SDN communication enabler between the controller and the switches, SDN-Mon uses the match fields supported by OpenFlow. The SDN-Mon monitoring database is therefore comprised of monitoring entries consisting of a hash value of the entry, match fields, and counters [Figure 2.5].

Figure 2.5: SDN-Mon entry structure

Among the match fields supported by OpenFlow, SDN-Mon by default employs a total of five fields to monitor flows in the network:

• SRC_IP ⇒ Source IP address

• DST_IP ⇒ Destination IP address

• SRC_PORT ⇒ Source Port number

• DST_PORT ⇒ Destination Port number

• PROTOCOL ⇒ Protocol number (according to OpenFlow reference)

The SDN-Mon framework provides the flexibility to let users specify other OpenFlow match fields to monitor network flows, depending on the needs of their SDN applications. In order to measure the activity of the monitored network flows, SDN-Mon keeps per-flow statistics in two counters: packet_count and byte_count. These fields are updated in real time by the Lagopus software switch when there is a hit on that particular monitored flow. Each entry is then completed with a hash value generated from the values of the chosen match fields. This value is very useful, as it uniquely identifies each monitored flow on a particular switch.
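A minimal sketch of such a monitoring entry is shown below, following the structure just described; the class and field names are hypothetical simplifications, not SDN-Mon's actual identifiers.

```python
from dataclasses import dataclass


@dataclass
class MonitoringEntry:
    """One record of the monitoring database (hypothetical layout)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int
    packet_count: int = 0  # updated by the switch on every hit
    byte_count: int = 0

    @property
    def flow_hash(self) -> int:
        # Hash over the chosen match fields: it uniquely identifies
        # the monitored flow on a particular switch.
        return hash((self.src_ip, self.dst_ip,
                     self.src_port, self.dst_port, self.protocol))


entry = MonitoringEntry('10.0.0.1', '10.0.0.2', 12345, 80, 6)
```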

Afterwards, the entries recorded in the SDN-Mon Monitoring Database are periodically sent to the SDN controller inside an Experimenter OpenFlow message. Upon arrival at the SDN controller, the SDN-Mon controller-side module decapsulates the message, processes the contained information of the monitored entries, and saves them in a consumable format to be accessed by other applications in the SDN controller. The monitoring entries saved by SDN-Mon thus become a very useful source of information for other SDN applications that connect to this monitoring framework.

2.3.1 Load Balancing adaptive mechanism

The latest version of SDN-Mon aims at network-wide monitoring. This version extends the SDN-Mon framework by introducing an adaptive mechanism that prevents duplicated flow monitoring and balances the monitoring load over the switches of the network [1].

Figure 2.6: SDN-Mon global data [1]

Figure 2.6 shows the distribution of the tables required by SDN-Mon for the implementation of the adaptive mechanism. This mechanism utilizes the so-called Global Monitoring Tables (GMTs), one per managed switch in the network. Each GMT stores, updates, and manages the monitoring data of a switch in the same fashion as the original proposal (i.e. 5-tuple monitoring information). The SDN-Mon module of the controller also employs a lightweight table called the Switch Memory Usage Table, which keeps updated information on the memory usage of each managed switch in the network. This information is used by SDN-Mon to choose the switch that will monitor a particular network flow, achieving load balancing in the process. When a new flow appears in the network, all the switches that handle the flow add an entry to their monitoring tables, which SDN-Mon notices when an update is triggered. In this situation, among the switches that monitor the new flow, SDN-Mon selects the switch with the lowest memory usage as the monitor of the new flow, and commands the remaining switches to remove their monitoring entries for it.

On the other hand, SDN-Mon uses a Buffering Table and Removing Lists as data structures to hold temporary data that helps remove duplicate monitoring entries. By default, each managed switch in the network tries to monitor every new network flow it handles. When an update is triggered, SDN-Mon receives new information from all the managed switches, and it can happen that multiple switches want to monitor the same new flow. This is where these temporary data tables come into action, keeping track of the rejected monitoring entries of each switch. After SDN-Mon finishes processing an update and selects a particular switch based on its memory usage, the controller orders all the discarded switches to remove their rejected monitoring entries. This way, SDN-Mon makes sure there are no duplicate monitoring entries in the network. The selection step is sketched below.
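The following sketch condenses the selection step described above; the function and variable names are hypothetical simplifications, not SDN-Mon's actual code.

```python
def assign_monitor(flow_hash, candidate_switches, memory_usage):
    """Pick the monitoring switch for a new flow and list the duplicates.

    candidate_switches: switches that reported an entry for the flow.
    memory_usage: switch id -> current memory usage, playing the role
                  of the Switch Memory Usage Table.
    """
    # Load balancing: keep the entry on the least-loaded switch.
    selected = min(candidate_switches, key=lambda sw: memory_usage[sw])

    # Duplicate elimination: every other candidate is ordered to drop
    # its monitoring entry for this flow (the Removing Lists).
    removals = [(sw, flow_hash) for sw in candidate_switches
                if sw != selected]
    return selected, removals


selected, removals = assign_monitor(
    flow_hash=0x5a3c,
    candidate_switches=['sw1', 'sw2', 'sw3'],
    memory_usage={'sw1': 0.62, 'sw2': 0.30, 'sw3': 0.75})
# selected == 'sw2'; 'sw1' and 'sw3' receive remove orders.
```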

Chapter 3

Software for SDN

3.1 Virtual Networks over linuX (VNX)

Figure 3.1: Virtual Networks over linuX [7]

For the creation and configuration of the virtual SDN environment that has made this work possible, the virtualization software Virtual Networks over linuX (VNX) has been used. VNX has been developed at the Telematics Engineering Department (DIT) of the Technical University of Madrid (UPM). VNX is an open-source tool that allows users to quickly deploy virtual networks and instances where custom applications and services can be easily tested [7].

In order to deploy virtual environments, VNX relies on the generation of a filesystem, known as the “root filesystem”, which acts as the base image for the spawned instances. These root filesystems are created in the QEMU Copy On Write (QCOW) disk format, as it requires little disk space and is able to expand dynamically. For provisioning the instances of a virtual environment, VNX supports LinuX Containers (LXC) as well as Kernel-based Virtual Machines (KVM) as virtualization solutions. These two solutions are explained in more detail at the end of this section.

Another interesting feature of the VNX tool is the ease of deploying custom network architectures. VNX just needs a single eXtensible Markup Language (XML) file in which the whole network and its characteristics are defined. The XML file is basically a human-readable template that lets users quickly specify custom network topologies, as well as the configuration and characteristics of the servers that are connected to them. This file is processed by VNX, generating a new XML file that can be consumed by the libvirt library, which is in charge of managing virtualization technologies such as LXC, KVM, Xen, VMware ESX or UML. After processing the XML file, libvirt coordinates with the corresponding hypervisor in order to spawn and configure all the instances that have been specified by the user.

To emulate the networks to which these instances are connected, VNX relies on Linux bridges [8]. This solution achieves a friendly and lightweight interconnection of virtual instances, reproducing the custom network topology that the user specified in the XML template.
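Since the scenario instances end up as ordinary libvirt domains, they can be inspected with libvirt's Python bindings; the sketch below lists them, assuming the libvirt-python package is installed and a KVM-based scenario is running on the local host.

```python
import libvirt  # libvirt-python bindings

# 'qemu:///system' assumes a KVM-based scenario on the local host;
# an LXC-based scenario would use the 'lxc:///' URI instead.
conn = libvirt.open('qemu:///system')
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'stopped'
    print(dom.name(), state)
conn.close()
```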

3.1.1 LinuX Containers (LXC)

Linux Containers (LXC) is an operating-system-level virtualization mechanism that allows running multiple containers on a host sharing its Linux kernel [Figure 3.2]. In order to provide isolation between containers running on the same host, LXC utilizes the Linux kernel's cgroups functionality for the allocation of resources, and its namespaces feature, which gives each container a unique view of the resources of the system.

Figure 3.2: Containers

These allocated resources range from mount points to network interfaces. The result is a container, which can be seen as a lightweight virtual machine that can be created and destroyed quickly, as it does not virtualize a whole operating system the way a virtual machine does. A good example of the use of Linux containers is the popular container technology Docker [9], which builds on the LXC backend.

3.1.2 Kernel-based Virtual Machines (KVM)

Kernel-based Virtual Machine (KVM) is a full virtualization solution for Linux on x86 hardware [10][11]. Depending on the processor vendor, KVM requires hardware virtualization extensions, such as Intel VT or AMD-V, in order to run guest operating systems. It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module. KVM was merged into the Linux kernel mainline in kernel version 2.6.20.

KVM turns a Linux system into a hypervisor on which users can spawn multiple virtual machines, each running a different operating system image such as Linux or Windows. Each of these virtual machines has private virtualized hardware: a network card, disk, graphics adapter, etc. The virtualization of hardware is not performed by KVM itself but by QEMU [12], which emulates hardware features for virtual machines and coordinates with KVM to provide a complete virtualization solution [Figure 3.3].

Figure 3.3: KVM architecture [11]

Virtio

Full virtualization of an operating system shows low performance, as the hypervisor has to emulate actual physical devices like network cards, which is costly and inefficient. This is why opting for a para-virtualization solution is a better option.

In order to achieve this type of virtualization, some changes have to be made in the guest OS to make it aware of being virtualized. This is done by introducing drivers that act as the front-end, while the hypervisor implements back-end drivers for each particular device emulation [13]. As described before, such device emulation occurs in the host's user space using QEMU, hence these back-end drivers communicate with the user space of the hypervisor to facilitate I/O through QEMU. In this scenario, behind the scenes, both the control plane and the data plane of the emulated device are handled by QEMU in user space. This way, the hypervisor can export a common set of emulated devices and make them available to the guest OS through a common API. This is where Virtio comes in, providing a standard interface (API) for the set of emulated devices.

Figure 3.4: Virtio architecture

Virtio is a virtualization standard for network and disk device drivers where only the guest's device driver “knows” it is running in a virtual environment and cooperates with the hypervisor [14]. This kind of virtualization enables guests to get high-performance network and disk operations, and gives most of the performance benefits of para-virtualization. As depicted in [Figure 3.4], Virtio is structured in three main components, of which the most noteworthy are the Virtio driver, loaded as a kernel module in the guest OS, and the Virtio device, allocated as a device in QEMU.

Vhost-net

Vhost-net introduces a networking performance enhancement by moving the Virtio back-end driver from QEMU user space into the kernel [Figure 3.5]. This allows the network device emulation code to directly call kernel subsystems instead of performing system calls from user space. With vhost, QEMU is still used in this architecture, since vhost does not emulate a complete Virtio PCI device but only the virtqueue, a chunk of memory shared between QEMU and the guest to accelerate data access [15]. In this case the control plane of the device is managed in user space, while the data plane is now managed in the host's kernel. Overall, this new architecture achieves a reduction in copy operations, latency and CPU usage.

Figure 3.5: Vhost architecture

3.2 SDN controllers

Currently there is a large number of SDN controllers available on the market. There are already well-established SDN controllers such as ONOS, which is focused on service providers with large WAN networks, or OpenDaylight, a multipurpose SDN controller that provides a high degree of interoperability. However, this kind of controller has been discarded, since they introduce too many features for a small laboratory network like the one this work presents. This section lists some of the available controllers that best fit our needs.

3.2.1 POX

POX was created from what is considered the original SDN controller, NOX. POX is an open-source controller programmed in Python 2.7 that runs on Linux, Windows and macOS systems.

Figure 3.6: POX controller [16]

POX provides a “Pythonic” OpenFlow interface, which allows users to quickly learn the fundamentals of the protocol and how it is used to program the SDN-enabled switches of a network. POX presents a good set of well-defined APIs that can be easily used, as documented in its wiki [16]. In addition, POX inherits interesting modules from NOX, such as topology discovery and the GUI and visualization tools. However, POX does not provide a native NorthBound Interface API. In summary, POX is the recommended choice for beginners who want to quickly program SDN applications, although for more complex projects the capabilities of this controller are very limited.

3.2.2 Ryu

The Ryu controller, developed at NTT, is an SDN controller written entirely in Python [17]. Ryu provides software components with well-defined APIs that make it easy for developers to create new network management and control applications. Unlike the POX controller, Ryu provides native NorthBound Interface REST APIs, and it also contains a Python package for easily developing custom REST APIs for new SDN modules developed by the users.

Figure 3.7: Ryu controller [17]

The Ryu controller supports the NETCONF, OF-config and OVSDB network management protocols, as well as OpenFlow versions v1.3 and v1.4. Ryu has been well tested and certified to work with a long list of OpenFlow switches, such as Open vSwitch and Lagopus, as well as products from vendors such as HP, IBM, NEC or Pica8; the full list of certified switches can be found at [18]. Moreover, Ryu is officially integrated into the OpenStack Networking service (Neutron), which brings interesting new features to a datacenter network. The Ryu controller is supported by NTT and is deployed in NTT cloud datacenters as well.

The rich set of supported protocols, alongside a friendly framework entirely written in Python, makes Ryu the perfect choice for this project. The sketch below illustrates how a custom REST API is attached to a Ryu application.
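As a small illustration of the REST support mentioned above, the following sketch registers a custom NorthBound endpoint using Ryu's WSGI package; the application and route names are illustrative, not the ones used later in this work. Once launched with ryu-manager, the endpoint is served on Ryu's default WSGI port (8080).

```python
import json

from ryu.app.wsgi import ControllerBase, WSGIApplication, route
from ryu.base import app_manager
from webob import Response


class MonitoringRestApp(app_manager.RyuApp):
    """Ryu application that exposes a custom REST endpoint."""
    _CONTEXTS = {'wsgi': WSGIApplication}

    def __init__(self, *args, **kwargs):
        super(MonitoringRestApp, self).__init__(*args, **kwargs)
        # Register the REST controller and hand it a reference
        # to this application.
        kwargs['wsgi'].register(MonitoringController, {'app': self})


class MonitoringController(ControllerBase):
    def __init__(self, req, link, data, **config):
        super(MonitoringController, self).__init__(req, link, data, **config)
        self.app = data['app']

    # GET /monitoring/flows answers with a JSON document; a real
    # application would return data collected by the controller.
    @route('monitoring', '/monitoring/flows', methods=['GET'])
    def list_flows(self, req, **kwargs):
        body = json.dumps({'flows': []}).encode('utf-8')
        return Response(content_type='application/json', body=body)
```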

3.3 Virtual switches

3.3.1 Open vSwitch (OVS)

Open vSwitch is an open-source, production-quality, multilayer virtual switch [19]. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols such as NetFlow or the CLI. Open vSwitch was created by Nicira, which was later acquired by VMware.

Figure 3.8: Open vSwitch [19]

Open vSwitch uses OpenFlow and the Open vSwitch Database (OVSDB) management protocol, which allows OVS to work both as a soft switch running within a datacenter hypervisor and as the control stack for switching silicon. It is the default switch in XenServer 6.0 and the Xen Cloud Platform, and it also supports Xen, KVM, Proxmox VE and VirtualBox. It has also been integrated into many open-source cloud projects like OpenStack, openQRM, or OpenNebula. The kernel datapath is distributed with Linux, and packages are available for Ubuntu, Debian, Fedora and openSUSE. Open vSwitch is also supported on FreeBSD and NetBSD, and its latest versions support working in DPDK mode. Due to its massive use in open-source projects, Open vSwitch has become the de facto open-source virtual switch.

3.3.2 Lagopus

Lagopus is an OpenFlow 1.3 software switch implementation that leverages DPDK for high-performance packet processing to realize software-defined networking. Lagopus can run on the Ubuntu, CentOS, and RedHat Linux distributions, and it is also supported on FreeBSD and NetBSD.

Figure 3.9: Lagopus switch [6]


The Lagopus software switch is designed to achieve over-10-Gbps packet processing throughput and forwarding performance on a standard PC server for flexible and complex traffic flow handling. The Lagopus software switch can be controlled by one or multiple SDN controllers using the standard OpenFlow 1.3 protocol. In addition, Lagopus supports hybrid SDN architectures, as it can interwork with existing routing stacks such as Quagga. The Lagopus software switch supports running in stand-alone mode with Layer-2 and Layer-3 configurations.

The Lagopus software switch provides a flexible and programmable data plane for commonly-used network packet frame formats. Among these packet frame formats, Lagopus supports VLAN, IPv4, IPv6, ARP, ICMP, UDP, and TCP; and in particular, telco-friendly protocols such as QinQ, MAC-in-MAC, MPLS, or Provider-Backbone-Bridge (PBB). The Lagopus data plane also provides general tunnel encapsulation and decapsulation for overlay networking in datacenter and wide-area networks with technologies such as VXLAN, the EVPN extension and IPsec tunnels. Like the Ryu controller, the Lagopus software has been developed by NTT, which makes the two natural partners for an SDN deployment.

3.4 Data Plane Development Kit (DPDK)

The Data Plane Development Kit, commonly known as DPDK, is a set of libraries and network interface controller drivers for fast packet processing. DPDK provides a simple, complete framework for fast packet processing in data plane applications [20]. DPDK's flexibility allows users to implement their own protocol stacks.

The DPDK framework employs an Environment Abstraction Layer (EAL) to create sets of libraries which are specific to environments. The EAL is responsible for gaining access to low-level resources such as hardware and memory space. Therefore, the chosen EAL will depend on certain characteristics such as the processor architecture mode (32-bit or 64-bit) or the Linux user space compilers. Once the EAL library is created, the user will be able to link with the library to create their own network applications that access the system's NICs efficiently.

DPDK requires the allocation of resources before starting data plane applications. The host's network card interfaces are decoupled from the Linux kernel and passed to be managed by DPDK [Figure 3.10]. Bypassing the Linux kernel highly improves the packet processing performance in two aspects: firstly, the packet avoids passing through the kernel's network stack; secondly, it avoids the performance penalty introduced by the context switching and packet copying operations that take place when a system call moves the packet from kernel space to user space.

The key aspect of this model is avoiding the traditional scheduling mechanism for packet processing, which is substituted by a polling mechanism for all the devices managed by DPDK. The use of a traditional scheduler yields a low level of performance, since every packet that arrives at the host has to be processed through a system interrupt. When working at high packet rates, the overhead introduced by each interrupt can have a major impact.

Figure 3.10: DPDK architecture

In DPDK this problem is solved using Poll Mode Drivers (PMD), where instead of waiting for packets to generate interrupts, it is the PMD that periodically checks each device for new packets. This mechanism is handled by a specific PMD for the system, which performs an infinite loop that iterates over the DPDK-managed devices. However, as expected, this loop is very costly since it requires completely assigning logical cores to this function. This means the configuration of the PMD has to be done wisely, as it could otherwise lead to a very inefficient use of resources.

A PMD can be configured to work in one of two possible modes: the run-to-completion model and the pipeline model.

• Synchronous run-to-completion model: each DPDK-assigned logical core runs a packet processing loop that executes the following steps: retrieving input packets through the PMD receive API, processing each received packet one at a time, and sending the pending output packets through the PMD transmit API.

• Asynchronous pipeline model: some logical cores are assigned to packet retrieval, while others handle packet processing. The assigned logical cores exchange received packets through shared rings. In the packet retrieval loop, input packets are retrieved through the PMD receive API and then handed to the processing logical cores through packet queues. The packet processing loop, in turn, retrieves packets from these queues and processes them, up to their retransmission if they are forwarded.

There is a particular PMD for each supported network interface controller, which can be classified into physical drivers (for real devices) and virtual drivers (for emulated devices). Depending on the specifications of each card, the features supported by DPDK may vary. A complete list that covers all the available networking drivers with their supported features in DPDK can be found at the official DPDK wiki [21].

In a nutshell, DPDK provides an environment that allows network applications running in user space to access the physical ports of the host directly, achieving very high packet processing speeds.

3.4.1 Poll Mode Driver for virtio NIC

DPDK provides Poll Mode Drivers for a great variety of network cards from vendors such as Chelsio, ARK, or Broadcom. However, finding network cards that support DPDK is not very usual among standard desktop PCs. Nevertheless, utilizing KVM para-virtualization with Virtio allows running DPDK-based applications with standard network cards. In this section, the Virtio NIC driver is taken as an example to better understand how the PMD achieves fast communication between VMs, and from VM to host. The PMD for the Virtio NIC illustrates a more general use case, as this driver avoids the hardware dependency seen in other drivers supported by DPDK.


Figure 3.11: DPDK’s PMD for Virtio NIC

Figure 3.12: DPDK’s PMD for Vhost NIC


Comparing Figure 3.11 and Figure 3.12 with the architecture analyzed in section 1.2.2 for virtio and vhost, we can see some improvements when using DPDK's PMD for virtio. Both scenarios show the advantage of using DPDK technology inside the guest VM, as the virtio vring is moved from kernel space to user space [22]. As explained previously, this modification highly improves the packet processing performance, as the guest OS kernel's network stack is avoided. Regarding this aspect, the difference between these two scenarios is that in virtio-net it is QEMU's virtio-net device that talks to the vring from user space, while in vhost it is the vhost-net device that talks to the vring from the host's kernel space.

On the other hand, the DPDK applications that run in the guest VM's user space can now use the virtio PMD to directly access the KVM driver in the host's kernel space. Prior to the DPDK integration, it was the emulated virtio driver residing in the guest's kernel space that linked the guest's user space application with the KVM driver of the host's kernel space.

In summary, the DPDK PMD for virtio introduces some improvements over the traditional virtio setups, while adding support for both legacy and DPDK applications; however, performance is still poor. These bad results are caused by the many VM transitions and context switches present in this architecture.

3.4.2 Poll Mode Driver for emulated device e1000

An alternative to the virtio driver is using drivers for emulated devices. Virtualization with QEMU provides a full emulation of compatible devices, bringing the opportunity to play with DPDK applications by running them inside a VM. The DPDK EM poll mode driver allows working with some emulated network devices, in particular the qemu-kvm emulated Intel® 82540EM Gigabit Ethernet Controller, also known as the qemu e1000 device.

For an emulated qemu e1000, the DPDK community highly recommends using a TAP networking backend in the host as the qemu network device backend. This will create a TAP networking device in the host, which offers very good performance as well as the flexibility to configure it to create virtually any type of network topology.


Figure 3.13: DPDK’s PMD for emulated device e1000

However, it has to be noted that the emulated e1000 device presents some limitations, addressed by the DPDK community in the following list:

1. The Qemu e1000 RX path does not support multiple descriptors/buffers per packet. Therefore, rte_mbuf should be big enough to hold the whole packet, e.g. Jumbo frames.

2. Qemu e1000 does not validate the checksum of incoming packets.

3. Qemu e1000 only supports one interrupt source, so link and Rx interrupts should be exclusive.

4. Qemu e1000 does not support interrupt auto-clear, so applications should disable interrupts immediately when woken up.


Chapter 4

SDN Traffic Monitoring Environment

In order to test the SDN-Mon framework in a multiple-switch SDN network, a great set of physical resources is needed. The ideal scenario would be installing the Lagopus switch software on separate servers acting as real SDN-enabled switches, but that requires a large number of physical servers. This work proposes a virtualized solution using the virtualization tool VNX, which allows deploying lightweight virtual scenarios to simulate a real SDN-enabled network. The result is a virtual environment where the SDN-Mon traffic monitoring research project can be easily tested. Throughout this chapter, the integration and deployment of all the pieces that compose such a virtual SDN traffic monitoring environment are discussed.

4.1 Lagopus switch integration

Lagopus switch integration is one of the most important components of the proposed virtual environment, as the SDN-Mon framework is an extension of this virtual switch software. In this section, the most relevant aspects regarding Lagopus switch integration into a VNX-powered virtual environment are introduced. Among such aspects, Lagopus configuration using predefined DSL files and the two possible working modes, raw-socket and DPDK, are covered [23].

4.1.1 DSL configuration

Lagopus can be configured at boot time using a predefined DSL-format configuration file. In such a file, all necessary parameters for Lagopus to work are specified. Among these parameters, users may find the configuration required to use the physical interfaces of the node, the connection settings to single or multiple SDN controllers, and the configured bridges that perform the forwarding rules. Virtual switch software such as Open vSwitch or Lagopus is based on the creation of bridges, in a similar fashion to the legacy Linux Bridges, but with many more advanced features as well as SDN-enabled control. Since this work tries to simulate the behavior of a Lagopus switch in a real physical box, we will use just a single internal bridge for the entire box. Keep in mind for future work that Lagopus allows users to combine multiple bridges with different configurations as needed. The following lines show an example DSL configuration file for Lagopus, in which a single bridge is connected to an SDN controller and is mapped to the physical interfaces eth2 and eth3 of the box, working in raw-socket mode.


log -file /var/log/lagopus.log -debuglevel 0 -packetdump ""

interface interface01 create -type ethernet-rawsock -device eth2
interface interface02 create -type ethernet-rawsock -device eth3

port port01 create -interface interface01
port port02 create -interface interface02

channel channel01 create -dst-addr 192.168.1.1 -dst-port 6633 -protocol tcp

controller controller01 create -channel channel01 -role equal -connection-type main

bridge bridge01 create -dpid 1 -controller controller01 -port port01 1 -port port02 2 -fail-mode secure

bridge bridge01 enable

Additionally, Lagopus provides a CLI called “lagosh” which allows configuring Lagopus interactively. This CLI can be compared to Open vSwitch's popular CLI, ovs-vsctl [24]. The difference is that the Lagopus CLI follows the philosophy of other network device vendors such as Cisco, in which the user enters an interactive CLI that can switch between multiple configuration modes. Unlike Open vSwitch's implementation, the interactive nature of the Lagopus CLI makes automating the creation of a new configuration an impossible task. In the case of Open vSwitch, the creation and configuration of a new virtual switch can be automated by directly running CLI commands, as the following sketch illustrates.
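The following non-interactive commands sketch how an OpenFlow-enabled OVS bridge could be scripted; the bridge and interface names and the controller address are purely illustrative:

# Create a bridge, attach a physical interface, and point the bridge
# at an OpenFlow controller (names and address are illustrative).
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1
ovs-vsctl set-controller br0 tcp:192.168.1.1:6633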

Currently, VNX does not include native support for configuring Lagopus in its XML syntax, unlike Open vSwitch. In order to automate the configuration of Lagopus during the startup process, a Bash script has been developed. This script generates a complete DSL configuration file for Lagopus by taking the following parameters as inputs: the SDN controller address, specifying its transport protocol, IP address, and port number; an integer value for the datapath ID of the switch; and a list with the names of the interfaces of the virtual machine that will be used by the Lagopus internal bridge. The following example execution connects the Lagopus internal bridge to an SDN controller at tcp:192.168.1.1:6633, using datapath ID 1, and taking the eth2 and eth3 interfaces of the virtual machine.

./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 1 -i eth2 -i eth3

Using the command introduced above, Lagopus obtains a ready-to-use DSL configuration file in the virtual scenario. This way, the configuration process of the virtual switch is fully automated. The code of the Bash scripts can be found in Appendix 7, for Lagopus working in raw-socket and DPDK modes respectively.

4.1.2 Raw-socket mode (KVM/LXC)

Raw-socket mode is the straightforward working mode in Lagopus, where the ports of the internal bridges are simply mapped to physical interfaces of the box. Typically, this box can be seen as any bare metal server or virtual machine. Raw-socket mode is the simpler option for Lagopus, as the configuration requires neither special hardware nor DPDK-supported NICs.

Due to its easy integration, raw-socket mode allows Lagopus to run on LXC containers. Running Lagopus on lightweight containers allows creating virtual scenarios that require few resources. However, this working mode has only been designed for testing OpenFlow controller applications, not for real use case network applications. Therefore, raw-socket mode is a good first step for beginners working with Lagopus, but for advanced applications, DPDK mode is the suitable configuration.

4.1.3 DPDK mode (KVM)

For real use cases, where performance and scalability are major factors, DPDK mode is the right configuration for Lagopus. The Lagopus software switch leverages DPDK for high-performance packet processing and forwarding. As studied in previous sections, DPDK technology moves both packet processing and NIC device handling from kernel space to user space. This decoupling of the NICs from the kernel requires a special configuration. In order to configure DPDK in the machine's environment, there is a total of three required steps (sketched with example commands after this list):

1. Load the DPDK-provided kernel modules, igb_uio and rte_kni, to enable user space NIC drivers and network packet processing.

2. Make huge pages available to DPDK by reserving 256 pages of 2 MB huge pages.

3. Unbind the NICs from the system's kernel so they can be used by DPDK in user space. This is achieved using the DPDK-provided script dpdk-devbind.py.
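The three steps map onto commands along the following lines. This is a minimal sketch assuming a typical DPDK build tree; the module paths and the PCI address are illustrative:

# 1. Load the DPDK kernel modules (igb_uio depends on the stock uio module).
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
insmod x86_64-native-linuxapp-gcc/kmod/rte_kni.ko

# 2. Reserve 256 huge pages of 2 MB and mount the hugetlbfs filesystem.
echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# 3. Unbind the NIC from its kernel driver and hand it over to igb_uio
#    (the PCI address is illustrative; --status lists the available devices).
./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0
./usertools/dpdk-devbind.py --status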

After setting up the environment with the changes listed above, and generating a suitable DSL configuration file for Lagopus, the virtual switch is ready for use in DPDK mode.

Nevertheless, the DPDK working mode presents some peculiarities that complicate its integration in a VNX-powered virtual environment. Unlike in raw-socket mode, DPDK-enabled Lagopus does not support running on LXC containers. Hence, in order to virtualize with VNX a box that runs DPDK-enabled Lagopus, it is only possible to rely on KVM. This introduces a major drawback in the virtual environment deployment, as DPDK-enabled Lagopus will run on KVM virtual machines instead of LXC containers, which is far more resource-consuming. Moreover, running on a virtual machine introduces an extra virtualization layer that will reduce the forwarding performance.

For virtualization with KVM, VNX uses QEMU to emulate the virtual NIC of a spawned virtual machine. In particular, VNX configures QEMU by default to emulate the Intel® 82540EM Gigabit Ethernet Controller, also known as the e1000 NIC. As seen in previous sections, the emulation of the e1000 NIC provides support for DPDK-enabled applications in virtual machines. The result is a virtual environment that contains KVM virtual machines running the DPDK-enabled Lagopus software, in which Lagopus utilizes DPDK's EM PMD for the emulated e1000 NICs.

One of the most important aspects of the Lagopus DPDK implementation is that it allows working with a great variety of configurations, depending on the user's needs and hardware limitations. The right choice of running configuration will determine whether the Lagopus switch works correctly, and at its best, on the underlying hardware. The main idea of these configurations is the assignment of the system's CPU cores to the DPDK application. There are many possible combinations depending on users' needs, but typically some CPU cores are assigned to I/O operations in reception and transmission, while others are assigned to packet processing.

Due to hardware limitations, the proposed virtual environment uses KVM virtual machines with the minimum requirements for DPDK-enabled Lagopus to work. According to the official Lagopus wiki, the minimum requirements are at least 2 cores and at least 2 GB of RAM. Hence, there are only two available cores to be assigned. For this virtual environment, Lagopus runs using the following configuration:

lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance

Lagopus switch command options can be classified into three groups: global, Intel DPDK, and datapath. According to the configuration introduced above, for the Intel DPDK option group Lagopus uses CPU cores 0 and 1, as "-c3" is a hexadecimal bitmask of the CPU cores to use: 0x3, i.e. binary 11. In addition, the "-n2" option indicates that the CPU in use supports a dual memory channel. Regarding the datapath group, option "-p3" declares the hexadecimal bitmask of the ports to be used, which in this case selects port 0 and port 1 (again 0x3, binary 11). The assignment of CPU cores to the available ports is performed using the balance policy. Last but not least, as global options for the execution, the example configuration redirects output logs to the /var/log/lagopus.log file and runs the process as a daemon.
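For readability, the same command is repeated below with each option group decoded in comments; this is only an annotated copy of the command above, not a new configuration:

# Global options: log to /var/log/lagopus.log and run as a daemon.
# DPDK EAL options: -c3 -> core mask 0x3 (binary 11): cores 0 and 1;
#                   -n2 -> two memory channels.
# Datapath options: -p3 -> port mask 0x3 (binary 11): ports 0 and 1;
#                   --core-assign balance -> balance cores across ports.
lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance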


4.2 Ryu SDN controller installation

The Ryu SDN controller can be easily integrated into the proposed virtual SDN environment, as it can run on LXC. Moreover, its installation is straightforward: just download a tar file that contains all the packages necessary for Ryu to work. Its execution is very simple, as it only requires as arguments the SDN modules that we are interested in.

It is worth noting that the order of the modules in the command execution is extremely important when running Ryu. The list of modules is interpreted as a prioritized list, which means that if some functionalities overlap, such as OpenFlow events, only the module with the highest priority will take care of that functionality. In other words, lower-priority modules whose functionalities overlap will be shadowed.

For our virtual environment we have developed a simple Bash script, start-ryu.sh, that makes Ryu utilize the sample l2switch module by default; optionally, the user may introduce a custom list of SDN modules to be used by Ryu. Thus, when the virtual scenario is deployed, users just need to run this script, and a Ryu SDN controller will be listening for new OpenFlow devices.

#!/bin/bash
#
# Script: start-ryu.sh
#

if [ -z "$1" ]
then
    # Default: run Ryu's sample L2 switch application
    BASE="/usr/local/lib/python2.7/dist-packages/ryu"
    APP="${BASE}/simple_switch_13.py"
    echo "Running Ryu's sample l2 switch application ..."
else
    APP="$1"
    echo "Running custom application $1"
fi

ryu-manager --log-file /var/log/ryu.txt --verbose $APP
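For instance, the script can be invoked as follows; the custom module path in the second command is purely illustrative:

# Run the default sample L2 switch application
./start-ryu.sh

# Run a custom module instead (path is illustrative)
./start-ryu.sh /root/ryu-apps/sdn_mon_rest.py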

4.3 Automation for environment's deployment

In order to generate a new custom filesystem that will be used as the base image for virtual machines, VNX creates a temporary LXC/KVM machine where the necessary changes are applied and then saved into the filesystem for future use.


For easy portability among different systems, the deployment of the environment has been automated. In particular, the Lagopus installation process has been fully automated, which is the element with the biggest impact on the environment's deployment. This was achieved by compiling the code just once, during the generation of the filesystem; to do so it was necessary to tell KVM to utilize host-passthrough mode virtualization. The result is a newly generated filesystem for Lagopus that contains an already compiled version of Lagopus plus the SDN-Mon extension, which greatly speeds up the deployment of virtual scenarios. It is worth noting that this is not the default behavior of VNX for filesystem generation, as using host-passthrough mode implies passing all the CPU features to the virtual machine, which in most cases is unnecessary as well as inefficient.

The reason for using host-passthrough mode is that Lagopus's DPDK implementation requires the SSSE3 CPU flag. Thus, when VNX generates the filesystem, it creates a temporary KVM machine that has that CPU flag enabled, so the Lagopus DPDK code can be compiled and saved into the resulting filesystem. Basically, this procedure creates a filesystem that can be used to quickly spawn machines that contain DPDK-enabled Lagopus code ready to run. Prior to this improvement in the automation process for a scenario deployment, VNX spawned machines containing the DPDK-enabled Lagopus code, and later on the user had to go machine by machine compiling the code.

There is a set of Bash scripts in charge of generating the required filesystems for this SDN virtual environment: one for end hosts, one for the Ryu SDN controller, and three for Lagopus switches (DPDK mode on KVM, raw-socket mode on KVM, and raw-socket mode on LXC). The code of these Bash scripts may be found in Appendix 8.

4.4 Scenarios

A set of virtual scenarios with different network topologies has been developed. Such a variety of scenarios has allowed testing the SDN-Mon framework algorithms, as well as running some initial performance tests. Regarding the latter, it has to be noted that these are virtualized scenarios, which means the performance results will differ from the results obtained with real physical scenarios.

Depending on the available physical resources, two types of virtual scenarios have been developed: standalone and distributed. Depending on the size and requirements of the deployed scenario, virtualization in a single physical server may or may not be enough. For this work, a desktop with 8 GB of RAM and an 8-thread Intel i7-2600 CPU @ 3.40GHz has been used. The end hosts and the Ryu SDN controller run on LXC, hence they consume few resources; however, DPDK-enabled Lagopus is based on KVM, which needs many more resources than an LXC container. For the minimum functioning of Lagopus, at least two virtual cores and 2 GB of RAM are required. This limited the development of virtual scenarios in a single desktop to network topologies with a maximum of three Lagopus switches. For network topologies with a higher number of Lagopus switches, a distributed configuration for deploying virtual scenarios was implemented. This configuration will be seen in more detail in section 4.4.3.

4.4.1 Virtual scenarios with VNX

One of the most important elements in the virtualization of scenarios with VNX is the use of Linux Bridges to emulate the networks that interconnect the spawned virtual machines and/or containers. In our use case this approach is not completely suitable, as emulating direct links between switches, or between switches and end hosts, with network bridges is an inefficient way to implement point-to-point links.

Using a Linux Bridge to emulate a network is a good practice, as it provides an easy way to interconnect multiple devices in a virtual environment. However, in the scenarios that this SDN virtual environment includes, not all the connections among devices can be interpreted as networks. As we will show in detail in the upcoming sections, there are many connections between Lagopus switches and end hosts that are simple point-to-point links. In such cases, a virtual Ethernet pair (veth) is a more suitable solution. Using veth technology to simulate point-to-point links improves performance, as none of the forwarding logic and additional functionalities of a Linux Bridge are triggered in the process. A veth pair simply establishes a virtual cable with two virtual devices as endpoints. Thus, veth not only enhances performance, but also reduces the complexity of the virtualized network, as it resembles a real network more closely.
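For reference, a veth pair can be created by hand with the iproute2 tools; this is a minimal sketch where the device names and the target namespace PID are illustrative:

# Create a veth pair: a virtual cable with two device endpoints.
ip link add veth-h1 type veth peer name veth-sw1

# Hypothetical: move one endpoint into the network namespace of a
# container (identified here by PID 4242) to act as that node's NIC.
ip link set veth-h1 netns 4242

# Bring the host-side endpoint up.
ip link set veth-sw1 up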


Figure 4.1: Low-level view of a virtual scenario

Unfortunately, VNX does not currently support the interconnection of virtual devices using veth pairs. Figure 4.1 represents a very simple SDN-enabled virtual scenario with a single Lagopus switch, a Ryu SDN controller, and two end hosts connected to the switch. The figure shows how point-to-point links such as h1-Lagopus or Lagopus-h2 have been implemented as Linux Bridges named link1 and link2 respectively. As seen from the picture, in these cases using a veth pair would have been a much more efficient and cleaner solution. Nevertheless, it is expected that VNX will support veth technology in the near future.

On the other hand, the figure also shows how a Linux Bridge is “correctly” used to simulate the MgmtNet network. This is a network that provides management access to the network devices in the scenario from the host (which is represented as a personal computer). Additionally, it is worth noting that the virbr0 Linux Bridge is represented in this scenario as well. This bridge is created by default by the virtualization software and is very useful, as it provides each virtual machine/container with access to external networks. Hence, the virbr0 bridge not only emulates a network, but also provides NAT so that the virtual machines/containers of the scenario can communicate with the Internet.

4.4.2 Standalone scenarios

Following the approach introduced in the previous section, a set of virtual scenarios that can run on a single physical server has been developed. Each scenario has been deployed in two different versions, one for each of the two available working modes in Lagopus: raw-socket mode and DPDK mode. These scenarios have been designed with linear network topologies in order to avoid network loops as well as to simplify the forwarding requirements.

In addition, each presented scenario will show the following performance metrics:

• Round-Trip-Time (RTT): Time taken by an ICMP packet to reach the receiver and get back to the sender. Measured with the Linux built-in ping tool.

• Throughput: Effective number of bits per second sent from an end host to another. Measured with the iperf tool using TCP packets and taking the average value after a 60-second run.

In all of the developed scenarios, the network metrics mentioned above are measured by sending end-to-end traffic: for RTT, the first end host pings the farthest end host in the network topology; for throughput, the first end host is configured as an iperf client, and the farthest one as an iperf server.
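The measurements can be reproduced with commands along these lines (the destination address is illustrative):

# RTT: ping the farthest end host from the first one.
ping -c 10 10.0.0.2

# Throughput: run iperf as a server on the farthest end host, and as a
# TCP client on the first end host for a 60-second run.
iperf -s
iperf -c 10.0.0.2 -t 60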

Simple scenario

This scenario presents the simplest network topology, in which two end hosts, h1 and h2, are connected to a virtualized Lagopus switch sw1. The XML template for this scenario can be found in Appendix 9.1.


Figure 4.2: Network topology of simple scenario

In this first scenario with Lagopus, the performance results already show that DPDK technology greatly improves the packet processing speed in Lagopus. The improvement in the throughput metric is especially remarkable.

             Raw-socket (KVM)   DPDK
RTT          1 ms               0.55 ms
Throughput   8 Mbps             1.2 Gbps

Table 4.1: Network performance metrics for simple scenario

Double scenario

The double scenario presents a linear network topology with two virtualized Lagopus switches, sw1 and sw2, with end hosts h1 and h2 connected to each end of the network. The XML template for this scenario can be found in Appendix 9.2.


Figure 4.3: Network topology of double scenario

Figure 4.3 shows the second scenario, where we can start seeing that a virtualized Lagopus switch introduces a significant delay even when using DPDK technology. The RTT metric has nearly doubled its value with the addition of a second Lagopus switch. On the other hand, the throughput metric remains the same.

             Raw-socket (KVM)   DPDK
RTT          2 ms               0.95 ms
Throughput   7 Mbps             1.25 Gbps

Table 4.2: Network performance metrics for double scenario

Cascade-Triple scenario

The cascade-triple scenario is the largest standalone scenario that has been deployed, due to hardware limitations. This scenario is composed of a cascade network topology with three virtualized Lagopus switches, sw1, sw2, and sw3, with end hosts connected to each end of the network. The XML template for this scenario can be found in Appendix 9.3.


Figure 4.4: Network topology of cascade-triple scenario

The network performance results of this scenario show the delay introduced by each virtualized Lagopus switch. The RTT metric has now increased its value, confirming that each Lagopus switch adds a delay of approximately 0.5 ms in DPDK mode. In terms of throughput, it can be seen that the network performance starts to suffer with the addition of a third virtualized Lagopus switch. Unlike the RTT metric, this particular metric does not seem to decrease linearly when adding more Lagopus switches.

h1 ↔ h3      Raw-socket (KVM)   DPDK
RTT          3 ms               1.6 ms
Throughput   4.5 Mbps           1 Gbps

Table 4.3: Network performance metrics for cascade-triple scenario

4.4.3 Distributed scenarios

The proposed SDN virtual environment supports a distributed configuration that allows expanding virtual scenarios when hardware limitations make it impossible to deploy them in a single physical server.


VNX allows creating separate virtual scenarios in different physical servers that can later be merged as if they were a single virtual scenario. The technology used to connect separate VNX scenarios is GRE tunneling. Each physical server that runs a VNX scenario which is part of a distributed scenario configures a GRE tunnel endpoint that links the node with another node, according to the virtual scenario topology. This way, depending on the connections of the virtualized network topology, a GRE tunnel is created for each pair of servers involved in the scenario.

Each virtual scenario that is part of a distributed scenario utilizes an Open vSwitch bridge for those virtualized network links that continue their connections through a neighbor server. In this situation, on those affected network links, Open vSwitch bridges replace the legacy Linux Bridges, since Open vSwitch provides important features such as VLAN tagging and GRE tunnel endpoint configuration. These features allow VNX to easily interconnect separate virtual scenarios, building a distributed scenario.

Figure 4.5: GRE tunnel endpoint configuration using OVS fake bridges


Figure 4.5 depicts a low-level view of an example distributed scenario with two servers. The first server contains an SDN controller c0 and a Lagopus switch sw1, while the second server contains a Lagopus switch sw2. In this example, sw2 wants to connect to sw1 for the data plane communication, and to the SDN controller c0 for the control plane communication. VNX has to configure a GRE tunnel between the two servers to extend the MgmtNet and linksw1-sw2 bridges from one server to the other. An ideal situation would be configuring GRE tunnel endpoints on network interfaces dedicated to each one of these bridges; however, in most cases servers only have a single network interface. Thus, benefiting from OVS support for VLAN tagging, the traffic from MgmtNet and linksw1-sw2 is differentiated using VLAN tag IDs 1000 and 1001 respectively. Then, the two types of traffic are sent out through another OVS bridge which is configured as the GRE tunnel endpoint. This is achieved by following the concept of the fake bridge, which is in charge of adding/removing the VLAN tag, acting as an intermediary between the GRE endpoint and the real OVS bridge of the virtual scenario. When the packets arrive at the neighbor server, the flows are split and sent to the right OVS fake bridge based on the carried VLAN tag ID. The tag is removed and the packet is handed to the paired OVS bridge, which is the real bridge of the virtual scenario. This configuration can be extended to more VLAN tag IDs, although overloading the GRE tunnel endpoint with many different flows could lead to an undesired network bottleneck.
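In OVS terms, the setup on one server can be sketched as follows; this is a simplified illustration of the concept, not the exact script from the appendix, and the bridge names and remote IP address are illustrative:

# Bridge that terminates the GRE tunnel towards the neighbor server.
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre options:remote_ip=10.10.0.2

# Fake bridges created on top of br-tun: each one adds/removes one VLAN tag.
ovs-vsctl add-br fake-mgmtnet br-tun 1000
ovs-vsctl add-br fake-linksw1-sw2 br-tun 1001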

The creation and configuration of the GRE tunnel endpoint with VLAN tagging is achieved by executing, on each implicated server, a Bash script developed for this purpose, named create-tunnel-cascade. This script takes as its argument the IP address of the remote server of the GRE tunnel. It is strictly necessary to execute this script on both servers in order to establish the tunnel correctly. The code for this script can be found in Appendix 10, along with the code for the cleanup-tunnel script, which removes the GRE tunnel endpoint.

Below, an example distributed scenario is introduced. It is worth noting that, due to network performance issues, Lagopus has been deployed only in DPDK mode in distributed configurations. Initially, the distributed scenarios were tested running Lagopus in raw-socket mode; however, its limited performance created a bottleneck in the GRE tunnel, which made packets unable to reach one physical server from the other.

Double scenario

The double scenario introduces an example distributed scenario composed of two servers. Each server runs two Lagopus switches with an end host client connected to each switch. Additionally, the first server runs a Ryu SDN controller that manages all the Lagopus switches appearing in this distributed scenario. The XML templates for this distributed scenario can be found in Appendix 9.5 and Appendix 9.6.


Figure 4.6: Network topology for distributed double scenario

The following table presents the resulting network performance metrics achieved in a distributed scenario. In order to show the reader how the GRE tunnel affects network performance, the RTT and throughput metrics have been obtained from communications among different end hosts. Unlike the previously introduced scenarios, for this distributed configuration the connectivity from h1 to h3, from h2 to h3, and from h1 to h4 has been tested.

             h1 ↔ h3    h2 ↔ h3    h1 ↔ h4
RTT          1.7 ms     1.25 ms    2 ms
Throughput   900 Mbps   900 Mbps   895 Mbps

Table 4.4: Network performance metrics for distributed double scenario

Comparing these results with the performance of the standalone scenarios [Table 4.2], it is proven that the GRE tunnel adds an important bottleneck to the virtualized network. It is worth noting, though, that the network performance through the GRE tunnel depends on the network infrastructure of the laboratory (i.e. network links, switches, routers). For these tests, a Gigabit Ethernet network switch connected both servers, which explains the throughput dropping to a value of around 900 Mbps.

4.4.4 Modified scenario for paper contribution

The SDN-Mon experiments required a testing scenario that re-used some resources available in the testing lab; the reason was that a Ryu SDN controller was already installed on a hypervisor in the lab. The resulting scenario is similar to the cascade-triple standalone scenario, but in this case the scenario does not contain a Ryu SDN controller. Instead, this modified version of the scenario connects the Lagopus switches of the virtual network to an external Ryu SDN controller that was already set up.

Unlike the distributed scenario configuration, this scenario is slightly different, as it is not possible to modify the SDN controller node. Hence, we cannot deploy a second VNX virtual scenario and create fake bridges and GRE tunnels for the switch-controller communication among the available physical nodes. In other words, the already running Ryu SDN controller has to be re-used without any kind of manipulation.

Therefore, in this modified scenario a third-party Ryu SDN controller establishes communication with the Lagopus switches of a standalone virtual scenario. Concretely, this virtual scenario contains the maximum achieved number of Lagopus switches in the tested server, which is a total of three switches. The code for this VNX scenario may be found as cascade_lagopus_ryu_dpdk_external_busy.xml in Appendix 9.4.


Figure 4.7: Low-level view of modified scenario with an external Ryu controller

In order for this virtual scenario to work with an external SDN controller, the usual MgmtNet network can no longer be used, since there is no VNX implementation on the controller's side. Thus, the public IP address of the SDN controller node is the IP address used by the Lagopus switches to set up the OpenFlow channel. Figure 4.7 depicts a low-level view of the implemented scenario, which shows how the Lagopus switches utilize the Linux Bridge virbr0 that is created by default by the virtualization software. This bridge plays the role of the former MgmtNet bridge, but it also includes SNAT (NAPT) functionality, which allows the Lagopus switches to use private addresses while accessing external networks, sharing the public IP address of the physical node where the virtual scenario resides.
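The SNAT behavior provided through virbr0 corresponds, roughly, to an iptables masquerading rule like the one below; the 192.168.122.0/24 subnet is the usual libvirt default and is shown here only as an illustration, since the rule is installed automatically by the virtualization software:

# Masquerade traffic leaving the virtual subnet towards external networks.
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE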


Chapter 5

NorthBound Interface (NBI)

The SDN paradigm shows strong development in controller-switch communication, which receives the name of SouthBound Interface (SBI). However, this new network paradigm also provides another communication side on the controller, where network operators can interact with the underlying infrastructure: the NorthBound Interface (NBI). Thanks to this controller-operator communication path, users can retrieve critical information to monitor the behavior of the network.

5.1 Ryu REST NBI

The SDN controller chosen for this research work is Ryu. Despite being a simple controller meant to be used in laboratory scenarios, Ryu provides a powerful REST NorthBound Interface that can be easily extended, as it is based on a modular architecture.

Just like its fellow laboratory SDN controller, POX, Ryu has been developed in Python, which allows network programmers to quickly add new functionalities thanks to its high level of abstraction. The key aspect of Ryu is that it has a particular web server function corresponding to the Web Server Gateway Interface (WSGI), allowing programmers to utilize this function in order to create new REST APIs.

Ryu's official documentation concerning this topic is quite limited, only providing a brief presentation of existing REST implementations such as the ryu.app.ofctl_rest or ryu.app.vtep_rest libraries. However, the Ryu book takes an in-depth look at one of Ryu's most common built-in applications, simple_switch_13, and shows the development process of a new REST API for this particular application [25]. Another good source of information for developers is the official GitHub repository of Ryu [26]. The Ryu software ships by default, inside a folder called app, with a wide set of example applications, among which the user may find REST API implementations with different purposes, like retrieving network topology information or recorded OpenFlow tables.

Below we briefly present the key aspects of the structure proposed by the Ryu book, which will be followed later to create our own REST API for network traffic monitoring. For a better understanding, the example under study will be one of the built-in applications shipped with the official Ryu software: the Topology REST API application (see rest_topology.py).

The studied structure proposes a Ryu application, composed of two main classes, that provides a REST interface for our controller. Firstly, as depicted in the following fragment of code, a WSGI framework is initialized, allowing programmers to add custom Ryu applications. In this example, an application called topology_api_app, whose Python class is TopologyAPI, is connected to the initialized WSGI framework using the register method.

class TopologyAPI(app_manager.RyuApp):
    _CONTEXTS = {'wsgi': WSGIApplication}

    def __init__(self, *args, **kwargs):
        super(TopologyAPI, self).__init__(*args, **kwargs)
        wsgi = kwargs['wsgi']
        wsgi.register(TopologyController, {'topology_api_app': self})

Secondly, the actual Ryu application is programmed within another Python class, like TopologyController in our example. This class contains all the logic required to handle the HTTP requests sent by the network operators.

class TopologyController(ControllerBase):
    def __init__(self, req, link, data, **config):
        super(TopologyController, self).__init__(req, link, data, **config)
        self.topology_api_app = data['topology_api_app']

    @route('topology', '/v1.0/topology/switches', methods=['GET'])
    def list_switches(self, req, **kwargs):
        return self._switches(req, **kwargs)

The key element of this class is the use of the @route decorator, provided by the WSGI framework. This decorator allows programmers to easily configure the REST endpoints that the application will serve. As we can see in the code presented above, a new endpoint is configured by setting its name, its URL, and the allowed HTTP methods.

5.2 Extension for SDN-Mon

After this brief introduction on how to create a custom REST API on Ryu, let us move on to the use case of this thesis: the SDN-Mon traffic monitoring framework.


As detailed in previous chapters, SDN-Mon works around a central element: the Global Monitoring Table (GMT). Therefore, the approach for this extension is clear: a new REST API will be developed, through which users can make HTTP GET requests and retrieve all the information contained in this table. This allows external network operators to connect their own applications to the Ryu controller and obtain the monitoring information gathered by SDN-Mon. Thanks to the use of a common, standardized API like REST, a new possibility opens up: easy real-time access to critical information about the network status, which may be used for multiple purposes.

In this case we are going to see how an application as intricate as SDN-Mon is able to talk to the outside only through the thin layer that is the REST API. The resulting compound in the controller acts as a black box between the network operators and the network itself.

We implement this Northbound API following the Ryu community's best practices. By convention, the exchanged data format is JSON; hence, the SDN-Mon table should be parsed to JSON format and then sent through the REST API. The table's first argument is a unique ID (hash) of the monitored flow, an aspect that leads to two possible implementation mechanisms: the first, integrating the REST API with the Ryu app, allowing us to directly access the monitoring table in memory; the second, sending the whole table when receiving a REST request, to be processed afterwards by the user application, in which case the SDN-Mon REST API would work as a separate module. The second option is discarded because of the performance issues it presents in large networks, due to the big amount of information that can be stored in the tables. Sending big tables to a great number of users in every REST request would provoke high congestion in the network; on the contrary, the chosen first option processes the monitoring table directly in memory, highly improving communication between the parties involved, as they only ask for the information they need.

The drawback is that users no longer have the freedom to manipulate the whole table; they are now only able to access the pieces of information that the controller exposes through the so-called REST endpoints. This is why it is crucial to know the users' needs, such as IP addresses or network protocols, in order to develop these endpoints.

A simple example of an endpoint would be controller_API_URL/sw_id/id. Here the user would ask for all the flows monitored by a particular switch of the network by specifying the id parameter, which corresponds to the ID of the switch.

For this reason, the information stored in a GMT is analyzed and filtered based on all possible matching fields for the monitored flows. These matching fields are listed below:


• Switch ID

• Source IP Address

• Destination IP Address

• Source Port

• Destination Port

• Byte count

• Packet count

• Protocol

These fields introduce a set of possible REST requests where users may specify one particular argument present in the monitoring table, or even more elaborate requests, such as specifying a network prefix (network IP address + network mask) or a range of ports.

Before getting into the details of the implementation of each endpoint, let us see how the GMT's information is stored inside the SDN-Mon application. An actual example taken from SDN-Mon shows the structure of the Python dictionary in which the monitoring data is stored.

monitoring_table = {
    1: {
        3507289692362160130: {
            'src_ip': '74.125.226.77',
            'packet_count': 2,
            'src_port': 20480,
            'dst_port': 51148,
            'proto': 6,
            'byte_count': 128,
            'dst_ip': '192.168.1.101'
        },
        8340101101733500949: {
            'src_ip': '172.16.133.34',
            'packet_count': 10,
            'src_port': 52420,
            'dst_port': 16405,
            'proto': 6,
            'byte_count': 616,
            'dst_ip': '192.168.1.101'
        }
    },
    2: {
        6971737281905252357: {
            'src_ip': '172.16.133.78',
            'packet_count': 1,
            'src_port': 10982,
            'dst_port': 56639,
            'proto': 6,
            'byte_count': 64,
            'dst_ip': '192.168.1.101'
        }
    }
}

Our REST API application inherits from the SDN-Mon application, so it always has an updated version of this dictionary variable, which represents the GMT of the network. At the first level of this dictionary, the keys correspond to the IDs of the switches in the network, while the values correspond to the monitored flows. Each of these flows is in turn represented as another dictionary whose key is a hash value of the flow, and whose value is a set of key-value pairs that correspond to the stored statistics of the monitored flow.

Now that the structure of a GMT programmed in Python has been understood, we will briefly explain how each endpoint has been developed. For a better understanding of the resulting application, the code of the Python file sdn_mon_rest.py may be found in Appendix 11.1. The detailed list of all developed endpoints for the SDN-Mon REST API is as follows (a few example invocations are given after the list):

• GET /sdnmon/table
Simplest request of all available. Sends the whole global monitoring table.

• GET /sdnmon/sw_id/<dpid>
Sends the monitoring table of a particular switch. The target switch is specified by entering its datapath ID value through the variable <dpid>, e.g. /sdnmon/sw_id/0000000000000001.

• GET /sdnmon/src_ip/<ip_address>
Looks into the global monitoring table and sends a list of all monitored flows with a particular source IP address. The target source IP address is specified by entering a single IP address through the variable <ip_address>, e.g. /sdnmon/src_ip/172.16.133.34.

• GET /sdnmon/src_ip/<ip_address>/netmask/<mask>
A more advanced request, which sends a list of monitored flows whose source IP address belongs to a certain network. The network address is specified by entering the network's IP address through the variable <ip_address>, and its associated netmask with <mask>, e.g. /sdnmon/src_ip/172.16.133.0/netmask/255.255.255.0.

• GET /sdnmon/dst_ip/<ip_address>
Sends a list of all monitored flows that contain a particular destination IP address. The target destination IP address is specified by entering a single IP address through the variable <ip_address>, e.g. /sdnmon/dst_ip/192.168.1.101.

• GET /sdnmon/dst_ip/<ip_address>/netmask/<mask>
Sends a list of monitored flows whose destination IP address belongs to a certain network. The network address is specified by entering the network's IP address through the variable <ip_address>, and its associated netmask with <mask>, e.g. /sdnmon/dst_ip/192.168.1.0/netmask/255.255.255.0.

• GET /sdnmon/proto/<protocol_id>
Sends a list of monitored flows that use a particular network protocol. The protocol is specified by entering its associated ID, as specified at [https://github.com/osrg/ryu/blob/master/ryu/lib/packet/in_proto.py], through the variable <protocol_id>. Remember that the SDN-Mon implementation on Lagopus relies on the OpenFlow 1.3 protocol. E.g., for flows that use the UDP protocol the request would be /sdnmon/proto/17.

• GET /sdnmon/src_port/<port_number>
Sends a list of monitored flows that contain a particular source port number. The target source port is specified by entering the port number through the variable <port_number>, e.g. /sdnmon/src_port/52420.

• GET /sdnmon/src_port/<start_port_number>-<end_port_number>
A broader request than the previous one. Sends a list of monitored flows whose source port number is included in a particular range of port numbers. The port number range is specified by entering first the starting point through the variable <start_port_number>, and second the ending point using <end_port_number>. Both starting and ending points are included. E.g., the following request would cover flows on SSH's default port number (22): /sdnmon/src_port/10-100.

• GET /sdnmon/dst_port/<port_number>
Sends a list of monitored flows that use a particular destination port number. The target destination port is specified by entering the port number through the variable <port_number>. E.g., for HTTP flows the request would be /sdnmon/dst_port/80.

• GET /sdnmon/dst_port/<start_port_number>-<end_port_number>
Sends a list of monitored flows whose destination port number is included in a particular range of port numbers. The port number range is specified by entering first the starting point through the variable <start_port_number>, and second the ending point using <end_port_number>, e.g. /sdnmon/dst_port/50000-60000.

• GET /sdnmon/packet_count/<threshold>
Sends a list of monitored flows whose current packet count exceeds a particular threshold. This threshold is specified by entering an integer value through variable <threshold>, e.g. /sdnmon/packet_count/10.

• GET /sdnmon/byte_count/<threshold>
Same behaviour as the previous endpoint. In this case, the application sends a list of monitored flows whose current byte count exceeds a particular threshold. This threshold is specified by entering an integer value through variable <threshold>, e.g. /sdnmon/byte_count/600.

It is worth noting that this set of endpoints could be greatly expanded by combining them with each other in a query-string structure similar to URL?switch_id=...&dst_port=... [27]. However, the examples provided with Ryu's APIs are limited to static URLs, and developing a fully complete REST API for SDN-Mon is not our main objective. Thus, the implementation of a query-string based REST API is out of the scope of this work and is left up to future user needs.
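As an illustration of how these endpoints are consumed, the following minimal Python sketch queries a few of them with the requests library. The controller address (localhost:8080, Ryu's default WSGI port) is an assumption to be adjusted to the actual deployment, and the responses are assumed to be JSON bodies:

import requests

# Base URL of the Ryu controller's REST server. Address and port are
# assumptions (8080 is Ryu's default WSGI port); adjust to the deployment.
BASE = "http://localhost:8080"

# Whole global monitoring table (GMT).
gmt = requests.get(BASE + "/sdnmon/table").json()

# Monitoring table of a single switch, selected by its data path ID.
sw_table = requests.get(BASE + "/sdnmon/sw_id/0000000000000001").json()

# All monitored UDP flows (protocol ID 17 in OpenFlow 1.3).
udp_flows = requests.get(BASE + "/sdnmon/proto/17").json()

# Flows whose source IP address belongs to the 172.16.133.0/24 network.
net_flows = requests.get(
    BASE + "/sdnmon/src_ip/172.16.133.0/netmask/255.255.255.0").json()

print(gmt, sw_table, udp_flows, net_flows)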

5.3 Sample external applications

We can start by explaining the final architecture that has been composed in order to provide support to some sample external applications with different purposes, as we will see later in this chapter. Nevertheless, understanding the building process of such an architecture requires answering some questions that come to mind.

The first question that the monitoring implementation raises is that, just with the information provided by SDN-Mon, we, as network operators, do not know which exact path through the network a particular flow is taking. For instance, a third party develops a new application that relies on SDN-Mon to meet QoS agreements with its clients. Assuming that an end-host client should receive a 10 Mbps connection, and using the statistics that SDN-Mon has recorded, this application finds that the client's bandwidth is below that value. Then, how can the application interact with the network to improve the client's bandwidth? Maybe by using other built-in Ryu REST APIs, but the application still does not know the path followed by the flow, so there is not enough knowledge to redirect this traffic through a more suitable path.

A similar question arises when it comes to security-oriented applications. Thanks to the information obtained from SDN-Mon, it is possible to detect possible attacks in real time by analyzing the many different flows that circulate throughout the network. However, the problem of interacting with the network remains: the hypothetical application does not know the path taken by the malicious traffic, and thus it cannot be stopped in an efficient way.


In summary, SDN-Mon provides useful statistics that allow network operators to analyze the current behavior of the network, but this monitoring framework alone is not enough to work with external applications. It demands network operating and topology discovery mechanisms in order to build a complete, robust environment (Figure 5.1) where external users can retrieve the information needed for their custom applications.

Figure 5.1: Composition of Ryu modules

These above-mentioned issues will be analyzed from here on by developing sample applications that will show how a complete SouthBound-to-NorthBound workflow is finally achieved. Thanks to the virtual SDN-enabled network that has been deployed with the VNX software, there is a simulated real environment in which such a workflow can be realized. These applications will be simple Python scripts that will run on the user's host for an easier deployment. However, they could also be executed on a different server, simulating a real scenario as if they came from an external third party. As illustrated in Figure 5.2, the proposed applications will make REST queries to the Ryu controller, where the SDN-Mon framework resides, making use of the available endpoints, especially those that have been implemented for SDN-Mon.

Figure 5.2: External applications talking to SDN-Mon

5.3.1 Top-talkers applications

One of the most interesting aspects of monitoring networks is the ability to use the monitoring information for security purposes. In particular, a hot topic in this field is the detection of anomalies in a network. Having real-time statistics about the behavior of a network allows network operators to quickly detect suspicious traffic, identify it, and eventually interact with the network in order to mitigate its potential side effects.

The SDN-Mon framework provides relevant information that could be used to identify anomalies in a monitored network. The purpose of these developed applications is just to show an easy example of how to retrieve monitored data and classify it in a way that would help a security application process it in order to detect possible ongoing attacks in the network. The analysis of this information, using procedures such as tautologies, is out of the scope of this work.

In our use case, the top-talkers applications are introduced. These applications receive this name because their purpose is finding, in real time, which flows are the most active in a network. These most active flows are then displayed in a table which gets updated periodically. To achieve this, they send REST requests to the controller to retrieve the monitoring table from SDN-Mon through the endpoint /sdnmon/table. These REST requests are sent every interval time, a value preconfigured by SDN-Mon, which is the interval that the framework uses to update the global monitoring table from all switches in the network. After each response from the controller, the information collected by the application is processed, updating the top-talkers list that is being displayed in a table. Additionally, this interval time value is included in every response, which means that in case of an unexpected change of its value by the SDN-Mon framework, the external third-party application will notice such a change and adjust itself to the new value automatically.

During this processing of the information, the top-talkers are classified into five different groups based on parameters monitored for the flows. These groups are the following:

• src_ip ⇒ Addition of flows that share the same source IP address

• dst_ip ⇒ Addition of flows that share the same destination IP address

• src_port ⇒ Addition of flows that share the same source port number

• dst_port ⇒ Addition of flows that share the same destination port number

• 5-tuple ⇒ Unique flow, identified by its hash value

The last group, 5-tuple, is different from the other four groups, as it classifies each flow uniquely, while the rest are additions of multiple flows that share the same parameter.

The top-talkers in each group are ranked according to two other values extracted from the retrieved monitoring table: the byte count and the packet count. There is a separate classification for each of the two counting parameters, as there may be cases where the top-talker with the largest byte count does not have the largest number of packets. Thus, the real-time display of the top-talkers will show two similar tables with the above-mentioned groups [Figure 5.3].


Figure 5.3: Top-talkers display

For more information, the two developed applications, user_app_top_five.py and user_app_two.py, can be found in Appendix 11.2. The difference between them is, as their names suggest, the number of top-talkers displayed for each group: five for the former and just two for the latter.

In summary, these applications are just displaying in real time the top-talkers of the network, which could correspond to network traffic anomalies. Therefore, this can be treated as a simple graphical monitoring implementation; but, as has been said, the next step would be analyzing these results and triggering alarms, or even acting on the network.
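To make the mechanism concrete, the following minimal sketch reproduces the polling-and-ranking loop described above. It is not the code of Appendix 11.2: the controller address and the flows/interval response keys are assumptions made for illustration, and the actual schema is the one defined by sdn_mon_rest.py:

import time
from collections import defaultdict

import requests

BASE = "http://localhost:8080"                 # assumed controller address
GROUPS = ("src_ip", "dst_ip", "src_port", "dst_port")

def top_talkers(flows, metric, n=5):
    # Aggregate the chosen counter (byte_count or packet_count) per group.
    totals = {g: defaultdict(int) for g in GROUPS}
    for flow in flows:
        for g in GROUPS:
            totals[g][flow[g]] += flow[metric]
    ranking = {g: sorted(t.items(), key=lambda kv: kv[1], reverse=True)[:n]
               for g, t in totals.items()}
    # The 5-tuple group ranks individual flows instead of aggregates.
    ranking["5-tuple"] = sorted(flows, key=lambda f: f[metric], reverse=True)[:n]
    return ranking

interval = 10  # seconds; overwritten by the value attached to each response
while True:
    resp = requests.get(BASE + "/sdnmon/table").json()
    flows, interval = resp["flows"], resp["interval"]   # hypothetical keys
    for metric in ("byte_count", "packet_count"):       # one table per counter
        print(metric, top_talkers(flows, metric))
    time.sleep(interval)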

5.3.2 Topology discovery

The other important issue that has to be addressed in our scenario is operating the network. To do so, the workflow consists of obtaining relevant information about the physical infrastructure the SDN controller is managing and, afterwards, exploiting such data to know which changes have to be applied to the network according to the network operators' needs. For this use case, the developed Topology discovery application is introduced.


The goal of this project is the monitoring of all flows transmitted throughout the network and then, after a real-time analysis, interacting with the network based on the results of such analysis. In order to achieve this interaction with the network, a topology discovery process must be performed first. This information can be retrieved from Ryu's built-in topology manager module, rest_topology.py, which uses a REST API. The current SDN-Mon application sdnmon_rest.py can be run simultaneously with this module, which basically provides the ryu_topology api library and an associated set of available endpoints, composing its own REST API. Notice that the URLs of these endpoints are completely independent from SDN-Mon's, as this is an independent module. This list of endpoints is as follows:

• GET /v1.0/topology/switches
Sends a list with all the OpenFlow switches of the network.

• GET /v1.0/topology/switches/<dpid>
Sends detailed information on one of the above listed switches. The target switch is specified by entering its associated data path ID through variable <dpid>. E.g. /v1.0/topology/switches/0000000000000001.

• GET /v1.0/topology/links
Sends a list with all available links in the network.

• GET /v1.0/topology/links/<dpid>
Sends a list of links connected to a particular switch of the network. The target switch is specified by entering its associated data path ID through variable <dpid>. E.g. /v1.0/topology/links/0000000000000001.

• GET /v1.0/topology/hosts
Sends a list with all discovered hosts in the network.

• GET /v1.0/topology/hosts/<dpid>
Sends a list with all hosts that are connected to a particular switch in the network. The target switch is specified by entering its associated data path ID through variable <dpid>. E.g. /v1.0/topology/hosts/0000000000000001.

Using these listed endpoints, a custom Python library, topo_app.py, is built to demonstrate how network topology information can be retrieved in an SDN-enabled network. This library is meant to be used by third-party applications, such as network topology graphic visualization, or for real-time measurement of network-related statistics. Its full code may be found in Appendix 11.3.

This library can be split into two main features. First, it provides a quick view of the current list of switches, links, and discovered hosts in the monitored network. This is achieved by simply querying each one of these three global listing endpoints and printing the results in the terminal. Second, the "big router" feature is introduced. This is a mechanism that aims to simplify the underlying network topology complexity. As its name says, what this application does is treat the whole mesh of switches and links as a big black box that can behave as a big router with an associated routing table. This routing table is composed of IP addresses that correspond to end hosts that have connected to the network. Each one of these recorded IP addresses has an associated interface through which it can be reached. These "interfaces" are built using tuples with a (switch ID, port ID) structure. An example is shown in Figure 5.4.

Figure 5.4: Big router approach

Following this mechanism, after running for some time, the monitored network becomes a logical big router with a routing table that can be of great use. The routing table shows at a glance which end hosts are connected to the network and through which points of connection, which are interpreted in this model as the interfaces of the big router.

This kind of information can be used for multiple purposes such as network traffic engineering or security, where particular attacks can be quickly stopped by acting on the affected interfaces of the big router. A simple Distributed Denial-of-Service (DDoS) attack could be handled by observing which points of connection to the network are being used by the already tracked attackers. With such knowledge, the network operator can intervene directly on the affected interfaces of the network, mitigating the effects of the attack, or even stopping it completely.

Hence, thanks to the built routing table, network operators could implement new services that rely easily on the classified information following a black-box model.
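A minimal sketch of how such a routing table could be assembled from the topology endpoints is shown below. The JSON layout of each host entry (the port, dpid, port_no and ipv4 fields) follows the usual conventions of Ryu's rest_topology module, but should be verified against the deployed version; the controller address is an assumption:

import requests

BASE = "http://localhost:8080"  # assumed controller address

def big_router_table():
    # Build {host_ip: (switch dpid, port number)} from the hosts that
    # Ryu's topology module has discovered so far.
    hosts = requests.get(BASE + "/v1.0/topology/hosts").json()
    table = {}
    for host in hosts:
        # Each end host is reachable through one "interface" of the
        # logical big router: a (switch ID, port ID) tuple.
        iface = (host["port"]["dpid"], host["port"]["port_no"])
        for ip in host.get("ipv4", []):
            table[ip] = iface
    return table

for ip, iface in big_router_table().items():
    print(ip, "->", iface)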

Discovery mechanism

A special section is dedicated to explaining how Ryu's built-in topology discovery module works and the problems that have been addressed during its deployment in this project's virtual SDN environment. In order to discover the network's topology, Ryu utilizes a common procedure which consists of the periodic injection of LLDP packets into the network.

While these injected LLDP packets traverse the network, the Ryu controller handles their Packet-In and Packet-Out events, learning by extracting information from them in the process. By repeating this scheme, the controller is able to discover all the links that belong to the network and how they are connected to the network switches. Thus, utilizing this mechanism, the controller gathers enough knowledge to virtually recreate a periodically updated version of the network's topology.

Implementation in VNX

The deployment of Ryu’s built-in application for topology discovery in ourSDN-enabled environment showed an important singularity on the LLDP learningprocess due to VNX virtualization technology. After executing the topology discoveryapp, an LLDP communication error was spotted as no links were discovered by theapplication. LLDP packets were actually being flooded into the network from allswitches interfaces, but they did not continue their paths as they were not beingreceived.

For the SDN-enabled environment, VNX employs Linux Bridge to emulate links between switches and end hosts. The root cause of Ryu's inability to discover links was that these Linux Bridges were silently dropping all LLDP messages. The reason for this is that the discovery mechanism floods the network with LLDP packets whose destination MAC address is the multicast address 01-80-C2-00-00-0E; however, such an address is within the range reserved by IEEE 802.1D-2004 for protocols constrained to an individual LAN. Hence, LLDPDUs will not be forwarded by MAC Bridges that conform to this IEEE standard.

A possible fix to this issue [28] consists of changing the group_fwd_mask parameter of every Linux Bridge used in the environment. In order to let Linux Bridge forward multicast LLDP packets, the group_fwd_mask parameter should be changed as follows:

echo 16384 > /sys/class/net/brX/bridge/group_fwd_mask

Where brX corresponds to the name of each bridge in the environment (the value 16384 equals 2^14, the bit in the mask that corresponds to the 0E suffix of the LLDP multicast address). At [29] there is detailed information on the patch for Linux Bridge to forward frames within the 01-80-C2-00-00-xx range.
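Since a VNX scenario may contain many bridges, the change can be applied to all of them in one go. The following sketch is a convenience assumption rather than part of the deployed tooling; run as root, it patches every Linux Bridge that exposes a group_fwd_mask file under /sys/class/net:

from pathlib import Path

# 16384 = 0x4000: bit 14 of group_fwd_mask, which corresponds to the
# 01-80-C2-00-00-0E LLDP multicast address.
LLDP_MASK = "16384"

for mask_file in Path("/sys/class/net").glob("*/bridge/group_fwd_mask"):
    mask_file.write_text(LLDP_MASK)            # requires root privileges
    print("patched bridge", mask_file.parent.parent.name)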


Alternative Python implementation

Additionally, it must be noted that there is an alternative to this implementation, which would be extending the SDN-Mon REST module to perform the topology discovery. However, the proposals found, such as [30], do not meet our criteria, as they suggest acknowledging and updating the switches and links discovered in the network only when an EventSwitchEnter event is triggered. This is not ideal, as we expect constant updates on the network topology, especially for link failures or end hosts that come and go. Therefore, in order to retrieve this updated information, other possible triggered events such as Packet-In would have to be extended, so that their associated methods record such information during the event. In particular, the implementation of this discovery mechanism in a Packet-In event would be very costly, since the high load of LLDP packets would completely overload the network's controller.

In summary, it is better to keep the topology discovery feature in a separate module unless the Ryu application to be developed relies on network topology information, which is not the case for the SDN-Mon framework.

5.4 SDN-Mon + Topology combo

This application is the result of combining the SDN-Mon framework capabilities with Ryu's built-in topology discovery module. SDN-Mon provides important data on monitored flows, while the topology discovery module gives us detailed information on the monitored network. Mixing all the retrieved information, an external application could detect which exact path is being taken by a monitored flow. This could be of great use for purposes such as network traffic visualization, network traffic engineering, or even security applications, where the paths taken by malicious flows can be seen at a glance, hence allowing network operators to conveniently operate on the network while having information on the underlying physical infrastructure.


Figure 5.5: Flows movement detection

In Figure 5.5, the mechanism employed to address the flow movement issue is analyzed. The example topology shows three OpenFlow switches interconnected in a triangle, each switch with an end host connected to it. Client h1 sends traffic to h2, identified as flow f1, which goes directly from switch s1 to switch s2. On the other hand, client h2 sends traffic back to h1, labeled as flow f2, but in this case the flow takes a longer path through s2, s3, and eventually s1. Using SDN-Mon information, the application knows which is the source and which is the destination of each flow using the source and destination IP address values from the recorded 5-tuple. However, just with this information, the application does not know whether both flows took the same path or not, even in this simple example topology.

At this point, the topology discovery module comes in to differentiate between two flows that look alike. In the first place, a list of all switches that monitor a specific flow is generated. Such a feature requires a few adjustments to SDN-Mon's monitoring algorithm due to its sampling function. The SDN-Mon module programs OpenFlow switches to monitor or not a handled flow based on a sampling ratio, which means that with a ratio value lower than 1.0 there are chances that a switch skips monitoring some flows. In order to generate a complete list of all switches traversed by a flow, this sampling ratio should be set to 1.0. The other adjustment would be generating the list before it is analyzed by SDN-Mon and simplified into the GMT. In order to achieve an efficient monitoring, the SDN-Mon framework picks the least used switch to monitor the flow and discards the rest of the switches from the list. Thus, searching for a flow in the GMT would show that a single switch handles such a flow instead of the complete list. In the second place, using the discovered topology of the network allows the application to "draw" over the network the path taken by the flow by picking the network switches from the list.

Nevertheless, this procedure is not enough, since the path taken by a flow is discovered, but not its direction. In order to solve this problem, the SDN-Mon 5-tuple information completes the procedure: the source and destination IP addresses are used to acknowledge which direction the monitored flow took.

It must be noted that this implementation has been designed but not coded, unlike the other Python applications that have been explained in this section.
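Nevertheless, the designed mechanism can be sketched in a few lines. The sketch below is purely illustrative of the design, not thesis code: it assumes a sampling ratio of 1.0, a hypothetical per-flow list of traversed switch dpids obtained before the GMT simplification, link entries in the src/dst dpid format of Ryu's rest_topology module, and the flow direction already resolved from the 5-tuple source and destination IP addresses:

import requests

BASE = "http://localhost:8080"  # assumed controller address

def order_path(traversed, links, src_dpid, dst_dpid):
    # Restrict the discovered links to the switches that saw the flow,
    # then walk them from the flow's source switch to its destination.
    adj = {}
    for link in links:
        s, d = link["src"]["dpid"], link["dst"]["dpid"]
        if s in traversed and d in traversed:
            adj.setdefault(s, set()).add(d)
    path, current = [src_dpid], src_dpid
    while current != dst_dpid:
        nxt = next((n for n in adj.get(current, ()) if n not in path), None)
        if nxt is None:
            return None   # incomplete information: the path cannot be drawn
        path.append(nxt)
        current = nxt
    return path

links = requests.get(BASE + "/v1.0/topology/links").json()
# Hypothetical example: flow f2 of Figure 5.5, seen by s2, s3 and s1,
# flowing from the switch of h2 (s2) to the switch of h1 (s1).
print(order_path({"s2", "s3", "s1"}, links, "s2", "s1"))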

5.5 Active flows discovery

In this section, the last external application that has been developed for this project is introduced. This application receives the name Active flows discovery. It was born as a tool needed for a better adjustment of the experiments performed for the Adaptive and distributed monitoring mechanism in Software-Defined Networks paper. Its purpose was finding the actual active flows that are monitored by the SDN-Mon framework during a complete transmission of the already mentioned MAWI pcap file. This discovery process was crucial in the analysis of the performance tests for SDN-Mon, as the results could not be based on thousands of flows that quickly showed up in the network and then had no activity at all.

The Active flows discovery application follows a simple mechanism in order to consider certain flows as active and others as inactive. This mechanism consists of periodically retrieving the whole global monitoring table (GMT) from SDN-Mon and then analyzing the new data to detect which monitored flows remain active in the network. As was done in the Top-talkers application, the Active flows discovery application relies on the interval value, which is attached to all REST responses from the SDN-Mon framework REST API. Therefore, the application makes a REST request to SDN-Mon right after every time the monitoring framework updates its global monitoring table with the information from the network's switches.

The active flow determination process uses a predefined threshold that marks a particular flow as active or inactive. The value of such a threshold has been estimated based on the results from multiple tests using a huge amount of network traffic, like that contained in the MAWI pcap file.

The application keeps a record of the updated GMT and inspects each flow to detect an increase in its byte_count or packet_count parameters since the last update of the table. In case of no difference in either of these parameters, a flow-associated counter called timeout is increased by one. When the value of the timeout counter for a particular flow reaches the threshold value, such a flow is considered inactive and removed from the recording table. This timeout counter starts with a zero value when its associated flow is new in the table. If the counter has an ongoing count, and after a table update a difference in either the byte_count or the packet_count is detected, the counter is reset to zero, as the flow has shown that it is still active. Lastly, when the application is closed, the final recording table is saved into a JSON format file, where the total active flows can be found.
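A condensed sketch of this timeout mechanism is shown below. It is not the code of Appendix 11.4: the controller address, the flows/interval/hash response keys and the threshold value are illustrative assumptions:

import json
import time

import requests

BASE = "http://localhost:8080"   # assumed controller address
THRESHOLD = 5                    # updates without activity before removal

record = {}  # flow hash -> last seen entry, with its timeout counter

try:
    interval = 10
    while True:
        resp = requests.get(BASE + "/sdnmon/table").json()
        flows, interval = resp["flows"], resp["interval"]  # hypothetical keys
        for flow in flows:
            old = record.get(flow["hash"])
            if old is None:
                flow["timeout"] = 0       # new flow: counter starts at zero
            elif (flow["byte_count"] > old["byte_count"] or
                  flow["packet_count"] > old["packet_count"]):
                flow["timeout"] = 0       # activity detected: reset the counter
            else:
                flow["timeout"] = old["timeout"] + 1
            record[flow["hash"]] = flow
        # Flows whose counter reached the threshold are considered inactive.
        record = {k: f for k, f in record.items() if f["timeout"] < THRESHOLD}
        time.sleep(interval)
except KeyboardInterrupt:
    with open("active_flows.json", "w") as out:
        json.dump(record, out, indent=2)   # final table of active flows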

In addition, it is worth mentioning that this application could be even more interesting, as it could be used to enhance the SDN-Mon monitoring algorithm. The SDN-Mon framework focuses on monitoring the flows that traverse an operated network; however, after gathering all this information periodically, SDN-Mon does not perform any additional analysis besides looking for already existing flows after an update. Using the Active flows discovery application, a network operator could scrutinize the current status of the GMT after each update in order to detect which flows have become inactive. Then, such inactive flows would be removed from the GMT, freeing SDN-Mon's memory after each iteration and making the monitoring algorithm more efficient. In some areas of interest, such as anomaly detection, these removed flows would be recorded into an external database in order to still keep track of all monitored flows for future studies. The code of this application, active_flows.py, may be found in Appendix 11.4.


Chapter 6

Conclusion and Future Lines

The previous section has shown how small, simple applications can benefit from an SDN-enabled network and, more importantly, from the SDN-Mon framework. An important aspect of this work is the flexibility of the SDN environment that has been deployed with the VNX software. This virtual SDN environment will allow future researchers to work on many different lines, from developing more complex external applications, to analyzing and improving the performance of such an environment, to trying out real use cases.

6.1 Conclusions

In this work, the new network paradigm SDN has been introduced, and it has been analyzed how this paradigm presents new ways of monitoring network traffic. The SDN-Mon framework is an implementation that shows how SDN could help in monitoring networks.

The result of this thesis is a flexible, portable and friendly virtual environment that solves a hardware limitation problem. In order to test SDN-Mon's newest features for larger-scale networks, virtualization was required, since the lack of physical resources made it impossible to deploy each element of the architecture on physical servers, in particular the large number of Lagopus switches. Due to the problems that have been faced in the deployment of the virtual environment, the results obtained from the scenario may have been less than expected; however, these problems have led to working on a very disruptive and innovative technology such as DPDK.

This environment has made possible the experimentation with SDN-Mon capabilities in large-scale networks. Thanks to this flexible environment, the monitoring algorithm implemented by SDN-Mon has been tested successfully. However, this same environment cannot be used for real use case tests, as the great number of virtualization layers present in the architecture notably reduces the performance of the communications. The performance results that have been analyzed in previous chapters differ from real use case tests where each Lagopus software switch would run on dedicated hardware. In addition, such performance results will also vary depending on the server that has been utilized to deploy the proposed SDN virtual environment. The reason for this is that the achieved performance in this environment is extremely hardware-dependent due to the big amount of different virtualized resources, in particular those that employ the DPDK packet processing technology. Hence, running this virtual environment on two servers whose CPUs are slightly different will lead to different performance results.

Since this environment supports scaling, new networks with more complex topologies could be deployed in order to keep experimenting with SDN-Mon's already existing features, or even to test new ones. This could also be used as a workbench tool, where SDN-Mon capabilities are well tested before moving to a real production environment such as a telco network or a datacenter network.

The proposed distributed configuration for large-scale scenarios is a convenient solution when there is a lack of powerful hardware. This way, users can deploy new complex scenarios by splitting the network topologies into multiple sub-scenarios that run on low-profile hardware. However, this kind of configuration should not take priority over a single capable server, since the larger the scenario gets, the harder it becomes to configure. The creation of GRE tunnels among the targeted servers, or the use of VLAN tags to differentiate the traffic from multiple virtual networks, complicates the debugging process.

The resulting SDN-enabled virtual environment opens up multiple threads for future work. This work did not stop after the creation of such an SDN-enabled virtual environment; instead, it went beyond and implemented a custom REST API that demonstrates how a complete SDN NorthBound-to-SouthBound workflow behaves, as well as how third-party applications may benefit from an SDN monitoring framework like SDN-Mon.

However, the SDN-Mon framework implementation presents an important drawback. SDN-Mon is an extension of the OpenFlow architecture, but its implementation is tied to the Lagopus software. This limits the use of SDN-Mon features to network topologies that run Lagopus switches, a virtual switch software that is not as popular as Open vSwitch. A multi-vendor solution would have allowed users to use the software switch that best adjusts to their needs. Nevertheless, despite being tied to a single software, SDN-Mon proposes an interesting architecture that could be taken as the first stone for future work with other virtual switch software like Open vSwitch.

Summarizing, the proposed virtual environment can be used in laboratories to learn the SDN paradigm. The VNX software allows users to easily customize the virtualized network infrastructure, so it can be adjusted to users' needs. An example of this is the list of possible third-party applications with multiple purposes that has been studied in the previous section. That list shows the many different roads that users could take for future work. The addition of new modules to the SDN controller, while benefiting from the SDN-Mon framework, could lead to very interesting applications that may help to enhance networks as we understand them today.


6.2 Future Lines

There is a great variety of external applications that could implement new services relying on the rich monitoring information that the SDN-Mon framework provides. Combining this framework with other modules, like the analyzed Ryu built-in topology discovery module, allows network operators to solve network-related problems, or to improve existing network services that either were not possible to achieve before or were achieved in a less efficient way. In this section, some of these services are studied to see how monitoring with SDN affects them.

6.2.1 Anomaly detection

As introduced previously in this work, anomaly detection is one of the main applications concerning network traffic monitoring. The kind of information that the SDN-Mon framework provides, the 5-tuple, is ideal for detecting malicious traffic in a network. However, this information is just a starting point, since detecting anomalies in a monitored network is not an easy task.

First of all, this information has to be put in context: what the topology of the network looks like, and which kind of users/traffic is expected to use the network. In order to set this context, many more modules are required in the SDN controller, e.g. the topology discovery module. Furthermore, real attacks are not as simple to detect as our custom Top-talkers application showed. Considering the most active flows in a network as possible attacks is not a good practice, as real impact attacks are much more elaborate. Just using the 5-tuple information is not enough.

Security applications, such as Intrusion Detection Systems (IDS), usually utilize complex classification mechanisms based on rules or heuristics. These systems go through a learning process to be able to anticipate the completion of attacks in the network. Other systems use so-called signature-based mechanisms, but this kind of system is less efficient, as it only detects attacks that have been previously recorded. Afterwards, this resulting classified information is sometimes sent to other security systems, which can eventually detect ongoing attacks by correlating anomalies that have been detected previously.

However, these existing security applications are tied to certain points strategically located in the network. The SDN paradigm would now allow security experts to analyze the network with a global view of it, while benefiting from the central management of the whole underlying infrastructure. The advantage of having such a global view of the network is that it provides complete context, in which all network anomalies can be detected more easily.

In this work’s scenario, SDN-Mon framework would provide information on thecurrent flows in a network, which would be later analyzed by a security system likeIDS. Then the workflow would not stop here, since network security operators could

Page 84: DEVELOPMENT, DEPLOYMENT AND ANALYSIS OF A SOFTWARE …oa.upm.es/49587/1/TFM_IGNACIO_DOMINGUEZ_MARTINEZ_CASAN… · MPLS MultiprotocolLabelSwitching NAPT NetworkAddressPortTranslation

66 Chapter 6. Conclusion and Future Lines

use the SDN controller capabilities to automatically interact with the network to stopa detected attack.

6.2.2 Traffic Engineering (TE)

When it comes to network management and operations, one of the key elements is Traffic Engineering (TE). The goal of a network provider is to optimize the use of the existing network resources whilst providing more services to more users.

As of today, different solutions have been introduced, ranging from on-device TE to offline TE [31]. In the offline mode, the network is not able to adapt, since this mode relies on using predefined configurations specific to the operated network. In the on-device approach, the network devices themselves have to optimize the use of network links using protocols such as RSVP. This is costly when networks become large, as there is no network-wide optimization, and the flooding of protocols like RSVP into the network adds a high overhead that affects the performance as well as making the maintenance operations more complex.

The SDN paradigm simplifies the operation of TE on the network, as it decouples the logic from the network devices and moves it to SDN external applications. Having a global view of the network, and the ability to program network functionalities into the network devices, achieves a simplified TE solution that can adapt to changes efficiently.

A very simple use case is implementing the Dijkstra algorithm and applying new rules on the network switches as needed. In our SDN environment, an external application would use SDN-Mon information to track monitored flows. A new tree can be created logically by such an application using the updated information on the network's topology. Then, in real time, the application would install rules in the network devices to redirect each flow through the best path available. All this mechanism is performed without the intervention of distributed protocols. Then, using flow-related information, this behavior could be modified and readjusted to solve network service needs.
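A minimal sketch of this idea is shown below: Dijkstra's algorithm run over the links discovered by the topology module, with unit link weights assumed (the weights could instead be derived from SDN-Mon statistics). The controller address is an assumption, and installing the resulting rules on the switches is left out of the sketch:

import heapq

import requests

BASE = "http://localhost:8080"  # assumed controller address

def shortest_paths(links, source):
    # Dijkstra over the discovered topology; every link has weight 1.
    adj = {}
    for link in links:
        adj.setdefault(link["src"]["dpid"], []).append(link["dst"]["dpid"])
    dist, prev = {source: 0}, {}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue   # stale queue entry
        for neigh in adj.get(node, ()):
            if d + 1 < dist.get(neigh, float("inf")):
                dist[neigh], prev[neigh] = d + 1, node
                heapq.heappush(queue, (d + 1, neigh))
    return dist, prev  # distances and predecessor tree from the source

links = requests.get(BASE + "/v1.0/topology/links").json()
print(shortest_paths(links, "0000000000000001"))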

Summarizing, the SDN paradigm allows users to develop independent external applications that implement completely new ways to achieve TE.

6.2.3 Network traffic measurement

Using the real-time statistics captured by the OpenFlow switches in a network, external applications could generate real-time metrics of the network such as the RTT, latency, jitter, or current throughput for every available link in the network. In our virtual environment, the SDN-Mon framework provides real-time statistics recorded by each switch for each handled flow. In this section, ideas for the implementation of some of these network metrics are presented.


Throughput

In the case of calculating the current throughput of a link, the implementation is straightforward. The idea would be taking the difference of the byte_count parameter between the timestamps t and t-1, where t is the new update and t-1 is the previous update, and dividing it by the periodic update interval used by the SDN-Mon algorithm.

throughput_{flow_i} = \frac{(byte\_count_t - byte\_count_{t-1}) \times 8\,\text{bits}}{interval\_time} \qquad (6.1)

The result of Equation 6.1 is the throughput of flow i at timestamp t. It has to be noted that this corresponds to the throughput of a particular flow; hence, this amount of bits per second is traversing each link taken by the flow in the network. In order to obtain the total throughput of a link j, the amount will be the sum of the throughputs of all flows traversing such link [Equation 6.2].

throughput_{link_j} = \sum_{i} throughput_{flow_{i,j}} \qquad (6.2)

Latency

Another key network metric is the latency, also referred to as the delay. This metric may be defined as the amount of time it takes a packet of data to get from a source point to a destination point. In the case of our SDN environment, these designated points could be the source and destination end hosts connected to the network, and the measured packets would correspond to the flows generated between them.

In order to measure this type of metric, the 5-tuple information provided by the SDN-Mon framework can be used for each monitored flow. In a similar fashion to the calculation of the throughput metric, the idea is taking the difference of the packet_count parameter between the timestamps t and t-1 for each monitored flow.

delay_{flow_i} = \frac{update\_interval}{packet\_count_t - packet\_count_{t-1}} \qquad (6.3)

As depicted in Equation 6.3, the calculated delay for a flow i would be the update interval time of the SDN-Mon algorithm divided by the difference of the packet_count at different instants of time.

However, for this network metric, measuring its value on a particular link of the network is not as easy as it was for the throughput metric. In this case, the latency of a link in the network is not the sum of the delays of the flows that traverse such link. Nor is it the result of the equation for the delay of a flow i, since the packet ordering for each flow and each link is critical. In a certain interval of time, the number of sent packets of a flow i can be much greater than the number of packets of a flow j due to previous queue ordering in the network switches. Thus, different flows will present different delay values for a same network link in each update interval of time. The measurement of the delay for each link is out of the scope of this work and is left up to future users' work.

Jitter

The network metric called jitter is just the variation of the delay. This metric could be measured for each flow by keeping a record of its previous delay values and calculating the difference with the new delay value after an update from the SDN-Mon framework [Equation 6.4].

jitter_{flow_i} = delay_{flow_i}(t) - delay_{flow_i}(t-1) \qquad (6.4)
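The three per-flow metrics can be condensed into a short sketch. The snippet below applies Equations 6.1, 6.3 and 6.4 to two consecutive GMT snapshots of the same flow; the counter values are made-up illustrative numbers:

def flow_metrics(old, new, interval):
    # Equation 6.1: throughput in bits per second.
    throughput = (new["byte_count"] - old["byte_count"]) * 8 / interval
    # Equation 6.3: delay as the update interval over the packet increment.
    packets = new["packet_count"] - old["packet_count"]
    delay = interval / packets if packets else None
    # Equation 6.4: jitter as the variation of the delay.
    jitter = (None if delay is None or old.get("delay") is None
              else delay - old["delay"])
    return {"throughput_bps": throughput, "delay_s": delay, "jitter_s": jitter}

# Hypothetical counters taken one 10-second update interval apart:
old = {"byte_count": 1000, "packet_count": 10, "delay": 1.0}
new = {"byte_count": 13500, "packet_count": 20}
print(flow_metrics(old, new, interval=10))
# throughput = (13500 - 1000) * 8 / 10 = 10000 bps; delay = 10 / 10 = 1.0 s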

6.2.4 Quality of Service (QoS)

Highly tied to traffic engineering and network traffic measurement, Quality of Service (QoS) is another key aspect to consider in a network. The services supported by network providers must meet the requirements agreed with the users of such services. Among these requirements, the bandwidth or the delay metrics are the most common. For this reason, network traffic measurement is an essential tool that allows network providers to monitor all these requirements and trigger the corresponding alarms when a service is not meeting the agreement. Consequently, these alarms may trigger the necessary traffic engineering mechanisms in order to adjust the network behavior.


Figure 6.1: QoS system usage with more applications

An external application could implement this quality of service system, in which the network traffic engineering and the network traffic measurement systems interconnect [Figure 6.1]. This would allow network operators to enhance network services' maintenance in a programmable, centralized way. Using the SDN-Mon framework, a detailed monitoring process is achieved for each flow that traverses the network. The application that implements the QoS system would map the agreed requirements to each flow and interact with the network accordingly, using the mechanisms exposed by the network traffic engineering system.

6.2.5 Network traffic visualization

An interesting service that network operators can implement with the new SDN paradigm is network traffic visualization. Using modules in the SDN controller, like the previously analyzed topology discovery module, would provide an updated view of the whole monitored network. This kind of module can identify the main elements that compose a certain network: switches, links and end hosts. Using this sort of information, an external application might be able to recreate a logical architecture of the monitored network. However, given the capabilities of an SDN network, the visualization of the network could be extended in many ways.

An example of this type of application is the ONOS SDN platform GUI topology viewer [32]. This project aims at displaying real-time information on the monitored network. As depicted in Figure 6.2, the application shows a complete GUI where network operators can see valuable data, like detailed information on each switch as well as information on the controllers and monitored flows.

Figure 6.2: ONOS GUI topology viewer

A very intuitive feature is the ability to highlight the path taken by a certain flow in the displayed topology. Another useful feature for network planning is that the GUI supports including geographical coordinates for each monitored switch, so they can be printed accordingly on a real geographical map. In this figure's example, each switch has been mapped to its geolocation in the United States of America. As can be seen, the resulting topology takes the shape of the country.

6.3 Packet generator. Enhanced performance tests

Pktgen is a software tool that allows the dynamic generation of network traffic flows [33]. Pktgen is based on the DPDK processing framework in order to achieve a high packet generation speed. Among the supported features, it is worth noting that it can be easily configured using an interactive CLI, where the user can customize the flows to be transmitted.

Using this packet generator matched our interest in experimenting with SDN-Mon in our SDN-enabled virtual environment. The customization of network traffic flows with Pktgen allowed the generation of a certain number of known, predefined flows. The proposed scenario [Figure 6.3] had an end host which acted as a sender using Pktgen. The generated flows were sent throughout the network to an end host which acted as a receiver on the other side of the network. For simplicity, the monitoring of the received traffic was performed with the already installed software bmon [34], even though Pktgen also supports working in receiver mode. The first version of the code for this implementation can be found in Appendix 9.7.


Figure 6.3: Current implemented scenario with Pktgen

However, this architecture presented an important drawback that could not be avoided. In this scenario, running the Pktgen software in a KVM virtual machine is pointless, since the performance tests will show bad results due to the virtualization layer. For this initial deployment, the Pktgen software was deployed in a virtual machine in a similar fashion to the Lagopus switches, as it requires a DPDK-enabled NIC. Due to the absence of servers with such a type of NIC, a virtual machine had to be created for Pktgen to work with DPDK.

A proposed future work would be deploying a similar scenario in which the sender and the receiver run the Pktgen software on physical servers [Figure 6.4]. Using this configuration, users can truly benefit from the high-speed packet generation of a DPDK-based network software. In addition, this scenario could be very interesting to show the impact that the virtualization of the Lagopus switches may have on the performance tests.


Figure 6.4: Proposed future scenario with Pktgen

6.4 Real use case: datacenter networks

An interesting real use case would be a datacenter network. For such a scenario, the most famous open-source cloud software, OpenStack, is taken as an example. The main difference is that OpenStack uses Open vSwitch instead of Lagopus as its virtual switch software for interconnecting spawned virtual machines. Nevertheless, a hypothetical approach using Lagopus for this real use case would be the same.

In a datacenter where OpenStack is deployed, the resulting network is a mesh of hypervisors interconnected with the datacenter's fabric. These hypervisors are just servers on which virtual machines are spawned. Each virtual machine is connected to an internal virtual switch which runs on the hypervisor. This virtual switch allows the attached VMs to talk to other VMs residing in the same hypervisor and, more importantly, to talk to other systems outside the hypervisor. In order to provide connectivity throughout the datacenter's fabric, the virtual switch that runs on each hypervisor applies concrete rules to communicate each VM with the others. In most cases this is a combination of the already studied OpenFlow protocol with other particular network protocols, such as OVSDB in the case of Open vSwitch.

Figure 6.5: Comparison between Lagopus as virtual switch and as physical switch

Figure 6.5 illustrates a comparison of the virtual switch deployment with the physical scenario that would be seen in a telco network. A virtual switch that runs inside a hypervisor can be interpreted as a physical switch that is connected to the datacenter's fabric. In order to achieve a high packet processing speed, the hypervisor provides DPDK-enabled physical interfaces. This switch would have end hosts attached to it, which in the hypervisor case are virtual machines, while in the second case they could be physical servers. Assuming this is an SDN-enabled network, these physical switches will be remotely programmed by an SDN controller.

The resulting architecture is similar to the telco case that has been analyzed in this work. In this architecture the hypervisors behave like physical SDN-controlled switches, while the end hosts are virtual machines that connect dynamically to the datacenter network. Therefore, the virtual networks created on top of the existing datacenter infrastructure can be treated as if they were real physical networks. In other words, the studied network services, such as detection of anomalies, QoS, network traffic engineering, network traffic visualization, or even network billing services, can be applied to these dynamically generated virtual networks.

A very interesting application for this use case is network traffic monitoring for security purposes, in particular the detection of anomalies. In a datacenter network, the north-south and east-west communications can be monitored as the flows traverse the core fabric; however, the communication between virtual machines that reside in the same hypervisor is a tricky situation.

Figure 6.6: Attacker inside a datacenter network

Figure 6.6 shows a hypothetical scenario in a datacenter network, with two hypervisors containing multiple virtual machines. Among these virtual machines, VM2 is under the control of an attacker, who has been using it as a starting point to send malicious traffic into the network. The communication between virtual machines VM2 and VM3, residing in different servers, can be monitored as the packets go through the datacenter from one server to another using a particular encapsulation technique like VXLAN, which can be easily inspected using a monitoring framework. However, for a communication between virtual machines VM1 and VM2, traditional network traffic monitoring tools cannot achieve their purpose, since the transmitted packets never leave the hypervisor.


Due to the new capabilities that SDN introduces, the SDN controller possesses a fine-grained control over the virtual switch the virtual machines are attached to. Thus, using a network traffic monitoring framework like SDN-Mon in a similar fashion as in a real network, the virtual switch of each hypervisor would be able to collect the flows between VM1 and VM2, and later on send the information to the SDN controller. This information will be exposed by the SDN-Mon REST API, so it can be processed by external applications that implement network services. Using this data, an anomaly detection application could detect that VM2 has been hacked and is trying to attack VM1. This would trigger an alarm in order to notify another application of the SDN controller to act on the virtual switch and block traffic from VM2 until the attacker is neutralized.


Chapter 7

Appendix A: Lagopus configuration

This appendix contains two Bash scripts that have been developed to configure Lagopus during the startup process of a VNX virtual scenario. The first script configures the Lagopus managed ports to work in raw-socket mode, while the second script configures them to work in DPDK mode.

7.1 start-lagopus-raw.sh

#!/bin/bash

#
# Generator of lagopus.dsl configuration file for raw-socket mode
#
# Argument = -c controller -d dpid -i interface
#

usage()
{
cat <<EOF
**** USAGE ****
This script generates lagopus.dsl configuration file
and places the resulting file inside /usr/local/etc/lagopus/
OPTIONS:
   -h   Show this message
   -c   Controller address ( protocol:IP_address:port )
   -d   Lagopus switch dpid identifier
   -i   Lagopus switch dataplane interface
EOF
}

#
# Input parameters processing
#

while getopts ":hc:d:i:" arg; do
  case $arg in
    h)
      usage
      exit 1
      ;;
    c)
      IFS=':' read -ra ADDR <<< "$OPTARG"
      if [ "${#ADDR[@]}" -lt 3 ]; then
        printf "\n**** ERROR MESSAGE ****\n"
        printf "\nController input must follow <protocol:IP_address:port> structure\n\n"
        exit 1
      fi
      CTRL_PROTO=$(echo "${ADDR[0]}" | awk '{print tolower($0)}')
      CTRL_IP="${ADDR[1]}"
      CTRL_PORT="${ADDR[2]}"
      ;;
    d)
      DPID=$OPTARG
      ;;
    i)
      array+=("$OPTARG")
      ;;
    ?)
      usage
      exit
      ;;
  esac
done

if [[ -z $ADDR ]] || [[ -z $DPID ]] || [[ -z $array ]]
then
  usage
  exit 1
fi

#
# Start of lagopus.dsl writing process
#

touch /tmp/lagopus.dsl

# Default logging file
echo " " >> /tmp/lagopus.dsl
echo "log -file /var/log/lagopus.log -debuglevel 8 " >> /tmp/lagopus.dsl

# Switch's interfaces configuration
echo " " >> /tmp/lagopus.dsl
for i in "${!array[@]}"; do
  ID=$(($i+1))
  if [ "$ID" -gt 9 ]; then
    echo "interface interface$ID create -type ethernet-rawsock -device ${array[$i]}" >> /tmp/lagopus.dsl
    IFACES+=("$ID")
  else
    echo "interface interface0$ID create -type ethernet-rawsock -device ${array[$i]}" >> /tmp/lagopus.dsl
    IFACES+=("0$ID")
  fi
done

# Switch's ports configuration
echo " " >> /tmp/lagopus.dsl
PORT_CONFIG=""
for i in "${!IFACES[@]}"; do
  PORT_CONFIG+="-port port${IFACES[$i]} $(($i+1)) "
  echo "port port${IFACES[$i]} create -interface interface${IFACES[$i]}" >> /tmp/lagopus.dsl
done

# Channel configuration
echo " " >> /tmp/lagopus.dsl
echo "channel channel01 create -dst-addr $CTRL_IP -dst-port $CTRL_PORT -protocol $CTRL_PROTO" >> /tmp/lagopus.dsl

# Controller configuration
echo " " >> /tmp/lagopus.dsl
echo "controller controller01 create -channel channel01 -role equal -connection-type main" >> /tmp/lagopus.dsl

# Internal bridge configuration
echo " " >> /tmp/lagopus.dsl
echo "bridge bridge01 create -dpid $DPID -controller controller01 $PORT_CONFIG -fail-mode secure" >> /tmp/lagopus.dsl
echo " " >> /tmp/lagopus.dsl
echo "bridge bridge01 enable" >> /tmp/lagopus.dsl

#
# End of DSL file
#

7.2 start-lagopus-dpdk.sh

#!/bin/bash

#
# Generator of lagopus.dsl configuration file for DPDK mode
#
# Arguments: -c controller -d dpid -i interface
#

usage()
{
cat <<EOF
**** USAGE ****
This script generates the lagopus.dsl configuration file
and places the resulting file inside /usr/local/etc/lagopus/
OPTIONS:
  -h   Show this message
  -c   Controller address ( protocol:IP_address:port )
  -d   Lagopus switch dpid identifier
  -i   Lagopus switch dataplane interface
EOF
}

#
# Input parameters processing
#

while getopts ":hc:d:i:" arg; do
  case $arg in
    h)
      usage
      exit 1
      ;;
    c)
      IFS=':' read -ra ADDR <<< "$OPTARG"
      if [ "${#ADDR[@]}" -lt 3 ]; then
        printf "\n**** ERROR MESSAGE ****\n"
        printf "\nController input must follow <protocol:IP_address:port> structure\n\n"
        exit 1
      fi
      CTRL_PROTO=$(echo "${ADDR[0]}" | awk '{print tolower($0)}')
      CTRL_IP="${ADDR[1]}"
      CTRL_PORT="${ADDR[2]}"
      ;;
    d)
      DPID=$OPTARG
      ;;
    i)
      array+=("$OPTARG")
      ;;
    ?)
      usage
      exit
      ;;
  esac
done

if [[ -z $ADDR ]] || [[ -z $DPID ]] || [[ -z $array ]]
then
  usage
  exit 1
fi

#
# Start of lagopus.dsl writing process
#

touch /tmp/lagopus.dsl

# Default logging file
echo " " >> /tmp/lagopus.dsl
echo "log -file /var/log/lagopus.log -debuglevel 8 " >> /tmp/lagopus.dsl

# Switch's interfaces configuration
echo " " >> /tmp/lagopus.dsl
for i in "${!array[@]}"; do
  ID=$(($i+1))
  if [ "$ID" -gt 9 ]; then
    echo "interface interface$ID create -type ethernet-dpdk-phy -port-number $i" >> /tmp/lagopus.dsl
    IFACES+=("$ID")
  else
    echo "interface interface0$ID create -type ethernet-dpdk-phy -port-number $i" >> /tmp/lagopus.dsl
    IFACES+=("0$ID")
  fi
done

# Switch's ports configuration
echo " " >> /tmp/lagopus.dsl
PORT_CONFIG=""
for i in "${!IFACES[@]}"; do
  PORT_CONFIG+="-port port${IFACES[$i]} $(($i+1)) "
  echo "port port${IFACES[$i]} create -interface interface${IFACES[$i]}" >> /tmp/lagopus.dsl
done

# Channel configuration
echo " " >> /tmp/lagopus.dsl
echo "channel channel01 create -dst-addr $CTRL_IP -dst-port $CTRL_PORT -protocol $CTRL_PROTO" >> /tmp/lagopus.dsl

# Controller configuration
echo " " >> /tmp/lagopus.dsl
echo "controller controller01 create -channel channel01 -role equal -connection-type main" >> /tmp/lagopus.dsl

# Internal bridge configuration
echo " " >> /tmp/lagopus.dsl
echo "bridge bridge01 create -dpid $DPID -controller controller01 $PORT_CONFIG -fail-mode secure" >> /tmp/lagopus.dsl
echo " " >> /tmp/lagopus.dsl
echo "bridge bridge01 enable" >> /tmp/lagopus.dsl

#
# End of DSL file
#
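The only difference with respect to the raw-socket generator is how interfaces are declared: in DPDK mode an interface is identified by its DPDK port number (the order in which the NICs were bound to igb_uio by load-modules.sh, see Appendix C) rather than by a kernel device name. For the same two-interface invocation as above, the generated interface lines become:

interface interface01 create -type ethernet-dpdk-phy -port-number 0
interface interface02 create -type ethernet-dpdk-phy -port-number 1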


Chapter 8

Appendix B: Root Filesystem generators

This appendix contains the Bash scripts that generate a new VNX root filesystem for each element of the SDN virtual environment: an LXC rootfs for the Ryu SDN controller; an LXC rootfs for the end hosts of the scenarios; an LXC rootfs for the Lagopus switch in raw-socket mode; a KVM rootfs for the Lagopus switch in raw-socket mode; and a KVM rootfs for the Lagopus switch working in DPDK mode.
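As a quick orientation (a sketch; the filesystems/ directory name is an assumption matching the relative <filesystem> paths used by the Appendix C templates), each generator downloads a base image, customizes it, and leaves a stable symbolic link that the scenarios reference:

cd filesystems               # assumed location of the generators
./create-rootfs_sdn          # builds vnx_rootfs_lxc_ubuntu64-16.04-v025-ryu-controller
ls -l rootfs_sdn             # stable name referenced by the scenarios
# rootfs_sdn -> vnx_rootfs_lxc_ubuntu64-16.04-v025-ryu-controller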

8.1 create-rootfs_sdn

#!/bin/bash

#
# Name:        create-rootfs_sdn
#
# Description: creates a customized LXC VNX rootfs for the Ryu SDN controller
#
# This file is part of the VNX package.
#
# Authors: David Fernández ([email protected])
#          Raul Alvarez ([email protected])
#          Ignacio Domínguez Martínez-Casanueva ([email protected])
# Copyright (C) 2016 DIT-UPM
#          Departamento de Ingeniería de Sistemas Telemáticos
#          Universidad Politécnica de Madrid
#          SPAIN
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# An online copy of the licence can be found at http://www.gnu.org/copyleft/gpl.html
#

#
# Configuration
#

NAME=ryu-controller
BASEROOTFSNAME=vnx_rootfs_lxc_ubuntu64-16.04-v025
ROOTFSNAME=$BASEROOTFSNAME-$NAME
ROOTFSLINKNAME="rootfs_sdn"

# General tools
PACKAGES="wget iperf unzip telnet xterm curl ethtool man nano"

CUSTOMIZESCRIPT=$(cat <<EOF
# Modify failsafe script to avoid delays on startup
sed -i -e 's/.*sleep [\d]*.*/\tsleep 1/' /etc/init/failsafe.conf
# Add ~/bin to root PATH
sed -i -e '\$aPATH=\$PATH:~/bin' /root/.bashrc
# Allow root login by ssh
sed -i -e 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Setup locale settings to en_US
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
# Install pip
apt-get update
apt-get install -y python-pip python-setuptools python-dev libxml2-dev libxslt-dev
# Install Ryu controller
export LC_ALL=C
pip install --upgrade pip
pip install ryu ipaddress
EOF
)

function customize_rootfs {

echo "-----------------------------------------------------------------------"
echo "Customizing rootfs..."
echo "--"
#echo "$CUSTOMIZESCRIPT"
echo lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "$CUSTOMIZESCRIPT" -P $CDIR
lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "$CUSTOMIZESCRIPT" -P $CDIR

}

#
# Do not modify under this line (or do it with care...)
#

function create_new_rootfs {

#clear

echo "-----------------------------------------------------------------------"
echo "Deleting base and new rootfs directories..."
rm -rf ${BASEROOTFSNAME}
rm -rf ${ROOTFSNAME}
rm -f ${BASEROOTFSNAME}.tgz

# Download base rootfs
echo "-----------------------------------------------------------------------"
echo "Downloading base rootfs..."
vnx_download_rootfs -r ${BASEROOTFSNAME}.tgz

mv ${BASEROOTFSNAME} ${ROOTFSNAME}
echo "--"
echo "Changing rootfs config file..."
# Change rootfs config to adapt it to the directory where it has been downloaded
sed -i -e '/lxc.rootfs/d' -e '/lxc.mount/d' ${ROOTFSNAME}/config
echo "
lxc.rootfs = $CDIR/${ROOTFSNAME}/rootfs
lxc.mount = $CDIR/${ROOTFSNAME}/fstab
" >> ${ROOTFSNAME}/config

}

function start_and_install_packages {

echo "-----------------------------------------------------------------------"
echo "Installing packages in rootfs..."

# Install packages in rootfs
lxc-start --daemon -n $ROOTFSNAME -f ${ROOTFSNAME}/config -P $CDIR
echo lxc-wait -n $ROOTFSNAME -s RUNNING -P $CDIR
lxc-wait -n $ROOTFSNAME -s RUNNING -P $CDIR
sleep 3
lxc-attach -n $ROOTFSNAME -P $CDIR -- dhclient eth0
lxc-attach -n $ROOTFSNAME -P $CDIR -- ifconfig eth0
lxc-attach -n $ROOTFSNAME -P $CDIR -- ping -c 3 www.dit.upm.es
lxc-attach -n $ROOTFSNAME -P $CDIR -- apt-get update
lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "DEBIAN_FRONTEND=noninteractive apt-get -y install $PACKAGES"

# Create /dev/net/tun device
lxc-attach -n $ROOTFSNAME -P $CDIR -- mkdir /dev/net
lxc-attach -n $ROOTFSNAME -P $CDIR -- mknod /dev/net/tun c 10 200
lxc-attach -n $ROOTFSNAME -P $CDIR -- chmod 666 /dev/net/tun

}

function create_rootfs_tgz {
echo "-----------------------------------------------------------------------"
echo "Creating rootfs tgz file..."
rm $BASEROOTFSNAME.tgz
tmpfile=$(mktemp)
find ${ROOTFSNAME} -type s > $tmpfile
#cat $tmpfile
size=$(du -sb --apparent-size ${ROOTFSNAME} | awk '{ total += $1 - 512; }; END { print total }')
size=$(( $size * 1020 / 1000 ))
LANG=C tar -cpf - ${ROOTFSNAME} -X $tmpfile | pv -p -s $size | gzip > ${ROOTFSNAME}.tgz
for LINK in $ROOTFSLINKNAME; do
  rm -f $LINK
  ln -s ${ROOTFSNAME} $LINK
done
}

#
# Main
#
echo "-----------------------------------------------------------------------"
echo "Creating VNX LXC rootfs:"
echo "  Base rootfs:         $BASEROOTFSNAME"
echo "  New rootfs:          $ROOTFSNAME"
echo "  Rootfs link:         $ROOTFSLINKNAME"
echo "  Packages to install: $PACKAGES"
echo "-----------------------------------------------------------------------"

# Move to the directory where the script is located
CDIR=`dirname $0`
cd $CDIR
CDIR=$(pwd)

create_new_rootfs
start_and_install_packages
customize_rootfs
lxc-stop -n $ROOTFSNAME -P $CDIR   # Stop the VM
rm lxc-monitord.log                # Delete log of the VM
create_rootfs_tgz

echo "...done"
echo "-----------------------------------------------------------------------"
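The rootfs_sdn link produced here is what the controller VM of every Appendix C scenario mounts. VNX attaches it as a copy-on-write overlay, so per-VM changes never modify the image built by this script:

<!-- From the Appendix C templates: -->
<filesystem type="cow">../../filesystems/rootfs_sdn</filesystem>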

8.2 create-rootfs_lxc

#!/bin/bash

#
# Name:        create-rootfs_lxc
#
# Description: creates a customized LXC VNX rootfs for the end hosts of the virtual scenarios
#
# This file is part of the VNX package.
#
# Authors: David Fernández ([email protected])
#          Raul Alvarez ([email protected])
#          Ignacio Domínguez Martínez-Casanueva ([email protected])
# Copyright (C) 2016 DIT-UPM
#          Departamento de Ingeniería de Sistemas Telemáticos
#          Universidad Politécnica de Madrid
#          SPAIN
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# An online copy of the licence can be found at http://www.gnu.org/copyleft/gpl.html
#

#
# Configuration
#

NAME=mawi
BASEROOTFSNAME=vnx_rootfs_lxc_ubuntu64-16.04-v025
ROOTFSNAME=$BASEROOTFSNAME-$NAME
ROOTFSLINKNAME="rootfs_lxc"

# General tools
PACKAGES="wget iperf unzip telnet xterm curl ethtool man nano gzip traceroute"

CUSTOMIZESCRIPT=$(cat <<EOF
# Modify failsafe script to avoid delays on startup
sed -i -e 's/.*sleep [\d]*.*/\tsleep 1/' /etc/init/failsafe.conf
# Add ~/bin to root PATH
sed -i -e '\$aPATH=\$PATH:~/bin' /root/.bashrc
# Allow root login by ssh
sed -i -e 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Setup locale settings to en_US
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
# Install necessary packages for tcpreplay and bmon tools
apt-get update
apt-get install -y tcpreplay git build-essential make libconfuse-dev libnl-3-dev libnl-route-3-dev libncurses-dev pkg-config dh-autoreconf iperf
# Pull bmon code from GitHub repo
cd /root && git clone https://github.com/tgraf/bmon.git
# Install bmon
cd /root/bmon && ./autogen.sh && ./configure && make && make install
# Create tcpreplay folder for original and modified pcap files
mkdir /root/tcpreplay
# Download sample pcap files
wget -O /tmp/test.pcap https://s3.amazonaws.com/tcpreplay-pcap-files/test.pcap
wget -O /tmp/smallFlows.pcap https://s3.amazonaws.com/tcpreplay-pcap-files/smallFlows.pcap
wget -O /tmp/bigFlows.pcap https://s3.amazonaws.com/tcpreplay-pcap-files/bigFlows.pcap
wget -O - http://mawi.nezu.wide.ad.jp/mawi/samplepoint-F/2017/201702261400.pcap.gz | gunzip -c > /tmp/mawi-sample.pcap
# Adapt pcap files to virtual environment
tcprewrite --enet-dmac=00:00:00:00:00:02 --dstipmap=0.0.0.0/0:10.0.0.2/32 --infile=/tmp/test.pcap --outfile=/root/tcpreplay/test-modified.pcap
tcprewrite --enet-dmac=00:00:00:00:00:02 --dstipmap=0.0.0.0/0:10.0.0.2/32 --infile=/tmp/smallFlows.pcap --outfile=/root/tcpreplay/smallFlows-modified.pcap
tcprewrite --enet-dmac=00:00:00:00:00:02 --dstipmap=0.0.0.0/0:10.0.0.2/32 --infile=/tmp/bigFlows.pcap --outfile=/root/tcpreplay/bigFlows-modified.pcap
tcprewrite --enet-dmac=00:00:00:00:00:02 --dstipmap=0.0.0.0/0:10.0.0.2/32 --infile=/tmp/mawi-sample.pcap --outfile=/root/tcpreplay/mawi-sample-modified.pcap
EOF
)

function customize_rootfs {

echo "-----------------------------------------------------------------------"
echo "Customizing rootfs..."
echo "--"
#echo "$CUSTOMIZESCRIPT"
echo lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "$CUSTOMIZESCRIPT" -P $CDIR
lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "$CUSTOMIZESCRIPT" -P $CDIR

}

#
# Do not modify under this line (or do it with care...)
#

function create_new_rootfs {

#clear

echo "-----------------------------------------------------------------------"
echo "Deleting base and new rootfs directories..."
rm -rf ${BASEROOTFSNAME}
rm -rf ${ROOTFSNAME}
rm -f ${BASEROOTFSNAME}.tgz

# Download base rootfs
echo "-----------------------------------------------------------------------"
echo "Downloading base rootfs..."
vnx_download_rootfs -r ${BASEROOTFSNAME}.tgz

mv ${BASEROOTFSNAME} ${ROOTFSNAME}
echo "--"
echo "Changing rootfs config file..."
# Change rootfs config to adapt it to the directory where it has been downloaded
sed -i -e '/lxc.rootfs/d' -e '/lxc.mount/d' ${ROOTFSNAME}/config
echo "
lxc.rootfs = $CDIR/${ROOTFSNAME}/rootfs
lxc.mount = $CDIR/${ROOTFSNAME}/fstab
" >> ${ROOTFSNAME}/config

}

function start_and_install_packages {

echo "-----------------------------------------------------------------------"
echo "Installing packages in rootfs..."

# Install packages in rootfs
lxc-start --daemon -n $ROOTFSNAME -f ${ROOTFSNAME}/config -P $CDIR
echo lxc-wait -n $ROOTFSNAME -s RUNNING -P $CDIR
lxc-wait -n $ROOTFSNAME -s RUNNING -P $CDIR
sleep 3
lxc-attach -n $ROOTFSNAME -P $CDIR -- dhclient eth0
lxc-attach -n $ROOTFSNAME -P $CDIR -- ifconfig eth0
lxc-attach -n $ROOTFSNAME -P $CDIR -- ping -c 3 www.dit.upm.es
lxc-attach -n $ROOTFSNAME -P $CDIR -- apt-get update
lxc-attach -n $ROOTFSNAME -P $CDIR -- bash -c "DEBIAN_FRONTEND=noninteractive apt-get -y install $PACKAGES"

# Create /dev/net/tun device
lxc-attach -n $ROOTFSNAME -P $CDIR -- mkdir /dev/net
lxc-attach -n $ROOTFSNAME -P $CDIR -- mknod /dev/net/tun c 10 200
lxc-attach -n $ROOTFSNAME -P $CDIR -- chmod 666 /dev/net/tun

}

function create_rootfs_tgz {
echo "-----------------------------------------------------------------------"
echo "Creating rootfs tgz file..."
rm $BASEROOTFSNAME.tgz
tmpfile=$(mktemp)
find ${ROOTFSNAME} -type s > $tmpfile
#cat $tmpfile
size=$(du -sb --apparent-size ${ROOTFSNAME} | awk '{ total += $1 - 512; }; END { print total }')
size=$(( $size * 1020 / 1000 ))
LANG=C tar -cpf - ${ROOTFSNAME} -X $tmpfile | pv -p -s $size | gzip > ${ROOTFSNAME}.tgz
for LINK in $ROOTFSLINKNAME; do
  rm -f $LINK
  ln -s ${ROOTFSNAME} $LINK
done
}

#
# Main
#
echo "-----------------------------------------------------------------------"
echo "Creating VNX LXC rootfs:"
echo "  Base rootfs:         $BASEROOTFSNAME"
echo "  New rootfs:          $ROOTFSNAME"
echo "  Rootfs link:         $ROOTFSLINKNAME"
echo "  Packages to install: $PACKAGES"
echo "-----------------------------------------------------------------------"

# Move to the directory where the script is located
CDIR=`dirname $0`
cd $CDIR
CDIR=$(pwd)

create_new_rootfs
start_and_install_packages
customize_rootfs
lxc-stop -n $ROOTFSNAME -P $CDIR   # Stop the VM
rm lxc-monitord.log                # Delete log of the VM
# DO NOT create tgz file because pcap files have big sizes
# create_rootfs_tgz
for LINK in $ROOTFSLINKNAME; do
  rm -f $LINK
  ln -s ${ROOTFSNAME} $LINK
done

echo "...done"
echo "-----------------------------------------------------------------------"
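Once a scenario is running, the traces prepared above can be replayed from an end host towards the switch under test. This mirrors the tcpreplay invocation left commented in the Appendix C templates; -K preloads the pcap into memory and --pps fixes the sending rate:

tcpreplay -i eth1 -K --pps 100 --loop 1 /root/tcpreplay/smallFlows-modified.pcap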

8.3 create-rootfs_lagopus_raw_kvm


8.4 create-rootfs_lagopus_dpdk_kvm

#!/bin/bash

#
# Name:        create-rootfs_lagopus_kvm
#
# Description: creates a VNX rootfs for the Lagopus switch software compiled in DPDK mode
#
# This file is a module part of the VNX package.
#
# Authors: David Fernández ([email protected])
#          Ignacio Domínguez Martínez-Casanueva ([email protected])
# Copyright (C) 2015 DIT-UPM
#          Departamento de Ingenieria de Sistemas Telematicos
#          Universidad Politecnica de Madrid
#          SPAIN
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# An online copy of the licence can be found at http://www.gnu.org/copyleft/gpl.html
#

# Input data:

# Image to download
IMGSRVURL=https://cloud-images.ubuntu.com/xenial/current/
IMG=xenial-server-cloudimg-amd64-disk1.img   # Ubuntu 16.04 64 bits

# Name of image to create
IMG2=vnx_rootfs_kvm_ubuntu64-16.04-v025-lago
IMG2LINK=rootfs_kvm_ubuntu64-lagopus

# Packages to install in new rootfs
PACKAGES="aptsh traceroute ntp curl man ubuntu-cloud-keyring"

# Commands to execute after package installation (one per line)
COMMANDS=$(cat <<EOF
# Setup locale settings to en_US
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
apt-get update
apt-get -y dist-upgrade
# Install necessary packages
apt-get -y install build-essential libexpat-dev libgmp-dev libssl-dev libpcap-dev byacc flex git python-dev python-pastedeploy python-paste python-twisted
# Download lagopus source code
git clone -b v0.2.10 --recursive https://github.com/lagopus/lagopus.git /root/lagopus
# Overwrite lagopus code with SDN-Mon source code
cp -r /root/SDN-Mon/* /root/lagopus/
# Compile and install Lagopus software switch with DPDK mode
cd /root/lagopus
./configure
make
make install
# Initialize Lagopus DSL config folder by copying sample configuration file
mkdir /usr/local/etc/lagopus/
cp /root/lagopus/misc/examples/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl
# Setup Hugepages for DPDK
echo "vm.nr_hugepages = 256" >> /etc/sysctl.conf
mkdir -p /mnt/huge
echo "nodev /mnt/huge hugetlbfs defaults 0 0" >> /etc/fstab
# Allow ssh root login
sed -i -e 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Update grub
update-grub
EOF
)

#
# Do not modify under this line (or do it with care...)
#

TMPDIR=$( mktemp -d -t 'vnx-XXXXX' )

#
# Create config file
#
function create-cloud-config-file {

cat > $TMPDIR/vnx-customize-data <<EOF
#cloud-config
manage_etc_hosts: True
hostname: vnx
password: xxxx
chpasswd: { expire: False }
groups:
  - vnx
users:
  - default
  - name: vnx
    gecos: VNX
    primary-group: vnx
    groups: sudo
chpasswd:
  list: |
    vnx:xxxx
    root:xxxx
  expire: False
ssh_pwauth: True
# Update system and install VNXACE dependencies
apt_update: true
apt_upgrade: true
packages:
  - libxml-libxml-perl
  - libnetaddr-ip-perl
  - acpid
  - mpack
EOF

# Add additional packages
for p in $PACKAGES; do
  echo "  - $p" >> $TMPDIR/vnx-customize-data
done

# Add additional commands
if [ "$COMMANDS" ]; then
  echo "cc_ready_cmd:" >> $TMPDIR/vnx-customize-data
  echo "$COMMANDS" | while read c; do
    if [ "$c" ]; then
      echo "  - $c" >> $TMPDIR/vnx-customize-data
    fi
  done
fi

}

#
# Create install vnxaced script
#
function create-install-vnxaced-script {

cat > $TMPDIR/install-vnxaced <<EOF
#!/bin/bash
# Redirect script STDOUT and STDERR to console and log file
# (commented because it does not work...)
#LOG=/var/log/install-vnxaced.log
#CONSOLE=/dev/tty1
#exec > >(tee $CONSOLE | tee -a $LOG)
#exec 2> >(tee $CONSOLE | tee -a $LOG >&2)
USERDATAFILE=\$( find /var/lib/cloud -name user-data.txt )
echo \$USERDATAFILE
cd /tmp
munpack \$USERDATAFILE
tar xfvz vnx-aced-lf*.tgz
# Uncompress SDN-Mon code
tar xfvz sdnmon-source.tgz -C /root/
perl vnx-aced-lf-*/install_vnxaced
# Configure serial console on ttyS0
#cd /etc/init
#cp tty1.conf ttyS0.conf
#sed -i -e 's/tty1/ttyS0/' ttyS0.conf
# Eliminate cloud-init package
apt-get purge --auto-remove -y cloud-init cloud-guest-utils
apt-get purge --auto-remove -y open-vm-tools
# Disable cloud-init adding 'ds=nocloud' kernel parameter
#sed -i -e 's/\(GRUB_CMDLINE_LINUX_DEFAULT=.*\)"/\1 ds=nocloud ds=nocloud-net"/' /etc/default/grub
#sed -i -e 's/\(GRUB_CMDLINE_LINUX=.*\)"/\1 ds=nocloud ds=nocloud-net"/' /etc/default/grub
#update-grub
sed -i -e 's/\(GRUB_CMDLINE_LINUX=.*\)"/\1 net.ifnames=0 biosdevname=0 ds=nocloud ds=nocloud-net"/' /etc/default/grub
#update-grub
echo "VER=0.25" >> /etc/vnx_rootfs_version
DIST=\`lsb_release -i -s\`
VER=\`lsb_release -r -s\`
DESC=\`lsb_release -d -s\`
DATE=\`date\`
echo "OS=\$DIST \$VER" >> /etc/vnx_rootfs_version
echo "DESC=\$DESC" >> /etc/vnx_rootfs_version
echo "MODDATE=\$DATE" >> /etc/vnx_rootfs_version
echo "MODDESC=System created. Packages installed: \$PACKAGES" >> /etc/vnx_rootfs_version
# Execute additional commands
#$COMMANDS
vnx_halt -y
EOF

}

#
# Create include file
#
#function create-include-file {
#
#cat > include-file <<EOF
#include
#file://usr/share/vnx/aced/vnx-aced-lf-2.0b.4058.tgz
#EOF
#
#}

#
# main
#

HLINE="----------------------------------------------------------------------------------"
echo ""
echo $HLINE
echo "Virtual Networks over LinuX (VNX) -- http://www.dit.upm.es/vnx - [email protected]"
echo $HLINE

# Create config files
create-cloud-config-file
create-install-vnxaced-script

# Get a fresh copy of the image
echo "--"
echo "-- Downloading image: ${IMGSRVURL}${IMG}"
echo "--"
rm -fv $IMG
wget ${IMGSRVURL}${IMG}

# Convert img to qcow2 format and change name
echo "--"
echo "-- Converting image to qcow2 format..."
qemu-img convert -O qcow2 $IMG ${IMG%.*}.qcow2
mv ${IMG%.*}.qcow2 ${IMG2}.qcow2

# Create multi-vnx-customize-data mime multipart file including the config files
# to copy to the VM
#   include-file:text/x-include-url
#   /usr/share/vnx/aced/vnx-aced-lf-2.0b.4058.tgz:application/octet-stream
#   ../../../sdnmon-source.tgz:application/octet-stream
echo "--"
echo "-- Creating iso customization disk..."
write-mime-multipart --output=$TMPDIR/multi-vnx-customize-data \
    $TMPDIR/vnx-customize-data:text/cloud-config \
    /usr/share/vnx/aced/vnx-aced-lf-latest.tgz:application/octet-stream \
    ../../../sdnmon-source.tgz:application/octet-stream \
    $TMPDIR/install-vnxaced:text/x-shellscript

# Create iso disk with customization data
cloud-localds $TMPDIR/vnx-customize-data.img $TMPDIR/multi-vnx-customize-data
#cloud-localds vnx-customize-data.img vnx-customize-data

# Start virtual machine with the customization data disk
echo "--"
echo "-- Starting virtual machine to configure it..."
echo "--"
echo "kvm -net nic -net user -hda ${IMG2}.qcow2 -hdb vnx-customize-data.img -m 512"
kvm -net nic -net user -hda ${IMG2}.qcow2 -hdb $TMPDIR/vnx-customize-data.img -m 512 -cpu host
echo "--"
echo "-- rootfs creation finished:"
ls -lh ${IMG2}.qcow2
if [ "$IMG2LINK" ]; then
  echo "--"
  echo "-- Creating symbolic link to new rootfs: $IMG2LINK"
  ln -sv ${IMG2}.qcow2 $IMG2LINK
  echo "--"
fi
echo $HLINE

# Delete temp files
rm $TMPDIR/*-vnx-customize-data $TMPDIR/install-vnxaced $TMPDIR/vnx-customize-data.img $IMG
rm -rf $TMPDIR/
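A quick way to verify inside the resulting VM that the DPDK prerequisites configured above took effect (standard Linux commands, not part of the generator itself):

sysctl vm.nr_hugepages       # expected: vm.nr_hugepages = 256
grep Huge /proc/meminfo      # HugePages_Total should report 256
mount | grep /mnt/huge       # hugetlbfs mount installed via /etc/fstab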


Chapter 9

Appendix C: Virtual scenarios

This appendix contains the XML templates of the deployed VNX scenarios, in both standalone (single-hypervisor) and distributed configurations.
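The templates are driven with the standard VNX command line. The sequence below is a sketch of the typical workflow; the flag spellings follow common vnx usage and should be checked against the local installation:

vnx -f simple_lagopus_ryu_dpdk.xml -v --create                # build and start the scenario
vnx -f simple_lagopus_ryu_dpdk.xml -v --execute start-ryu     # run a command sequence defined in the template
vnx -f simple_lagopus_ryu_dpdk.xml -v --destroy               # tear down the scenario and release resources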

9.1 simple_lagopus_ryu_dpdk.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~~~
VNX Scenario
~~~~~~~~~~~~~~~~~~~~
Name:        Simple_Lagopus_Ryu_DPDK
Description: Single-switch topology using Ryu controller and Lagopus OpenFlow switch software running in DPDK mode
Author:      Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>Simple_Lagopus_Ryu_DPDK</scenario_name>
    <automac offset="5"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
  </global>

  <!-- Network Definition -->
  <net name="MgmtNet" mode="virtual_bridge"/>
  <net name="link1" mode="virtual_bridge"/>
  <net name="link2" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->
  <vm name="h1" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="link1">
      <mac>00:00:00:00:00:01</mac>
      <ipv4>10.0.0.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.2 00:00:00:00:00:02
    </exec>

    <exec seq="load-pcap" type="verbatim">
      mkdir /tmp/pcap-files
      mkdir /root/tcpreplay/pcap-files
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h2" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="link2">
      <mac>00:00:00:00:00:02</mac>
      <ipv4>10.0.0.2/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
    </exec>

    <exec seq="load-pcap" type="verbatim">
      mkdir /tmp/pcap-files
      mkdir /root/tcpreplay/pcap-files
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <!-- Ryu Controller -->
  <vm name="c0" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_sdn</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-ryu.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-ryu.sh /root/start-ryu.sh
      chmod +x root/start-ryu.sh
      rm /tmp/start-ryu.sh
    </exec>

    <filetree seq="on_boot,load-ryu" root="/tmp/">../../../../Ryu</filetree>
    <exec seq="on_boot,load-ryu" type="verbatim">
      cp -r /tmp/Ryu/* /usr/local/lib/python2.7/dist-packages/ryu/
      rm -r /tmp/Ryu
    </exec>

    <exec seq="start-ryu" type="verbatim" ostype="system">./root/start-ryu.sh</exec>
  </vm>

  <!-- Lagopus Switches -->

  <vm name="sw1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="4">
    <mem>4G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.3/24</ipv4>
    </if>
    <if id="2" net="link1"/>
    <if id="3" net="link2"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>
    <forwarding type="ip"/>
    <forwarding type="ipv6" />

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 1 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
      #lagopus -l /var/log/lagopus.log -d -- -cf -n2 -- --rx '(0,0,1),(1,0,1)' --tx '(0,2),(1,2)' --w 3
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <!-- Host -->

  <host>
    <hostif net="MgmtNet">
      <ipv4>192.168.1.2/24</ipv4>
    </hostif>
  </host>
</vnx>
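For debugging it can be useful to launch the switch by hand inside sw1 using the invocation left commented in the template. The arguments after the first "--" are DPDK EAL options (-c3 is the CPU core mask, -n2 the number of memory channels) and those after the second "--" belong to Lagopus itself (-p3 is the dataplane port mask):

lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance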

9.2 double_lagopus_ryu_dpdk.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~~~
VNX Scenario
~~~~~~~~~~~~~~~~~~~~
Name:        Double_Lagopus_Ryu_DPDK
Description: Simple topology using Ryu controller and two Lagopus OpenFlow switches running in DPDK mode
Author:      Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>Double_Lagopus_Ryu_DPDK</scenario_name>
    <automac/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
  </global>

  <!-- Network Definition -->

  <net name="MgmtNet" mode="virtual_bridge"/>
  <net name="sw1-h1" mode="virtual_bridge"/>
  <net name="link12" mode="virtual_bridge"/>
  <net name="sw2-h2" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->

  <vm name="h1" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw1-h1">
      <mac>00:00:00:00:00:01</mac>
      <ipv4>10.0.0.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.2 00:00:00:00:00:02
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h2" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw2-h2">
      <mac>00:00:00:00:00:02</mac>
      <ipv4>10.0.0.2/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <!-- Ryu Controller -->

  <vm name="c0" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_sdn</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-ryu.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-ryu.sh /root/start-ryu.sh
      chmod +x root/start-ryu.sh
      rm /tmp/start-ryu.sh
    </exec>

    <filetree seq="on_boot,load-ryu" root="/tmp/">../../../../Ryu</filetree>
    <exec seq="on_boot,load-ryu" type="verbatim">
      cp -r /tmp/Ryu/* /usr/local/lib/python2.7/dist-packages/ryu/
      rm -r /tmp/Ryu
    </exec>

    <exec seq="start-ryu" type="verbatim" ostype="exec">./root/start-ryu.sh</exec>
  </vm>

  <!-- Lagopus Switches -->

  <vm name="sw1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.11/24</ipv4>
    </if>
    <if id="2" net="sw1-h1"/>
    <if id="3" net="link12"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 1 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="sw2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>

    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.12/24</ipv4>
    </if>
    <if id="2" net="link12"/>
    <if id="3" net="sw2-h2"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 2 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <!-- Host -->

  <host>
    <hostif net="MgmtNet">
      <ipv4>192.168.1.2/24</ipv4>
    </hostif>
  </host>

</vnx>
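After the load-modules sequence runs, the binding of the two dataplane NICs (PCI addresses 0000:00:04.0 and 0000:00:05.0 in the KVM guests of these templates) can be checked with the status option of the same DPDK tool used above:

cd /root/lagopus
./src/dpdk/tools/dpdk-devbind.py --status
# both addresses should appear among the devices using a DPDK-compatible driver (igb_uio)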

9.3 cascade_lagopus_ryu_dpdk.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~~~
VNX Scenario
~~~~~~~~~~~~~~~~~~~~
Name:        Cascade_Lagopus_Ryu_DPDK
Description: Cascade topology using Ryu controller and Lagopus OpenFlow switch software running in DPDK mode
Author:      Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>Cascade_Lagopus_Ryu_DPDK</scenario_name>
    <automac offset="5"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
  </global>

  <!-- Network Definition -->

  <net name="MgmtNet" mode="virtual_bridge"/>
  <net name="link12" mode="virtual_bridge"/>
  <net name="link23" mode="virtual_bridge"/>
  <net name="sw1-h1" mode="virtual_bridge"/>
  <net name="sw3-h2" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->

  <vm name="h1" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw1-h1">
      <mac>00:00:00:00:00:01</mac>
      <ipv4>10.0.0.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.2 00:00:00:00:00:02
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h2" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw3-h2">
      <mac>00:00:00:00:00:02</mac>
      <ipv4>10.0.0.2/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
    </exec>

  </vm>

  <!-- Ryu Controller -->

  <vm name="c0" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_sdn</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-ryu.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-ryu.sh /root/start-ryu.sh
      chmod +x root/start-ryu.sh
      rm /tmp/start-ryu.sh
    </exec>

    <filetree seq="on_boot,load-ryu" root="/tmp/">../../../../Ryu</filetree>
    <exec seq="on_boot,load-ryu" type="verbatim">
      cp -r /tmp/Ryu/* /usr/local/lib/python2.7/dist-packages/ryu/
      rm -r /tmp/Ryu
    </exec>

    <exec seq="start-ryu" type="verbatim" ostype="exec">./root/start-ryu.sh</exec>
  </vm>

  <!-- Lagopus Switches -->

  <vm name="sw1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.11/24</ipv4>
    </if>
    <if id="2" net="sw1-h1"/>
    <if id="3" net="link12"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 1 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="sw2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.12/24</ipv4>
    </if>
    <if id="2" net="link12"/>
    <if id="3" net="link23"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 2 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="sw3" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.13/24</ipv4>
    </if>
    <if id="2" net="link23"/>
    <if id="3" net="sw3-h2"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:192.168.1.1:6633 -d 3 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <!-- Host -->

  <host>
    <hostif net="MgmtNet">
      <ipv4>192.168.1.2/24</ipv4>
    </hostif>
  </host>

</vnx>
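The install-lagopus sequence defined on sw2 and sw3 recompiles and reinstalls the switch inside the running VMs, which is useful after modifying the SDN-Mon sources copied in by the sdnmon sequence. It is triggered from the host like any other VNX command sequence (flag spelling as in the sketch after the appendix introduction):

vnx -f cascade_lagopus_ryu_dpdk.xml -v --execute install-lagopus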


9.4 cascade_lagopus_ryu_dpdk_external_busy.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~~~
VNX Scenario
~~~~~~~~~~~~~~~~~~~~
Name:        Cascade_Lagopus_Ryu_DPDK_External_Busy
Description: Cascade topology using Lagopus OpenFlow switch software running in DPDK mode.
             In this scenario the Ryu controller runs on an external server.
             Extra h3 and h4 clients are connected to the sw2 switch to make it busy with crossing traffic.
Author:      Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>Cascade_Lagopus_Ryu_DPDK_External_Busy</scenario_name>
    <automac offset="5"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
  </global>

  <!-- Network Definition -->

  <net name="virbr0" mode="virtual_bridge" managed="no"/>
  <net name="link12" mode="virtual_bridge"/>
  <net name="link23" mode="virtual_bridge"/>
  <net name="sw1-h1" mode="virtual_bridge"/>
  <net name="sw2-h3" mode="virtual_bridge"/>
  <net name="sw2-h4" mode="virtual_bridge"/>
  <net name="sw3-h2" mode="virtual_bridge"/>

  <!-- Clients -->

  <vm name="h1" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw1-h1">
      <mac>00:00:00:00:00:01</mac>
      <ipv4>10.0.0.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.3 00:00:00:00:00:03
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h2" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw3-h2">
      <mac>00:00:00:00:00:02</mac>
      <ipv4>10.0.0.2/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.3 00:00:00:00:00:03
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>

  </vm>

  <vm name="h3" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw2-h3">
      <mac>00:00:00:00:00:03</mac>
      <ipv4>10.0.0.3/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h4" type="lxc" arch="x86_64">
    <filesystem type="cow">../../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw2-h4">
      <mac>00:00:00:00:00:04</mac>
      <ipv4>10.0.0.4/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.3 00:00:00:00:00:03
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <!-- Lagopus Switches -->

  <vm name="sw1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>
    <if id="2" net="sw1-h1"/>
    <if id="3" net="link12"/>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk.sh -c tcp:136.187.82.138:6633 -d 1 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <filetree seq="on_boot,load-modules" root="/tmp/">../../conf/load-modules.sh</filetree>
    <exec seq="on_boot,load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="sw2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>
    <if id="1" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>
    <if id="2" net="link12"/>
    <if id="3" net="sw2-h3"/>
    <if id="4" net="sw2-h4"/>
    <if id="5" net="link23"/>

    <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh

Page 138: DEVELOPMENT, DEPLOYMENT AND ANALYSIS OF A SOFTWARE …oa.upm.es/49587/1/TFM_IGNACIO_DOMINGUEZ_MARTINEZ_CASAN… · MPLS MultiprotocolLabelSwitching NAPT NetworkAddressPortTranslation

120 Chapter 9. Appendix C: Virtual scenarios

211 mkdir /usr/local/etc/lagopus212 ./root/start-lagopus-dpdk.sh -c tcp:136.187.82.138:6633 -d 2 -i eth2 -i

eth3 -i eth4 -i eth5↪→

213 cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm/tmp/lagopus.dsl↪→

214 #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assignbalance↪→

215 </exec>216

217 <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>218 <exec seq="on_boot,sdnmon" type="verbatim">219 cp -r /tmp/SDN-Mon/* /root/lagopus/220 rm -r /tmp/SDN-Mon221 </exec>222

223 <filetree seq="on_boot,load-modules"root="/tmp/">../../conf/load-modules.sh</filetree>↪→

224 <exec seq="on_boot,load-modules" type="verbatim">225 cd /tmp/ &amp;&amp; ./load-modules.sh226 cd /root/lagopus227 sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0

0000:00:05.0 0000:00:06.0 0000:00:07.0↪→

228 </exec>229 </vm>230

231 <vm name="sw3" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk"arch="x86_64" vcpu="2">↪→

232 <mem>2G</mem>233 <filesystem

type="cow">../../filesystems/rootfs_kvm_ubuntu64-lagopus</filesystem>↪→

234 <if id="1" net="virbr0">235 <ipv4>dhcp</ipv4>236 </if>237 <if id="2" net="link23"/>238 <if id="3" net="sw3-h2"/>239

240 <filetree seq="on_boot" root="/tmp/">../../conf/hosts</filetree>241 <exec seq="on_boot" type="verbatim">242 cat /tmp/hosts >> /etc/hosts243 rm /tmp/hosts244 </exec>245

246 <filetree seq="on_boot"root="/tmp/">../../conf/start-lagopus-dpdk.sh</filetree>↪→

247 <exec seq="on_boot" type="verbatim">

Page 139: DEVELOPMENT, DEPLOYMENT AND ANALYSIS OF A SOFTWARE …oa.upm.es/49587/1/TFM_IGNACIO_DOMINGUEZ_MARTINEZ_CASAN… · MPLS MultiprotocolLabelSwitching NAPT NetworkAddressPortTranslation

9.5. cascade-sw12-cnt.xml 121

248 cp /tmp/start-lagopus-dpdk.sh /root/start-lagopus-dpdk.sh249 mkdir /usr/local/etc/lagopus250 ./root/start-lagopus-dpdk.sh -c tcp:136.187.82.138:6633 -d 3 -i eth2 -i

eth3↪→

251 cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm/tmp/lagopus.dsl↪→

252 #lagopus -l /var/log/lagopus.log -d -- -c3 -n2 -- -p3 --core-assignbalance↪→

253 </exec>254

255 <filetree seq="on_boot,sdnmon" root="/tmp/">../../../../SDN-Mon</filetree>256 <exec seq="on_boot,sdnmon" type="verbatim">257 cp -r /tmp/SDN-Mon/* /root/lagopus/258 rm -r /tmp/SDN-Mon259 </exec>260

261 <filetree seq="on_boot,load-modules"root="/tmp/">../../conf/load-modules.sh</filetree>↪→

262 <exec seq="on_boot,load-modules" type="verbatim">263 cd /tmp/ &amp;&amp; ./load-modules.sh264 cd /root/lagopus265 sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0

0000:00:05.0↪→

266 </exec>267 </vm>268

269 </vnx>
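As a brief, hedged illustration of how a scenario like the one above is driven from the host (standard VNX create/execute/purge options; the scenario file name is assumed to match this listing):

    # Deploy the scenario and boot every VM
    sudo vnx -f cascade_lagopus_ryu_dpdk_external_busy.xml -t
    # Trigger one of the command sequences defined in the <exec> tags, e.g. traffic replay
    sudo vnx -f cascade_lagopus_ryu_dpdk_external_busy.xml -x load-pcap
    # Destroy the scenario and release its resources
    sudo vnx -f cascade_lagopus_ryu_dpdk_external_busy.xml -P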

9.5 cascade-sw12-cnt.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
  ~~~~~~~~~~~~~~~~~~~~
  VNX Scenario
  ~~~~~~~~~~~~~~~~~~~~
  Name: cascade-sw12-cnt
  Description: Subscenario of double-switch cascade topology.
               This subscenario plays the main role as it runs the Ryu controller c0.
               It also includes first and second Lagopus switches sw1,sw2 and
               their associated clients h1,h2.
  Author: Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>cascade-sw12-cnt</scenario_name>
    <automac offset="0"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
    <cmd-seq seq='load-hosts'>tcpreplay,bmon</cmd-seq>
  </global>

  <!-- Network Definition -->
  <net name="MgmtNet" mode="openvswitch" mtu="1450"/>
  <net name="link23" mode="openvswitch" mtu="1450"/>
  <net name="link12" mode="virtual_bridge"/>
  <net name="sw1-h1" mode="virtual_bridge"/>
  <net name="sw2-h2" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->
  <vm name="h1" type="lxc" arch="x86_64">
    <filesystem type="cow">../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw1-h1">
      <mac>00:00:00:00:00:01</mac>
      <ipv4>10.0.0.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.3 00:00:00:00:00:03
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="tcpreplay" type="verbatim">
      cd /root/tcpreplay
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/test.pcap --outfile=test-modified.pcap
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/smallFlows.pcap --outfile=smallFlows-modified.pcap
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/bigFlows.pcap --outfile=bigFlows-modified.pcap
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/mawi-sample.pcap --outfile=mawi-sample-modified.pcap
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <vm name="h2" type="lxc" arch="x86_64">
    <filesystem type="cow">../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw2-h2">
      <mac>00:00:00:00:00:02</mac>
      <ipv4>10.0.0.2/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.3 00:00:00:00:00:03
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="tcpreplay" type="verbatim">
      cd /root/tcpreplay
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/test.pcap --outfile=test-modified.pcap
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/smallFlows.pcap --outfile=smallFlows-modified.pcap
      tcprewrite --mtu=1450 --mtu-trunc --enet-dmac=00:00:00:00:00:04 --dstipmap=0.0.0.0/0:10.0.0.4/32 --infile=original/bigFlows.pcap --outfile=bigFlows-modified.pcap
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>

    <exec seq="bmon" type="verbatim">
      cd /root/bmon
      ./autogen.sh
      ./configure
      make
      make install
    </exec>
  </vm>

  <!-- Lagopus Switches -->
  <vm name="sw1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.11/24</ipv4>
    </if>
    <if id="2" net="sw1-h1"/>
    <if id="3" net="link12"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 1 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="sw2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.12/24</ipv4>
    </if>
    <if id="2" net="link12"/>
    <if id="3" net="sw2-h2"/>
    <if id="4" net="link23"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth4 mtu 1450
      sed -i -e '/iface eth4 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 2 -i eth2 -i eth3 -i eth4
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0 0000:00:06.0
    </exec>
  </vm>

  <!-- Ryu Controller -->
  <vm name="c0" type="lxc" arch="x86_64">
    <filesystem type="cow">../filesystems/rootfs_sdn</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.1/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-ryu.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/start-ryu.sh /root/start-ryu.sh
      chmod +x /root/start-ryu.sh
      rm /tmp/start-ryu.sh
    </exec>

    <exec seq="start-ryu" type="verbatim" ostype="exec">./root/start-ryu.sh</exec>
  </vm>

  <!-- Host -->
  <host>
    <hostif net="MgmtNet">
      <ipv4>192.168.1.2/24</ipv4>
    </hostif>
  </host>

</vnx>

9.6 cascade-sw34.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
  ~~~~~~~~~~~~~~~~~~~~
  VNX Scenario
  ~~~~~~~~~~~~~~~~~~~~
  Name: cascade-sw34
  Description: Subscenario of cascade topology network.
               Includes Lagopus switches sw3,sw4 and their associated clients h3,h4.
  Author: Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>cascade-sw34</scenario_name>
    <automac offset="1"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
    <cmd-seq seq='load-hosts'>bmon</cmd-seq>
  </global>

  <!-- Network Definition -->
  <net name="MgmtNet" mode="openvswitch" mtu="1450"/>
  <net name="link23" mode="openvswitch" mtu="1450"/>
  <net name="link34" mode="virtual_bridge"/>
  <net name="sw3-h3" mode="virtual_bridge"/>
  <net name="sw4-h4" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->
  <vm name="h3" type="lxc" arch="x86_64">
    <filesystem type="cow">../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw3-h3">
      <mac>00:00:00:00:00:03</mac>
      <ipv4>10.0.0.3/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.4 00:00:00:00:00:04
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="bmon" type="verbatim">
      cd /root/bmon
      ./autogen.sh
      ./configure
      make
      make install
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

81 <vm name="h4" type="lxc" arch="x86_64">82 <filesystem type="cow">filesystems/rootfs_lxc</filesystem>83 <if id="1" net="sw4-h4">84 <mac>00:00:00:00:00:04</mac>85 <ipv4>10.0.0.4/24</ipv4>86 </if>87 <if id="9" net="virbr0">88 <ipv4>dhcp</ipv4>89 </if>90

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.2 00:00:00:00:00:02
      arp -s 10.0.0.3 00:00:00:00:00:03
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="bmon" type="verbatim">
      cd /root/bmon
      ./autogen.sh
      ./configure
      make
      make install
    </exec>
  </vm>

  <!-- Lagopus Switches -->
  <vm name="sw3" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.13/24</ipv4>
    </if>
    <if id="2" net="link23"/>
    <if id="3" net="sw3-h3"/>
    <if id="4" net="link34"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth4 mtu 1450
      sed -i -e '/iface eth4 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 3 -i eth2 -i eth3 -i eth4
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0 0000:00:06.0
    </exec>
  </vm>

  <vm name="sw4" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.14/24</ipv4>
    </if>
    <if id="2" net="link34"/>
    <if id="3" net="sw4-h4"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 4 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

</vnx>
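As a hedged sketch of how the two subscenarios above are combined into one distributed deployment (host addresses are illustrative placeholders; the create-tunnel script that extends MgmtNet and link23 over GRE is listed in Appendix D):

    # On physical host 1, which runs the Ryu controller c0 and switches sw1,sw2
    sudo vnx -f cascade-sw12-cnt.xml -t
    ./create-tunnel IP_OF_HOST_2
    # On physical host 2, which runs switches sw3,sw4
    sudo vnx -f cascade-sw34.xml -t
    ./create-tunnel IP_OF_HOST_1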

9.7 cascade-sw34-pkt.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
  ~~~~~~~~~~~~~~~~~~~~
  VNX Scenario
  ~~~~~~~~~~~~~~~~~~~~
  Name: cascade-sw34
  Description: Subscenario of cascade topology network.
               Includes Lagopus switches sw3,sw4 and their associated clients h3,h4.
  Author: Ignacio Dominguez Martinez-Casanueva <[email protected]>
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>cascade-sw34</scenario_name>
    <automac offset="1"/>
    <vm_mgmt type="none" />
    <!--vm_mgmt type="private" network="10.250.0.0" mask="24" offset="16">
      <host_mapping />
    </vm_mgmt-->
    <vm_defaults>
      <console id="0" display="no"/>
      <console id="1" display="yes"/>
    </vm_defaults>
  </global>

  <!-- Network Definition -->
  <net name="MgmtNet" mode="openvswitch" mtu="1450"/>
  <net name="link23" mode="openvswitch" mtu="1450"/>
  <net name="link34" mode="virtual_bridge"/>
  <net name="sw3-h3" mode="virtual_bridge"/>
  <net name="sw4-h4" mode="virtual_bridge"/>
  <net name="virbr0" mode="virtual_bridge" managed="no"/>

  <!-- Clients -->
  <vm name="h3" type="lxc" arch="x86_64">
    <filesystem type="cow">../filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw3-h3">
      <mac>00:00:00:00:00:03</mac>
      <ipv4>10.0.0.3/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="on_boot" type="verbatim">
      arp -s 10.0.0.1 00:00:00:00:00:01
      arp -s 10.0.0.2 00:00:00:00:00:02
    </exec>

    <exec seq="bmon" type="verbatim">
      cd /root/bmon
      ./autogen.sh
      ./configure
      make
      make install
    </exec>

    <exec seq="load-pcap" type="verbatim">
      #tcpreplay -i eth1 -K --pps 100 --loop 1 output.pcap
    </exec>
  </vm>

  <!-- <vm name="h4" type="lxc" arch="x86_64">
    <filesystem type="cow">filesystems/rootfs_lxc</filesystem>
    <if id="1" net="sw4-h4">
      <ipv4>10.0.0.4/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>
    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>
    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>
  </vm> -->

  <!-- Lagopus Switches -->
  <vm name="sw3" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.13/24</ipv4>
    </if>
    <if id="2" net="link23"/>
    <if id="3" net="sw3-h3"/>
    <if id="4" net="link34"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth4 mtu 1450
      sed -i -e '/iface eth4 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 3 -i eth2 -i eth3 -i eth4
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0 0000:00:06.0
    </exec>
  </vm>

  <vm name="sw4" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-lago.qcow2</filesystem>
    <if id="1" net="MgmtNet">
      <ipv4>192.168.1.14/24</ipv4>
    </if>
    <if id="2" net="link34"/>
    <if id="3" net="sw4-h4"/>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth2 mtu 1450
      sed -i -e '/iface eth2 inet manual/a \ mtu 1450' /etc/network/interfaces
      ifconfig eth3 mtu 1450
      sed -i -e '/iface eth3 inet manual/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <filetree seq="on_boot" root="/tmp/">../conf/start-lagopus-dpdk-dist.sh</filetree>
    <filetree seq="on_boot" root="/tmp/">../conf/install-lagopus-dpdk.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cp /tmp/install-lagopus-dpdk.sh /root/install-lagopus-dpdk.sh
      cp /tmp/start-lagopus-dpdk-dist.sh /root/start-lagopus-dpdk-dist.sh
      mkdir /usr/local/etc/lagopus
      ./root/start-lagopus-dpdk-dist.sh -c tcp:192.168.1.1:6633 -d 4 -i eth2 -i eth3
      cp /tmp/lagopus.dsl /usr/local/etc/lagopus/lagopus.dsl &amp;&amp; rm /tmp/lagopus.dsl
      #lagopus -l /var/log/lagopus.log -d -- -c3 -n1 -- -p3f --core-assign balance
    </exec>

    <filetree seq="on_boot,sdnmon" root="/tmp/">../../../SDN-Mon</filetree>
    <exec seq="on_boot,sdnmon" type="verbatim">
      cp -r /tmp/SDN-Mon/* /root/lagopus/
      rm -r /tmp/SDN-Mon
    </exec>

    <exec seq="install-lagopus" type="verbatim">
      cd /root &amp;&amp; ./install-lagopus-dpdk.sh
    </exec>

    <filetree seq="load-modules" root="/tmp/">../conf/load-modules.sh</filetree>
    <exec seq="load-modules" type="verbatim">
      cd /tmp/ &amp;&amp; ./load-modules.sh
      cd /root/lagopus
      sudo ./src/dpdk/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
    </exec>
  </vm>

  <vm name="h4" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <mem>2G</mem>
    <filesystem type="cow">/usr/share/vnx/filesystems/vnx_rootfs_kvm_ubuntu64-16.04-v025-pktgen.qcow2</filesystem>
    <if id="1" net="sw4-h4">
      <mac>00:00:00:00:00:04</mac>
      <ipv4>10.0.0.4/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <filetree seq="on_boot" root="/tmp/">../conf/hosts</filetree>
    <filetree seq="on_boot" root="/tmp/">bin/test-MTU.sh</filetree>
    <exec seq="on_boot" type="verbatim">
      cat /tmp/hosts >> /etc/hosts
      rm /tmp/hosts
    </exec>

    <exec seq="on_boot" type="verbatim">
      # Change MgmtNet and TunnNet interfaces MTU
      ifconfig eth1 mtu 1450
      sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces
    </exec>

    <exec seq="build-dpdk" type="verbatim" ostype="exec">
      cd $RTE_SDK
      make install T=x86_64-native-linuxapp-gcc
    </exec>

    <exec seq="build-pktgen" type="verbatim" ostype="exec">
      cd /root/pktgen-dpdk
      make
    </exec>

    <exec seq="pktgen" type="verbatim" ostype="exec">
      ./app/app/x86_64-native-linuxapp-gcc/app/pktgen -c 3 -n 2 --proc-type auto -- -p 0x1 -P -m "1.0"
    </exec>
  </vm>

</vnx>


Chapter 10

Appendix D: Distributed setup

This appendix contains Bash scripts for the creation and deletion of the GRE tunnels required for distributed scenarios.
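As a brief usage sketch (the destination address and host name below are illustrative; the argument handling is the one implemented by the scripts that follow):

    # On each physical host, point the tunnel at the other end of the pair;
    # the argument may be an IPv4 address or a resolvable host name
    ./create-tunnel 192.0.2.20
    ./create-tunnel remote-lab-host
    # When the distributed tests are over, remove the OVS tunnel bridges
    ./cleanup-tunnel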

10.1 create-tunnel

#!/bin/bash

#
# GRE Tunnel creation script
# Based on VNX's Openstack Mitaka tutorial scenario
#

TUNNSWITCH_NAME='lab'

if [ "$#" -ne 1 ]; then
    echo "--"
    echo "-- Usage: create-tunnel <ip_address|name>"
    echo "--"
    exit 1
fi

TUNN_DST=$1

# Test an IP address for validity:
# From: http://www.linuxjournal.com/content/validating-ip-address-bash-script
# Usage:
#   valid_ip IP_ADDRESS
#   if [[ $? -eq 0 ]]; then echo good; else echo bad; fi
#   OR
#   if valid_ip IP_ADDRESS; then echo good; else echo bad; fi
#
function valid_ip()
{
    local ip=$1
    local stat=1

    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

#
# Main
#
if ! valid_ip $TUNN_DST; then

    if grep -q "^[a-zA-Z]" <<< $TUNN_DST; then
        # It is a name, we try to translate...
        IPADDR=$( getent hosts $TUNN_DST | awk '{ print $1 }' )
        echo $IPADDR
        if ! valid_ip $IPADDR; then
            echo "--"
            echo "-- ERROR: cannot get IP address associated to $TUNN_DST domain name"
            echo "--"
            exit 1
        else
            MSG="$TUNN_DST ($IPADDR)"
            TUNN_DST=$IPADDR
        fi
    else
        # Not a name, it is an error...
        echo "--"
        echo "-- ERROR: $TUNN_DST is not a valid IPv4 address"
        echo "--"
        exit 1
    fi
else
    MSG=$TUNN_DST
fi


echo "--"
echo "-- Creating tunnel to $MSG.."
echo "--"

if ! sudo ovs-vsctl br-exists ${TUNNSWITCH_NAME}; then
    # Create switches
    echo "-- Creating ${TUNNSWITCH_NAME} switch..."
    sudo ovs-vsctl --may-exist add-br ${TUNNSWITCH_NAME}
    echo "-- Creating ${TUNNSWITCH_NAME}-mgmtnet fake switch..."
    sudo ovs-vsctl --may-exist add-br ${TUNNSWITCH_NAME}-mgmtnet ${TUNNSWITCH_NAME} 1000
    echo "-- Creating ${TUNNSWITCH_NAME}-linknet fake switch..."
    sudo ovs-vsctl --may-exist add-br ${TUNNSWITCH_NAME}-linknet ${TUNNSWITCH_NAME} 1001

    echo "-- Establishing connections for MgmtNet scenario network..."
    # Create MgmtNet veth pair and connect to switches
    sudo ip link add ${TUNNSWITCH_NAME}-mgmt1 type veth peer name ${TUNNSWITCH_NAME}-mgmt2
    sudo ip link set ${TUNNSWITCH_NAME}-mgmt1 up
    sudo ip link set ${TUNNSWITCH_NAME}-mgmt2 up
    sudo ovs-vsctl add-port MgmtNet ${TUNNSWITCH_NAME}-mgmt1
    sudo ovs-vsctl add-port ${TUNNSWITCH_NAME}-mgmtnet ${TUNNSWITCH_NAME}-mgmt2

    echo "-- Establishing connections for link23 scenario network..."
    # Create link23 veth pair and connect to switches
    sudo ip link add ${TUNNSWITCH_NAME}-linkn1 type veth peer name ${TUNNSWITCH_NAME}-linkn2
    sudo ip link set ${TUNNSWITCH_NAME}-linkn1 up
    sudo ip link set ${TUNNSWITCH_NAME}-linkn2 up
    sudo ovs-vsctl add-port link23 ${TUNNSWITCH_NAME}-linkn1
    sudo ovs-vsctl add-port ${TUNNSWITCH_NAME}-linknet ${TUNNSWITCH_NAME}-linkn2

fi

echo "-- Establishing tunnel to $TUNN_DST..."
sudo ovs-vsctl add-port ${TUNNSWITCH_NAME} tun-$TUNN_DST -- set Interface tun-$TUNN_DST type=gre options:remote_ip=$TUNN_DST
echo "-- ...done."


10.2 cleanup-tunnel

#!/bin/bash

#
# This script cleans GRE tunnel bridges
#

echo "-----------------------------------------------------------------------"
echo "--- Cleaning GRE tunnel ..."
echo "---"

sudo ovs-vsctl del-br MgmtNet
sudo ovs-vsctl del-br link23
sudo ovs-vsctl del-br lab

echo "---"
echo "--- Done"
echo "-----------------------------------------------------------------------"


Chapter 11

Appendix E: User applications

This appendix contains a set of example Python applications that benefit from deployed SDN modules such as SDN-Mon or the built-in Topology Discovery. These applications talk to the Ryu SDN controller by sending REST queries through the provided NorthBound Interface. Additionally, this appendix includes the REST API extension of the SDN-Mon module.
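As a brief, hedged illustration of how this NorthBound Interface is queried from the command line (the controller address 192.168.1.1 and port 8080 correspond to the scenarios of Appendix C and Ryu's default WSGI port; the endpoints are the ones defined in sdnmon_rest.py below):

    # Full monitoring table of all switches
    curl http://192.168.1.1:8080/sdnmon/table
    # Flows towards the 10.0.0.0/24 client network
    curl http://192.168.1.1:8080/sdnmon/dst_ip/10.0.0.0/netmask/255.255.255.0
    # Flows whose byte count exceeds 1000 bytes
    curl http://192.168.1.1:8080/sdnmon/byte_count/1000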

11.1 sdnmon_rest.py

import json

from ryu.app.wsgi import ControllerBase
from ryu.app.wsgi import Response
from ryu.app.wsgi import route
from ryu.app.wsgi import WSGIApplication

from ryu.lib import dpid as dpid_lib
import ipaddress

import sdnmon_stats_monitor

url = '/sdnmon'
_IPPROTO_LEN = 3  # Highest value is 132 (SCTP) according to OF1.5 spec
IPPROTO_PATTERN = r'[0-9]{1,%d}' % _IPPROTO_LEN

_PORT_LEN = 5
PORT_PATTERN = r'\d{1,%d}' % _PORT_LEN
PORT_RANGE_PATTERN = r'\d{1,%d}(?:-\d{1,%d})*' % (_PORT_LEN, _PORT_LEN)

IP_ADDRESS_PATTERN = r'(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])'

_PACKET_COUNT_LEN = 1000000  # Suitable value still to be determined
PACKET_COUNT_PATTERN = r'[0-9]{1,%d}' % _PACKET_COUNT_LEN

_BYTE_COUNT_LEN = 1000000  # Suitable value still to be determined
BYTE_COUNT_PATTERN = r'[0-9]{1,%d}' % _BYTE_COUNT_LEN

# monitoring_table = {
#     1: {
#         3507289692362160130: {
#             'src_ip': '74.125.226.77',
#             'packet_count': 2,
#             'src_port': 20480,
#             'dst_port': 51148,
#             'proto': 6,
#             'byte_count': 128,
#             'dst_ip': '192.168.1.101'
#         },
#         8340101101733500949: {
#             'src_ip': '172.16.133.34',
#             'packet_count': 10,
#             'src_port': 52420,
#             'dst_port': 16405,
#             'proto': 6,
#             'byte_count': 616,
#             'dst_ip': '192.168.1.101'
#         }
#     },
#     2: {
#         6971737281905252357: {
#             'src_ip': '172.16.133.78',
#             'packet_count': 1,
#             'src_port': 10982,
#             'dst_port': 56639,
#             'proto': 6,
#             'byte_count': 64,
#             'dst_ip': '192.168.1.101'
#         }
#     }
# }


# class SDNMonAPI(app_manager.RyuApp):
class SDNMonAPI(sdnmon_stats_monitor.SDNMONMonitor):
    _CONTEXTS = {
        'wsgi': WSGIApplication
    }

    def __init__(self, *args, **kwargs):
        super(SDNMonAPI, self).__init__(*args, **kwargs)
        wsgi = kwargs['wsgi']
        wsgi.register(SDNMonController, {'sdnmon_api_app': self})


class SDNMonController(ControllerBase):

    def __init__(self, req, link, data, **config):
        super(SDNMonController, self).__init__(req, link, data, **config)
        self.sdnmon_api_app = data['sdnmon_api_app']
        self.table = self.sdnmon_api_app.global_m_tables

    def find_rows(self, field, match):

        sel_flows = {}
        for switchID in self.table.iterkeys():
            for entryHash, tuples in self.table[switchID].iteritems():
                if tuples[field] == match:
                    if switchID in sel_flows:
                        sel_flows[switchID][entryHash] = tuples
                    else:
                        sel_flows[switchID] = {entryHash: tuples}

        return sel_flows

    def find_in_range(self, field, start, end):

        sel_flows = {}
        for value in range(start, end + 1):
            for switchID in self.table.iterkeys():
                for entryHash, tuples in self.table[switchID].iteritems():
                    if tuples[field] == value:
                        if switchID in sel_flows:
                            sel_flows[switchID][entryHash] = tuples
                        else:
                            sel_flows[switchID] = {entryHash: tuples}

        return sel_flows

    def find_threshold(self, field, threshold):

        sel_flows = {}
        for switchID in self.table.iterkeys():
            for entryHash, tuples in self.table[switchID].iteritems():
                if tuples[field] > threshold:
                    if switchID in sel_flows:
                        sel_flows[switchID][entryHash] = tuples
                    else:
                        sel_flows[switchID] = {entryHash: tuples}

        return sel_flows

    def find_in_network(self, field, address, mask):

        sel_flows = {}
        for switchID in self.table.iterkeys():
            for entryHash, tuples in self.table[switchID].iteritems():
                if self.belongs_to_network(tuples[field], address, mask):
                    if switchID in sel_flows:
                        sel_flows[switchID][entryHash] = tuples
                    else:
                        sel_flows[switchID] = {entryHash: tuples}

        return sel_flows

    #
    # MORE INFORMATION: https://pynet.twb-tech.com/blog/python/ipaddress.html
    #

    @staticmethod
    def belongs_to_network(ip_address, network, mask):

        target_addr = ipaddress.ip_address(unicode(ip_address))
        net_addr = ipaddress.ip_network(unicode(network + '/' + mask))

        return target_addr in net_addr

    #
    # TODO:
    #
    # Instead of manually setting _IPPROTO_LEN,
    # the maximum allowed length of the proto code should be determined
    # by checking all imported variables in ryu.lib.packet.in_proto
    #
    # def check_ip_protocols(self):
    #     best = 0
    #     for value in dir(ip_protocols):
    #         if 'IPPROTO' in value:
    #             code = getattr(ip_protocols, value)
    #             print code
    #             if dir(ip_protocols).value > best:
    #                 best = ip_protocols.value
    #     print best

    #
    #
    # REST API for SDN-Mon framework
    #
    #
    # get monitoring table
    # GET /sdnmon/table
    #
    # get all flows of specific switch
    # GET /sdnmon/sw_id/<dpid>
    #
    # get all flows of specific source IP address
    # GET /sdnmon/src_ip/<ip_address>
    #
    # get all flows matched by specific source network
    # GET /sdnmon/src_ip/<ip_address>/netmask/<mask>
    #
    # get all flows of specific destination IP address
    # GET /sdnmon/dst_ip/<ip_address>
    #
    # get all flows matched by specific destination network
    # GET /sdnmon/dst_ip/<ip_address>/netmask/<mask>
    #
    # get all flows of specific IP protocol
    # GET /sdnmon/proto/<protocol_id>
    #
    # get all flows of specific source port
    # GET /sdnmon/src_port/<port_number>
    #
    # get all flows matched by source port range
    # GET /sdnmon/src_port/<start_port_number>-<end_port_number>
    #
    # get all flows of specific destination port
    # GET /sdnmon/dst_port/<port_number>
    #
    # get all flows matched by destination port range
    # GET /sdnmon/dst_port/<start_port_number>-<end_port_number>
    #
    # get all flows whose packet count value exceeds specified threshold
    # GET /sdnmon/packet_count/<threshold>
    #
    # get all flows whose byte count value exceeds specified threshold
    # GET /sdnmon/byte_count/<threshold>
    #
    # where
    # <dpid>: datapath id in 16-digit hex

    @route('sdnmon', url + '/table', methods=['GET'],
           requirements={})
    def get_table(self, req, **kwargs):
        sel_flows = {'switches': {}}
        for switchID in self.table.iterkeys():
            for entryHash, tuples in self.table[switchID].iteritems():
                if switchID in sel_flows['switches']:
                    sel_flows['switches'][switchID][entryHash] = tuples
                else:
                    sel_flows['switches'][switchID] = {entryHash: tuples}

        sel_flows['interval'] = self.sdnmon_api_app.query_time_interval
        sel_flows['timestamp'] = self.sdnmon_api_app.query_time

        body = json.dumps(sel_flows)
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/sw_id/{dpid}', methods=['GET'],
           requirements={'dpid': dpid_lib.DPID_PATTERN})
    def get_switch(self, req, **kwargs):

        sel_flows = {'entries': {}}
        sw_id = dpid_lib.str_to_dpid(kwargs['dpid'])
        for entryHash, tuples in self.table[sw_id].iteritems():
            sel_flows['entries'][entryHash] = tuples

        sel_flows['interval'] = self.sdnmon_api_app.query_time_interval
        sel_flows['timestamp'] = self.sdnmon_api_app.query_time

        body = json.dumps(sel_flows)
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/src_ip/{ip_address}', methods=['GET'],
           requirements={'ip_address': IP_ADDRESS_PATTERN})
    def get_src_ip(self, req, **kwargs):

        ip_address = str(kwargs['ip_address'])
        body = json.dumps(self.find_rows('src_ip', ip_address))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/src_ip/{ip_address}/netmask/{netmask}',
           methods=['GET'],
           requirements={'ip_address': IP_ADDRESS_PATTERN,
                         'netmask': IP_ADDRESS_PATTERN})
    def get_src_ip_network(self, req, **kwargs):

        ip_address = kwargs['ip_address']
        netmask = kwargs['netmask']
        body = json.dumps(self.find_in_network('src_ip', ip_address, netmask))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/dst_ip/{ip_address}', methods=['GET'],
           requirements={'ip_address': IP_ADDRESS_PATTERN})
    def get_dst_ip(self, req, **kwargs):

        ip_address = str(kwargs['ip_address'])
        body = json.dumps(self.find_rows('dst_ip', ip_address))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/dst_ip/{ip_address}/netmask/{netmask}',
           methods=['GET'],
           requirements={'ip_address': IP_ADDRESS_PATTERN,
                         'netmask': IP_ADDRESS_PATTERN})
    def get_dst_ip_network(self, req, **kwargs):

        ip_address = kwargs['ip_address']
        netmask = kwargs['netmask']
        body = json.dumps(self.find_in_network('dst_ip', ip_address, netmask))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/proto/{proto_id}', methods=['GET'],
           requirements={'proto_id': IPPROTO_PATTERN})
    def get_proto(self, req, **kwargs):

        proto_id = int(kwargs['proto_id'])
        body = json.dumps(self.find_rows('proto', proto_id))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/src_port/{port_number}', methods=['GET'],
           requirements={'port_number': PORT_PATTERN})
    def get_src_port(self, req, **kwargs):

        src_port = int(kwargs['port_number'])
        print src_port
        body = json.dumps(self.find_rows('src_port', src_port))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/src_port/{start_port}-{end_port}',
           methods=['GET'],
           requirements={'start_port': PORT_RANGE_PATTERN,
                         'end_port': PORT_RANGE_PATTERN})
    def get_src_port_range(self, req, **kwargs):

        start_port = int(kwargs['start_port'])
        end_port = int(kwargs['end_port'])

        body = json.dumps(self.find_in_range('src_port', start_port, end_port))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/dst_port/{port_number}', methods=['GET'],
           requirements={'port_number': PORT_PATTERN})
    def get_dst_port(self, req, **kwargs):

        dst_port = int(kwargs['port_number'])
        body = json.dumps(self.find_rows('dst_port', dst_port))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/dst_port/{start_port}-{end_port}',
           methods=['GET'],
           requirements={'start_port': PORT_RANGE_PATTERN,
                         'end_port': PORT_RANGE_PATTERN})
    def get_dst_port_range(self, req, **kwargs):

        start_port = int(kwargs['start_port'])
        end_port = int(kwargs['end_port'])

        body = json.dumps(self.find_in_range('dst_port', start_port, end_port))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/packet_count/{threshold}', methods=['GET'],
           requirements={'threshold': PACKET_COUNT_PATTERN})
    def get_packet_count(self, req, **kwargs):

        threshold = int(kwargs['threshold'])
        body = json.dumps(self.find_threshold('packet_count', threshold))
        return Response(content_type='application/json', body=body)

    @route('sdnmon', url + '/byte_count/{threshold}', methods=['GET'],
           requirements={'threshold': BYTE_COUNT_PATTERN})
    def get_byte_count(self, req, **kwargs):

        threshold = int(kwargs['threshold'])
        body = json.dumps(self.find_threshold('byte_count', threshold))
        return Response(content_type='application/json', body=body)

    # Taken from the simple_switch_rest_13.py implementation
    # Response is MAC forwarding table 'destination_mac':'output_port'
    @route('sdnmon', url + '/forwarding/{dpid}', methods=['GET'],
           requirements={'dpid': dpid_lib.DPID_PATTERN})
    def list_mac_table(self, req, **kwargs):

353 simple_switch = self.sdnmon_api_app354 dpid = dpid_lib.str_to_dpid(kwargs['dpid'])355

356 if dpid not in simple_switch.mac_to_port:357 return Response(status=404)358

359 mac_table = simple_switch.mac_to_port.get(dpid, {})360 body = json.dumps(mac_table)361 return Response(content_type='application/json', body=body)362

363 @route('sdnmon', url + '/forwarding/all', methods=['GET'],364 requirements={'dpid': dpid_lib.DPID_PATTERN})365 def list_all_mac_tables(self, req, **kwargs):366

367 simple_switch = self.sdnmon_api_app368 sel_rules = {}369

370 # This is risky as we assume all switches are monitoring371 for dpid in self.table.iterkeys():372 mac_table = simple_switch.mac_to_port.get(dpid, {})373 sel_rules[dpid] = mac_table374

375 body = json.dumps(sel_rules)376 return Response(content_type='application/json', body=body)377

378 @route('sdnmon', url + '/paths', methods=['GET'],379 requirements={})380 def get_paths_table(self, req, **kwargs):381

382 paths_table = self.sdnmon_api_app.paths_table383

384 body = json.dumps(paths_table)385 return Response(content_type='application/json', body=body)
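The endpoints above can be exercised with any HTTP client. Below is a minimal query sketch, assuming the controller's REST server listens on 192.168.1.1:8080 (the address used by the user applications in the following sections) and that url resolves to the /sdnmon prefix; the port and threshold values are illustrative only:

import requests

BASE = 'http://192.168.1.1:8080/sdnmon'

# Monitoring entries matching IP protocol 6 (TCP)
print requests.get(BASE + '/proto/6').json()

# Entries whose source port falls within the range 5000-6000
print requests.get(BASE + '/src_port/5000-6000').json()

# Entries whose byte counter exceeds 1000000 bytes
print requests.get(BASE + '/byte_count/1000000').json()

# MAC forwarding table of the switch with datapath ID 0000000000000001
print requests.get(BASE + '/forwarding/0000000000000001').json()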


11.2 user_app_top_five.py

import requests
import json
from prettytable import PrettyTable
import time
import curses
import operator

from topo_app import NetworkTopology

MAX_TOP_TALKERS = 5


def is_top_talker(talker, table, field, field_addition):
    # If the list holds fewer than MAX_TOP_TALKERS entries, add it anyway
    if len(table[field]) < MAX_TOP_TALKERS:
        table[field].append((talker, field_addition))
    # Otherwise, check if it is a new top-talker
    else:
        removed = None
        addition = 0

        for target, count in table[field]:
            # Replace with the new top-talker
            if count < field_addition:
                addition = field_addition
                removed = (target, count)
        # An existing item is to be pushed out
        if removed is not None:
            table[field].remove(removed)
            table[field].append((talker, addition))

    # Order the top-talkers list, highest count first. More info at:
    # http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value
    sorted_x = sorted(table[field], key=operator.itemgetter(1))
    sorted_x.reverse()

    table[field] = sorted_x


def display_table(scr):
    # 'scr' is the curses screen supplied by curses.wrapper() below

    topo = NetworkTopology()

    top_talkers_byte = {
        'src_ip': [],
        'dst_ip': [],
        'src_port': [],
        'dst_port': [],
        '5-tuple': [],
    }

    top_talkers_packet = {
        'src_ip': [],
        'dst_ip': [],
        'src_port': [],
        'dst_port': [],
        '5-tuple': [],
    }

    latest_timestamp = 0
    jData = {}

    while True:
        r = requests.get('http://192.168.1.1:8080/sdnmon/table')
        if not r.ok:
            r.raise_for_status()
            exit()

        if json.loads(r.content)['timestamp'] > latest_timestamp:
            jData = json.loads(r.content)
            latest_timestamp = jData['timestamp']
            switches = jData['switches']

            # The following code builds top_talkers_byte
            for field in top_talkers_byte.iterkeys():
                checked_talkers = []

                if field != '5-tuple':
                    for switchID, entries in switches.iteritems():
                        for tuples in entries.itervalues():
                            talker = tuples[field]
                            byte_addition = 0
                            if talker not in checked_talkers:
                                # Find the talker among all monitoring entries
                                for switchID2, entries2 in switches.iteritems():
                                    for tuples2 in entries2.itervalues():
                                        # Add up matched monitoring entries
                                        if talker == tuples2[field]:
                                            byte_addition += tuples2['byte_count']

                                # Check if top-talker after the addition of
                                # combined flows
                                is_top_talker(talker, top_talkers_byte,
                                              field, byte_addition)

                                # Discard checked talker for next lookups
                                checked_talkers.append(talker)

                else:
                    for switchID, entries in switches.iteritems():
                        for entryHash, tuples in entries.iteritems():
                            is_top_talker(entryHash, top_talkers_byte,
                                          field, tuples['byte_count'])

            # The following code builds top_talkers_packet
            for field in top_talkers_packet.iterkeys():
                checked_talkers = []

                if field != '5-tuple':
                    for switchID, entries in switches.iteritems():
                        for tuples in entries.itervalues():
                            talker = tuples[field]
                            packet_addition = 0
                            if talker not in checked_talkers:
                                # Find the talker among all monitoring entries
                                for switchID2, entries2 in switches.iteritems():
                                    for tuples2 in entries2.itervalues():
                                        # Add up matched monitoring entries
                                        if talker == tuples2[field]:
                                            packet_addition += tuples2['packet_count']

                                # Check if top-talker after the addition of
                                # combined flows
                                is_top_talker(talker, top_talkers_packet,
                                              field, packet_addition)

                                # Discard checked talker for next lookups
                                checked_talkers.append(talker)

                else:
                    for switchID, entries in switches.iteritems():
                        for entryHash, tuples in entries.iteritems():
                            is_top_talker(entryHash, top_talkers_packet,
                                          field, tuples['packet_count'])

            topo.discover()
            topo.print_table()

            # TODO: move this code to an external function
            r_paths = requests.get('http://192.168.1.1:8080/sdnmon/paths')
            if r_paths.ok:
                paths = json.loads(r_paths.content)
                for entryHash, switchList in paths.iteritems():
                    for switch in switchList:
                        print switch

        # Note: plain print output above is not visible while curses
        # controls the terminal
        table_byte = PrettyTable(['timestamp = %s' % latest_timestamp,
                                  'byte_count'])
        table_byte.align['byte_count'] = 'r'
        for field, talkers in top_talkers_byte.iteritems():
            for talker, count in talkers:
                table_byte.add_row(['%s = %s' % (field, talker),
                                    '%s' % count])

        table_packet = PrettyTable(['timestamp = %s' % latest_timestamp,
                                    'packet_count'])
        table_packet.align['packet_count'] = 'r'
        for field, talkers in top_talkers_packet.iteritems():
            for talker, count in talkers:
                table_packet.add_row(['%s = %s' % (field, talker),
                                      '%s' % count])

        localtime = time.strftime("%a, %d %b %Y %H:%M:%S +0000",
                                  time.localtime())
        scr.addstr(2, 2, "******* Network's top-talkers *******")
        scr.addstr(4, 2, str("Local Time: " + localtime))
        scr.addstr(6, 0, str(table_byte))
        scr.addstr(16, 0, str(table_packet))
        scr.refresh()

        time.sleep(jData['interval'])

    # curses.wrapper() below calls curses.endwin() on exit, restoring
    # the terminal


# Start the real-time stats display
if __name__ == '__main__':
    try:
        curses.wrapper(display_table)
    except KeyboardInterrupt:
        print "Got KeyboardInterrupt exception. Exiting..."
        exit()
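To illustrate how the per-field rankings evolve, the following minimal sketch exercises is_top_talker() in isolation; the addresses and byte counters are made up for the example:

table = {'src_ip': []}
samples = [('10.0.0.1', 500), ('10.0.0.2', 9000), ('10.0.0.3', 100),
           ('10.0.0.4', 4200), ('10.0.0.5', 700), ('10.0.0.6', 8000)]

# Feed the samples one by one, as the main loop does per monitoring entry
for ip, byte_count in samples:
    is_top_talker(ip, table, 'src_ip', byte_count)

print table['src_ip']
# [('10.0.0.2', 9000), ('10.0.0.6', 8000), ('10.0.0.4', 4200),
#  ('10.0.0.5', 700), ('10.0.0.1', 500)]
# The sixth talker displaced the smallest entry, ('10.0.0.3', 100).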

11.3 topo_app.py

import requests
import json


class NetworkTopology:

    def __init__(self):
        self.switches = {}
        self.hosts = {}
        self.links = {}
        self.url = 'http://192.168.1.1:8080/v1.0/topology/'
        # Interface structure: (switchID, portID)
        self.router = []
        self.routing_table = {}

    def discover(self):
        r_switches = requests.get(self.url + 'switches')
        r_hosts = requests.get(self.url + 'hosts')
        r_links = requests.get(self.url + 'links')
        if r_switches.ok and r_hosts.ok and r_links.ok:
            self.switches = json.loads(r_switches.content)
            self.hosts = json.loads(r_hosts.content)
            self.links = json.loads(r_links.content)

            self.__build_router()
            self.__build_table()

        else:
            # Not the correct way: only the request that failed will raise
            r_switches.raise_for_status()
            r_hosts.raise_for_status()
            r_links.raise_for_status()
            exit()

    def print_switches(self):
        print "*"
        print "******* Network's Switches *******"
        print "*"
        print json.dumps(self.switches, sort_keys=True,
                         indent=4, separators=(',', ': '))
        print "*"

    def print_hosts(self):
        print "*"
        print "******** Network's Hosts *********"
        print "*"
        print json.dumps(self.hosts, sort_keys=True,
                         indent=4, separators=(',', ': '))
        print "*"

    def print_links(self):
        print "*"
        print "******** Network's Links *********"
        print "*"
        print json.dumps(self.links, sort_keys=True,
                         indent=4, separators=(',', ': '))
        print "*"

    def print_table(self):
        print "*"
        print "******** Network's Routing Table *********"
        print "*"
        print "*** IP Address => ( SwitchID, PortID ) ***"
        print "*"
        print json.dumps(self.routing_table, sort_keys=True,
                         indent=4, separators=(',', ': '))
        print "*"

    def __build_router(self):
        # Reset the big router to account for newly blocked ports
        self.router = []
        for switch in self.switches:
            used_ports = []
            for port in switch['ports']:
                for link in self.links:
                    for key, value in link.iteritems():
                        if switch['dpid'] == value['dpid'] and \
                                port['port_no'] == value['port_no']:
                            used_ports.append(port)

                if port not in used_ports:
                    interface = (switch['dpid'], port['port_no'])
                    if interface not in self.router:
                        self.router.append(interface)

    def __build_table(self):
        for host in self.hosts:
            for switchID, portID in self.router:
                if switchID == host['port']['dpid'] and \
                        portID == host['port']['port_no']:
                    for address in host['ipv4']:
                        self.routing_table[address] = (switchID, portID)
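A minimal usage sketch of this class; it assumes the controller at 192.168.1.1:8080 also serves the /v1.0/topology/ endpoints queried above (e.g. via Ryu's rest_topology application):

from topo_app import NetworkTopology

topo = NetworkTopology()
topo.discover()      # fetch switches, hosts and links; rebuild router and table
topo.print_links()   # dump the discovered inter-switch links
topo.print_table()   # dump the IP address => (switchID, portID) mapping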

11.4 active_flows.py

import requests
import json
import time
import os

MAX_QUERY_NUMBER = 6
dir_path = os.path.dirname(os.path.realpath(__file__))

status_table = {}
active_flows = {}


def discover_active():

    latest_timestamp = 0
    jData = {}

    while True:
        r = requests.get('http://192.168.1.1:8080/sdnmon/table')
        if r.ok:
            if json.loads(r.content)['timestamp'] > latest_timestamp:
                jData = json.loads(r.content)
                # Remember the timestamp of the data just processed
                latest_timestamp = jData['timestamp']
                for switch in jData['switches']:
                    for entryHash, tuples in \
                            jData['switches'][switch].iteritems():
                        if entryHash not in status_table:
                            status_table[entryHash] = {
                                'byte_count': tuples['byte_count'],
                                'packet_count': tuples['packet_count'],
                                'timeout': 0
                            }
                            active_flows[entryHash] = tuples
                        else:
                            last_pkt = status_table[entryHash]['packet_count']
                            last_byte = status_table[entryHash]['byte_count']
                            # Check whether the flow was updated
                            if last_pkt < tuples['packet_count'] or \
                                    last_byte < tuples['byte_count']:
                                status_table[entryHash] = {
                                    'byte_count': tuples['byte_count'],
                                    'packet_count': tuples['packet_count'],
                                    'timeout': 0
                                }
                                active_flows[entryHash] = tuples
                            else:  # Flow did not receive an update
                                if status_table[entryHash]['timeout'] < \
                                        MAX_QUERY_NUMBER:
                                    # Flow still under the timeout threshold
                                    status_table[entryHash]['timeout'] += 1
                                else:  # Flow is considered inactive
                                    if entryHash in active_flows:
                                        del active_flows[entryHash]

        else:
            r.raise_for_status()
            exit()

        print "*"
        print "******** Active Flows *********"
        print "*"
        print json.dumps(active_flows,
                         indent=4, separators=(',', ': '))
        print "*"

        print "Current number of active flows: %s" % len(active_flows)
        time.sleep(jData['interval'])


if __name__ == '__main__':
    try:
        discover_active()
    except KeyboardInterrupt:
        print "Got KeyboardInterrupt exception. Exiting..."
        with open(dir_path + '/active_flows_list.json', 'w') as f:
            json.dump(active_flows, f, indent=2, separators=(',', ': '))
        print "Active flow list saved to file!"
        print "Total amount of active flows: %s" % len(active_flows)
        exit()
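For reference, the structure below is a hypothetical /sdnmon/table response, reconstructed from the fields that the applications in this appendix read; the actual schema is produced by the controller application and may differ:

example_response = {
    "timestamp": 1514764800,      # epoch of the last table update
    "interval": 5,                # monitoring interval in seconds
    "switches": {
        "0000000000000001": {     # switch datapath ID
            "8f3a51": {           # per-flow entry hash (5-tuple key)
                "src_ip": "10.0.0.1",
                "dst_ip": "10.0.0.2",
                "proto": 6,
                "src_port": 45123,
                "dst_port": 80,
                "packet_count": 1200,
                "byte_count": 1500000
            }
        }
    }
}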
