
Circuits provisioning in PIONIER with AutoBAHN system

Radek Krzywania [email protected]

ToC

• Introduction to PIONIER
• AutoBAHN deployment in PIONIER network
  – Topology
  – Network abstraction
  – Reservation process
  – Potential usage

Introduction to PIONIER

PIONIER Infrastructure

• PIONIER is the Polish NREN, dedicated to interconnecting all research and academic institutions in Poland

• The infrastructure interconnects 22 MANs and HPC centers

• PSNC is the operator of PIONIER

PIONIER Infrastructure

• 5,300 km of own fiber optic cables
• ADVA DWDM equipment for L1
• Foundry Networks NetIron XMR 8000 series switches for L2/L3 in 22 MAN centers
• Juniper M5 router for L3

PIONIER Infrastructure

[Network map legend: MAN Eth 1 Gb/s; SDH 2.5 Gb/s; 2×10 Gb/s (2 λ); CBDF 10 Gb/s (2 λ)]

PIONIER place in Europe

• PIONIER in Europe after 5 years of operation:
  – 4th place among EU and EFTA countries in core network size (Mb/s × km)
  – 1st place among EU and EFTA countries in core network capacity (Mb/s) (equal to SURFnet)
  – The highest number of CBDF links among EU countries (4 operating + 4 more planned on a short time scale)
  – 5th place in outgoing traffic and 6th in incoming traffic to the NREN backbone
• Source: TERENA Compendium 2008

AutoBAHN deployment in PIONIER network

Topology

AutoBAHN in PIONIER

• PSNC has been an active partner in the GEANT2 JRA3 activity (AutoBAHN) since its very beginning

• The PIONIER testbed infrastructure was one of the first to deploy an AutoBAHN instance for dynamic circuit management

• The testbed equipment is exactly the same as in the parallel operational infrastructure

PIONIER topology for AutoBAHN

• The NetIron XMR 8000 switches are interconnected with 10 Gb/s interfaces

• The AutoBAHN Technology Proxy has access to each of the boxes via a CLI interface

• The resources are seen as a single MPLS cloud

PIONIER topology for AutoBAHN

• The Technology Proxy is aware of each piece of equipment in the network (all XMR boxes)

• The DM is provided with limited topology information, in which only edge switches are present (those connected to end points or having external connections to neighbour domains)

• The IDM topology is similar to the one at the DM level; however, the network equipment details are hidden and information about the neighbour and global topology is included (see the sketch below)
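
To make the three views concrete, here is a minimal Python sketch of how the same network might be represented at each level; all node and domain names are hypothetical, and this is not the actual AutoBAHN data model:

```python
# Sketch of the three topology views (hypothetical names, not the real
# AutoBAHN data model).

# Technology Proxy: full physical view, every XMR box and 10 Gb/s link.
tp_view = {
    "nodes": ["xmr1", "xmr2", "xmr3", "xmr4"],
    "links": [("xmr1", "xmr2"), ("xmr2", "xmr3"),
              ("xmr3", "xmr4"), ("xmr1", "xmr4")],
}

# DM: only edge switches; the MPLS core collapses to one abstract link.
dm_view = {
    "nodes": ["xmr1", "xmr4"],        # edge switches only
    "links": [("xmr1", "xmr4")],      # the whole cloud as a single link
}

# IDM: DM-level shape, equipment details hidden, neighbour topology added.
idm_view = {
    "nodes": ["PIONIER.ep1", "PIONIER.ep2"],
    "links": [("PIONIER.ep1", "PIONIER.ep2")],
    "inter_domain": [("PIONIER.ep2", "GEANT2.ep7")],
}

print(len(tp_view["links"]), "physical links become",
      len(dm_view["links"]), "abstract DM link")
```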

AutoBAHN deployment in PIONIER network

Network abstraction

Topology Abstraction Process

• The MPLS cloud abstraction decreases the amount of information about the physical network topology

• Only reachability information between domain edge points is provided, together with additional link metrics

• Pathfinding is limited to the definition of the ingress and egress network node and port

• The intermediate nodes are selected automatically by MPLS, with limited influence from the administrator or the AutoBAHN system (illustrated below)
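
Since the MPLS cloud selects the interior hops itself, the domain-level pathfinder effectively reduces to naming the ingress and egress; a toy illustration (hypothetical function, not AutoBAHN code):

```python
# Toy pathfinder for an MPLS-cloud domain: only the circuit endpoints are
# chosen; interior hops are left to MPLS (hypothetical code).

def find_path(ingress_node, ingress_port, egress_node, egress_port):
    """Return just the endpoints; no hop list is computed."""
    return {
        "ingress": (ingress_node, ingress_port),
        "egress": (egress_node, egress_port),
        # Intermediate nodes are selected automatically by the MPLS
        # control plane, outside AutoBAHN's influence.
    }

print(find_path("xmr1", "eth1/1", "xmr4", "eth2/3"))
```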

MPLS cloud issues for AutoBAHN

• At the beginning of the work, AutoBAHN was assumed to have full control over physical network resources

• The MPLS cloud abstraction prevents AutoBAHN from seeing all particular links in the network
• The overall capacity of network links must be abstracted, which causes loss of some information
• Control of booked and used network capacity is limited to heuristic accuracy (sketched below)
• Pathfinding is limited to defining just the source and destination end ports and nodes in the topology
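
One way to picture the heuristic capacity control mentioned above is a simple aggregate counter; the sketch below is illustrative only, with an assumed cloud capacity, and is not AutoBAHN's actual accounting:

```python
# Illustrative heuristic: book capacity against an abstracted aggregate,
# because per-link utilisation inside the cloud is invisible.

ABSTRACT_CLOUD_CAPACITY = 10_000   # Mb/s, an assumed estimate
booked = 0

def try_book(bandwidth_mbps):
    """Accept a reservation only if the abstract aggregate allows it."""
    global booked
    if booked + bandwidth_mbps > ABSTRACT_CLOUD_CAPACITY:
        # Possibly a false refusal: a physical path may still have room,
        # but the abstraction cannot see it.
        return False
    booked += bandwidth_mbps       # may also over-admit one physical link
    return True

print(try_book(6_000))   # True
print(try_book(6_000))   # False: aggregate exhausted
```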

Alternative link capacity constraints

• Ingress/egress links limit the capacity allowed to be reserved in the network
  – Core network bandwidth is considered to be infinite
  – The network may refuse a reservation in case of insufficient available bandwidth
• The capacity allowed to be reserved is limited by a policy rule (sketched after this list)
  – All domain ingress/egress links are associated with one node
  – All end points are associated with a single node
  – The Technology Proxy must be able to translate DM port names to physical ones
  – An accurate capacity value in the policy may prevent reservation denials
  – Allows improved capacity control by network administrators
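
A minimal sketch of such a policy check, assuming a hypothetical rule format and link names, might look like:

```python
# Hypothetical policy-based admission check: all ingress/egress links hang
# off one abstract node, and a policy value caps reservable capacity.

POLICY_MAX_RESERVABLE = 8_000            # Mb/s, set by the administrators

edge_link_capacity = {"to-GEANT2": 10_000, "to-CBDF": 10_000}
reserved = {"to-GEANT2": 0, "to-CBDF": 0}

def admit(link, bandwidth_mbps):
    """Refuse when the edge link or the policy cap would be exceeded."""
    if reserved[link] + bandwidth_mbps > edge_link_capacity[link]:
        return False                     # not enough ingress/egress capacity
    if sum(reserved.values()) + bandwidth_mbps > POLICY_MAX_RESERVABLE:
        return False                     # policy rule protects the core
    reserved[link] += bandwidth_mbps
    return True

print(admit("to-GEANT2", 5_000))  # True
print(admit("to-CBDF", 5_000))    # False: policy cap would be exceeded
```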

AutoBAHN deployment in PIONIER network

Reservation process

How Circuits Are Created

• A user wants to have a circuit from some end point in GARR, terminated at a file server in the PIONIER network

• The GARR and GEANT2 domains provide their constraint sets to the PIONIER network, with a request to schedule a reservation to the end point from the selected ingress point

How Circuits Are Created

• The request is forwarded to the PIONIER DM, where the pathfinder process is executed and constraints for the local domain are produced

• The IDM analyses the constraints and defines global path attributes, which are sent to the DM in order to schedule the reservation

• Pathfinding is performed again, and a path of two nodes and four links is given as a result

• The links are validated in the Calendar module to confirm resource availability
• Then the resources are booked and the reservation is scheduled (see the sketch below)
• The IDM is informed about the successfully created reservation
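
The steps above can be condensed into a short sketch; the helper names and data shapes are hypothetical, but the order mirrors the slide: pathfinding, calendar validation, booking:

```python
# Sketch of the reservation flow described above (hypothetical helpers;
# the real IDM/DM interfaces differ).

calendar = {}          # link -> list of (start, end) bookings

def dm_pathfind(attrs):
    # In PIONIER the result is two nodes and four links.
    return {"nodes": [attrs["ingress"], attrs["egress"]],
            "links": ["endpoint-link", "ingress-link",
                      "egress-link", "interdomain-link"]}

def is_free(link, start, end):
    return all(end <= s or start >= e for s, e in calendar.get(link, []))

def schedule_reservation(req):
    # The IDM analyses constraints and defines global path attributes,
    # then asks the DM to schedule; the DM runs its pathfinder.
    path = dm_pathfind(req)
    # The Calendar module validates every link for the requested window.
    if not all(is_free(l, req["start"], req["end"]) for l in path["links"]):
        return "REFUSED"
    # The resources are booked and the reservation is scheduled.
    for l in path["links"]:
        calendar.setdefault(l, []).append((req["start"], req["end"]))
    return "SCHEDULED"   # the IDM is informed of success

print(schedule_reservation({"ingress": "xmr1", "egress": "xmr4",
                            "start": 10, "end": 20}))
```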

How Circuits Are Created

• At the reservation start time, the DM sends a request for circuit implementation to the TP

• The TP transforms the DM topology into the physical one and contacts the proper edge nodes to configure the end ports of the circuit

• The VLL is routed according to internal MPLS procedures (sketched below)
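
The TP's two duties at start time, name translation and edge-port configuration, might be sketched as follows; the port map and CLI command are invented for illustration:

```python
# Sketch of the Technology Proxy's role at reservation start time:
# translate abstract DM port names to physical ones, then configure the
# two edge nodes. Port map and CLI strings are hypothetical.

DM_TO_PHYSICAL = {
    "PIONIER.ep1:port1": ("xmr1", "ethernet 1/1"),
    "PIONIER.ep2:port3": ("xmr4", "ethernet 2/3"),
}

def provision_circuit(dm_port_a, dm_port_z, vlan):
    for dm_port in (dm_port_a, dm_port_z):
        node, phys_port = DM_TO_PHYSICAL[dm_port]
        # Only the edge ports are configured explicitly; the VLL itself
        # is routed across the core by the internal MPLS procedures.
        print(f"[{node}] configure {phys_port} vlan {vlan} vll-endpoint")

provision_circuit("PIONIER.ep1:port1", "PIONIER.ep2:port3", vlan=305)
```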

AutoBAHN deployment in PIONIER network

Potential usage

Potential AutoBAHN users in PIONIER

• SCARIe project
  – AutoBAHN provides connectivity for SCARIe research activities, interconnecting radio telescopes on a global scale
  – One of the radio telescopes is located physically next to the city of Toruń (PL) and is connected directly to the PIONIER infrastructure

Potential AutoBAHN users in PIONIER

• iTPV – Interactive Television may require dedicated circuits between data repositories

Potential AutoBAHN users in PIONIER

• Data storage infrastructures – multiple data storage infrastructures distributed in Poland may be connected on demand with dedicated links

Potential AutoBAHN users in PIONIER

• Telemedicine – dedicated circuits for high quality video streaming

• HPC centers interconnectivity in Poland
• VLAB – Virtual Laboratories
• Dedicated interconnectivity for distributed projects

Q&A

Thank you

GMPLS/G2MPLS in PIONIER network

Bartosz Belter [email protected]
Presented by: Radek Krzywania [email protected]
Poznań Supercomputing and Networking Center

BRIEF INTRODUCTION TO G2MPLS

What is G2MPLS?

G2MPLS is …
• a Network Control Plane architecture that implements the concept of Grid Network Services
  o GNS is a service that allows the provisioning of network and Grid resources in a single step, through a set of seamlessly integrated procedures
• expected to expose interfaces specific for Grid services
• made of a set of extensions to the standard GMPLS
  o provide enhanced network and Grid services for “power” users / apps (the Grids)

G2MPLS is not …
• an application-specific architecture; it aims to
  o support any kind of end-user application by providing network transport services and procedures that can fall back to the standard GMPLS ones
  o provide automatic setup and resiliency of network connections for “standard” users

• a uniform interface for the Grid user to trigger Grid & network resource actions
• single-step provisioning of Grid and network resources (w.r.t. the dual approach of Grid brokers + NRPS-es)
• adoption of well-established procedures for traffic engineering, resiliency and crankback
• possible integration of Grids in operational/commercial networks, by overcoming the limitation of Grids operating on dedicated, stand-alone network infrastructures

Grid nodes can be modelled as network nodes with node-level Grid resources to be advertised and configured (this is a native task for the GMPLS CP)
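
Under this model, a Vsite advertisement could carry Grid attributes alongside the usual TE ones; a hypothetical illustration:

```python
# Hypothetical illustration of "Grid node as network node": the same
# advertisement carries TE attributes and node-level Grid resources,
# so the GMPLS control plane can flood both natively.

vsite_advertisement = {
    "node_id": "VsiteA.node1",
    "te_attributes": {                   # ordinary GMPLS-style data
        "switching_capability": "Ethernet",
        "unreserved_bandwidth_mbps": 10_000,
    },
    "grid_resources": {                  # node-level Grid data
        "cpu_cores_free": 64,
        "memory_gb_free": 128,
        "storage_tb_free": 20,
    },
}
print(vsite_advertisement["grid_resources"])
```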

Why G2MPLS?

[Architecture diagram: G2MPLS and NRPS domains interconnected via G.I-NNI and G.E-NNI; Vsites A, B and C attached through G.O-UNI]

G2MPLS goals

G2MPLS will provide part of the functionalities related to the selection and co-allocation of both Grid and network resources (a toy interface sketch follows the list).

Co-allocation functionalities:
• Discovery and advertisement of Grid + network capabilities and resources of the participating virtual sites (Vsites)
• Service setup / teardown
  o coordination with the local job scheduler in the middleware
  o configuration of the involved network connections among the participating Vsites
  o (the network end point – TNA – might not be specified, if Grid resources are specified)
  o resiliency management for the installed network connections, and possible recovery escalation to the Grid MW for job recovery
  o advance reservations of Grid and network resources
• Service monitoring
  o retrieving the status of a job (Grid transaction) and of the related network connections
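
As a rough illustration of the single-step idea, the toy interface below co-allocates a job and a circuit in one call; names and fields are invented, not the real GNS/G.OUNI interface:

```python
# Toy sketch of a single-step Grid Network Service call (hypothetical
# interface; the real GNS works through G.OUNI and the middleware).

def discover_vsites(job):
    # Placeholder for Grid + network capability discovery/advertisement.
    return ["VsiteA", "VsiteC"]

def gns_request(job, endpoints=None, start=None, end=None):
    """Co-allocate Grid and network resources in one step.

    endpoints (TNAs) may be omitted: if only Grid resources are given,
    the service itself selects the Vsites and the network end points.
    """
    vsites = endpoints or discover_vsites(job)
    circuit = {"between": vsites, "window": (start, end)}
    return {"job": job, "circuit": circuit, "status": "RESERVED"}

print(gns_request({"cpu_cores": 32}, start=0, end=3600))
```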

GMPLS/G2MPLS DEPLOYMENT IN PIONIER NETWORK

ADVA FSP 3000RE-II (Lambda Switch)
• 15 pass-through ports
• 6 local ports
• 3 physical units

Calient DiamondWave (Fibre Switch)
• 60 ports
• 1 physical unit / 4 logical units (switch virtualization)

Foundry NetIron XMR 8000 (Ethernet Switch)
• 2 x 4-port 10GE modules (XFP)
• 1 x 24-port 1GE module (SFP)
• 3 physical units

G2MPLS test-bed – Transport Plane [1]

Three technology domains:
• LSC
• FSC
• Ethernet

Interconnections to other testbeds via GÉANT2
Grid sites are emulated with PCs connected to the testbed
Successful demonstration of G2MPLS features with the Distributed Data Storage System (DDSS)

G2MPLS test-bed – Transport Plane [2]

The Control Plane is implemented by a set of G2MPLS node controllers
• Each of them operates exclusively on a Transport Network element (real or derived from partitioning)

• Each controller is interfaced to the Transport Network equipment (Southbound Interface) through TL1 (ADVA, Calient) and SNMP (Foundry XMR)

• Node controllers run on an i386 32-bit platform with the Gentoo Linux distribution

Signaling Control Network (SCN)
• Transports signaling messages between the CP components

• Each G2MPLS controller exposes at least one interface on the Signaling Communication Network (SCN), over which the G2MPLS protocol messages flow (see the sketch below)

• The SCN is IP-based, with addresses from the private scope. IP tunnelling is used for out-of-band connectivity between controllers.
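
The southbound split (TL1 towards ADVA/Calient, SNMP towards the Foundry XMR) could be abstracted roughly as below; the class and method names are invented for this sketch, and no real TL1/SNMP library is used:

```python
# Rough sketch of the Southbound Interface split described above:
# TL1 towards ADVA/Calient, SNMP towards the Foundry XMR. Class and
# method names are invented; messages are only printed, not sent.

class Tl1Southbound:
    """Speaks TL1 (a line-oriented text protocol) to optical equipment."""
    def __init__(self, host):
        self.host = host
    def connect_crosspoint(self, src, dst):
        print(f"TL1 to {self.host}: ENT-CRS::{src},{dst}:1;")  # illustrative

class SnmpSouthbound:
    """Drives an Ethernet switch through SNMP set operations."""
    def __init__(self, host):
        self.host = host
    def set_vlan(self, port, vlan):
        print(f"SNMP SET to {self.host}: port {port} -> VLAN {vlan}")

# One controller per Transport Network element, as on this slide.
controllers = [Tl1Southbound("adva1"), Tl1Southbound("calient1"),
               SnmpSouthbound("xmr1")]
controllers[0].connect_crosspoint("1-1-1", "1-1-5")
controllers[2].set_vlan(port="1/1", vlan=305)
```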

G2MPLS test-bed – Control Plane [1]

The configuration of the G2MPLS CP requires mapping the actual physical topology into the configuration files associated with each of the G2MPLS processes. Due to the complexity of the whole CP design, the following picture covers only a part of the CP configuration (the FSC technology domain):

[Diagram: CP configuration of the FSC technology domain]

G2MPLS test-bed – Control Plane [2]

25 test-cards divided into three main areas:
• LSP signalling
  o Validates the components of the stack involved in LSP signalling:
    • G2.RSVP-TE, LRM, TNRC, SCNGW
• G2MPLS call signalling
  o Validates the components of the stack involved in call signalling:
    • Intra-domain scope: G2.NCC, RC and G2.RSVP-TE
    • Inter-domain scope: G2.NCC and G.ENNI-RSVP
• G2MPLS routing
  o Validates the components of the stack involved in routing:
    • Intra-domain scope: G2.OSPF-INNI, G2.OSPF-UNI, LRM, SCNGW
    • Inter-domain scope: G2.OSPF-INNI, G2.OSPF-ENNI, G2.OSPF-UNI, LRM, SCNGW

• A multi-domain test-bed was required to validate some of the features; this was achieved by interconnecting the local test-beds of PIONIER (Poland) and the University of Essex (UK) via the GÉANT2 network

G2MPLS in PIONIER – functional tests

LSP signalling tests

No | Test Card      | Test name                                                           | Status
1  | G2MPLS-TC-1.1  | Network node initialization                                         | Passed
2  | G2MPLS-TC-1.2  | Transport Plane notifications from the network node                 | Passed
3  | G2MPLS-TC-1.3  | Setup of one bidirectional LSP                                      | Passed
4  | G2MPLS-TC-1.4  | Tear down of one bidirectional LSP from HEAD node                   | Passed
5  | G2MPLS-TC-1.5  | Tear down of one bidirectional LSP from TAIL node                   | Passed
6  | G2MPLS-TC-1.6  | Unsuccessful bidirectional LSP setup (failure in HEAD node)         | Passed
7  | G2MPLS-TC-1.7  | Unsuccessful bidirectional LSP setup (failure in intermediate node) | Passed
8  | G2MPLS-TC-1.8  | Unsuccessful bidirectional LSP setup (failure in TAIL node)         | Passed
9  | G2MPLS-TC-1.9  | Setup of one bidirectional LSP with advance reservation             | Passed
10 | G2MPLS-TC-1.10 | Tear down of one bidirectional LSP with advance reservation from HEAD node | Passed

LSP signalling tests

All tests have been done on the LSC/FSC/Ethernet nodes

Intra-domain G2MPLS call signalling tests

No | Test Card      | Test name                                                              | Status
11 | G2MPLS-TC-2.1  | Setup of one bidirectional single-domain LSP by G2.NCC module          | Passed
12 | G2MPLS-TC-2.2  | Teardown of the one bidirectional single-domain LSP by G2.NCC module   | Passed
13 | G2MPLS-TC-2.3  | Setup of one bidirectional single-domain LSP by G2.CCC module          | Passed
14 | G2MPLS-TC-2.4  | Teardown of the one bidirectional single-domain LSP by G2.CCC module   | Passed
15 | G2MPLS-TC-2.5  | Setup of one bidirectional single-domain LSP by G.UNI-GW module        | Passed
16 | G2MPLS-TC-2.6  | Teardown of the one bidirectional single-domain LSP by G.UNI-GW module | Passed
17 | G2MPLS-TC-2.7  | Setup of one bidirectional single-domain LSP by Middleware WS-Agreement client | Passed
18 | G2MPLS-TC-2.8  | Teardown of the one bidirectional single-domain LSP by Middleware WS-Agreement client | Passed

Inter-domain G2MPLS call signalling tests

No | Test Card      | Test name                                                      | Status
19 | G2MPLS-TC-2.9  | Setup of one bidirectional inter-domain LSP by G2.CCC          | Passed
20 | G2MPLS-TC-2.10 | Teardown of the one bidirectional single-domain LSP by G2.CCC  | Passed

G2MPLS call signalling tests

All tests have been done on the LSC/FSC/Ethernet nodes

Intra-domain G2MPLS routing tests

No | Test Card      | Test name                                                                 | Status
21 | G2MPLS-TC-3.1  | I-NNI G2.OSPF-TE instance initialization                                  | Passed
22 | G2MPLS-TC-3.2  | Distribution of TE information through the G.I-NNI interfaces             | Passed
23 | G2MPLS-TC-3.3  | Distribution of Grid information through the G.UNI and G.I-NNI interfaces | Passed

Inter-domain G2MPLS routing tests

No | Test Card      | Test name                                          | Status
24 | G2MPLS-TC-3.4  | Routing information exchange between adjacent RAs  | Passed
25 | G2MPLS-TC-3.5  | Grid information exchange between adjacent RAs     | Passed

G2MPLS routing tests

All tests have been done on the LSC/FSC/Ethernet nodes

Currently, the Open Source G2MPLS protocol stack supports representatives of three main technology areas: LSC, FSC and Ethernet
• The stack is extendable: quick and simple development of extensions in support of different vendors and equipment
• Extensions for cheap Ethernet switches are expected soon

G2MPLS is developed to support UNICORE, but GLOBUS extensions are expected soon

Summary

G2MPLS allows running any kind of application, even ones not bridged by Grid Middleware. It is possible to connect an application directly to the network through G.OUNI, bypassing the existing gateways developed for UNICORE
• CORBA interfaces allow easy plug&play of external applications in the G2MPLS framework

PHOSPHORUS G2MPLS is backward compatible with ASON/GMPLS
• Provides “legacy” ASON/GMPLS transport services and procedures
• This compliance fosters the possible integration of Grids in operational and/or commercial networks

Summary

Q&A

Thank you