

PROPRIETARY RIGHTS STATEMENT

THIS DOCUMENT CONTAINS INFORMATION WHICH IS PROPRIETARY TO THE COMBO CONSORTIUM. NEITHER THIS DOCUMENT NOR THE INFORMATION CONTAINED HEREIN SHALL BE USED, DUPLICATED OR COMMUNICATED BY ANY MEANS TO ANY THIRD PARTY, IN WHOLE OR IN PARTS, EXCEPT WITH THE PRIOR WRITTEN CONSENT OF THE COMBO CONSORTIUM. THIS RESTRICTION LEGEND SHALL NOT BE ALTERED OR OBLITERATED ON OR FROM THIS DOCUMENT.

Deliverable D6.3 - Report describing results of operator testing, capturing

lessons learned and recommendations

Grant Agreement number: 317762

Project acronym: COMBO

Project title: COnvergence of fixed and Mobile BrOadband access/aggregation networks

Funding Scheme: Collaborative Project – Integrated Project

Date of latest version of the Deliverable 6.3: 22-07-2016

Delivery Date: 27 July 2016

Leader of the Deliverable: ADVA

File Name: COMBO_D6.3_WP6_2016-07-22_ADVA_v1.0.docx

Version: V1.0

Authorisation code: PU = Public

Project coordinator name, title and organisation: Jean-Charles Point, JCP-Connect

Tel: + 33 2 23 27 12 46

E-mail: [email protected]

Project website address: www.ict-combo.eu

Page 2: COMBO Deliverable D6.3 - Report describing results of

Grant Agreement N°: 317762

D6.3 – Report describing results of operator testing, capturing lessons learned and recommendations

Doc. ID: COMBO_D6.3_WP6_2016-07-22_ADVA_v1.0.docx Page 2 of 192 Version: 1.0

Executive Summary of the Deliverable

Work Package 6 is responsible for validating and demonstrating specific examples of both structural and functional Fixed and Mobile Convergence (FMC) concepts within the architectures defined by COMBO. This deliverable reports on the activities related to the development and implementation of the structural and functional convergence use cases in an integrated demonstration platform, which was finally exhibited at a public demonstration event on April 28th in Lannion, France. The report includes the description of the test cases and the results of the operator testing, capturing lessons learned and recommendations.

The demonstration components and implementation plans are specified in deliverable D6.2 [1]. The demonstration activities presented herein are structured into:

Structural Convergence – exploring network topology and technologies for fixed lines (including Wi-Fi), mobile backhaul and fronthaul convergence.

Functional Convergence – exploring network functions (for both control and data plane) that can be consolidated into common and unified functionalities enabling the network convergence.

The structural convergence demonstration activities aim at realizing a proof-of-concept and validation of selected candidate technologies for a common transport structure for fixed line access, mobile backhaul and fronthaul, namely the DWDM-centric architecture and the two different flavours of WDM-PON (i.e., wavelength-routed and wavelength-selective), which are detailed in deliverable D3.3 [2].

With respect to functional convergence, the major goals are to demonstrate two implementation variants of the COMBO Universal Access Gateway (UAG) – a functional network entity located in the Next Generation Point of Presence (NG-POP) introduced by the COMBO project and thoroughly discussed within WP3. These implementations may be realized in either a centralized or a distributed fashion, as also studied in WP3. Moreover, selected functional convergence (network) elements, namely the virtualized Evolved Packet Core (vEPC), universal authentication (uAUT), universal data path management (uDPM), and caching, are demonstrated in integrated use cases.

The conducted demonstrations use a specific embodiment of the designed and developed UAG. Specifically, key fixed and mobile network functions that today operate separately, such as authentication and data path management, are unified and/or integrated within the same box, aiming at providing an effective FMC approach. To this end, the UAG implementation includes a Network Function Virtualization (NFV) server, which allows hosting and instantiating a number of virtualized network functions developed by COMBO partners.
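As a minimal illustration of the hosting model described above, the following sketch models an NFV server instantiating several VNFs on a single physical UAG subject to a compute budget. The class name, VNF names, and vCPU figures are hypothetical and only illustrative; the actual demonstration used an OpenStack-based NFV server.

```python
# Illustrative sketch only: how an NFV server inside the UAG could track
# the VNFs it hosts. Names and vCPU counts are hypothetical, not taken
# from the COMBO demonstration configuration.

class NfvServer:
    """Minimal NFV host: instantiates VNFs subject to a vCPU budget."""

    def __init__(self, vcpus: int):
        self.free_vcpus = vcpus
        self.instances = {}  # VNF name -> vCPUs reserved

    def instantiate(self, name: str, vcpus: int) -> bool:
        # Refuse if capacity is insufficient or the VNF already runs.
        if vcpus > self.free_vcpus or name in self.instances:
            return False
        self.free_vcpus -= vcpus
        self.instances[name] = vcpus
        return True

# Host the functional-convergence VNFs on one physical UAG element.
uag = NfvServer(vcpus=8)
for vnf, cores in [("vEPC", 4), ("uAUT", 1), ("uDPM", 2)]:
    assert uag.instantiate(vnf, cores)
```

In the actual platform the equivalent step is performed by the OpenStack virtualization layers shown in Figure 5, rather than by a hand-written scheduler like this one.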

The key achievement of this deliverable is thus to document the successful integration of all demonstration entities specified in D6.2 [1] and overviewed above in a common demonstration platform integrated in Lannion. The conducted experimental and demonstration activities are a proof of concept for the theoretical and conceptual work


undertaken by COMBO. The main findings of the demonstration are summarized and will be fed back to WP3. Last but not least, the public demonstration event culminated the work done in WP6 and constituted an important dissemination and marketing event for COMBO. The organization and outcomes of this public event resulted in several follow-up dissemination activities performed by WP7.

To summarize, the key outcomes and lessons learned from this COMBO experimental work are the following:

Development and experimental validation of selected COMBO FMC concepts from structural and functional convergence perspective within the UAG/NG-POP framework;

Three selected structural convergence technologies were successfully demonstrated, either shown in a distributed scenario with remote connectivity to centralized functions or used for locally connecting the functional convergence elements;

Developments for both distributed and centralized NG-POP architectures were successfully carried out:

o Distributed NG-POP: adopting a split UAG with collocated Data Plane (DP) and Control Plane (CP);

o Centralized NG-POP: adopting a split UAG with the CP distant from the DP;

The UAG architecture supports fixed and mobile CP and DP functions for any access network type (i.e., fixed, mobile, Wi-Fi). Key functional blocks are:

o vEPC: instantiation of EPCs as VNFs in a UAG;

o uAUT: unified authentication function (Wi-Fi and mobile);

o uDPM: multi-path and traffic offloading, effective handover and caching;

Regardless of the NG-POP approach, the functional entity (i.e., the UAG) was designed and deployed aiming at leveraging current networking trends such as:

o Centralized SDN control, in particular in the centralized NG-POP demo. Specifically, negotiated EPS bearers are communicated via an NBI to the SDN controller to automatically trigger the backhaul configuration within the aggregation network connecting the RAN and the mobile core;

o Exploitation of NFV, in both the distributed and centralized NG-POP test cases. Multiple VNFs have been instantiated within a physical UAG element. By doing so, real convergence is attained in specific control and data plane functions through the implemented uDPM and uAUT functional blocks.
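The bearer-to-backhaul mapping outlined above can be sketched as follows: after EPS bearer negotiation, the control plane builds a flow-provisioning request and posts it to the SDN controller's NBI. The endpoint, field names, and priority mapping below are hypothetical illustrations, not the interface of the controller actually used in the demo.

```python
import json

def bearer_to_nbi_request(qci: int, enb_ip: str, sgw_ip: str, mbps: int) -> str:
    """Map a negotiated EPS bearer to a (hypothetical) flow-provisioning
    request for the aggregation network between the RAN and mobile core."""
    body = {
        "flow": {
            "src": enb_ip,           # eNodeB side of the backhaul path
            "dst": sgw_ip,           # mobile-core (S-GW) side
            "bandwidth_mbps": mbps,  # capacity to reserve
            "priority": 9 - qci,     # illustrative mapping from the QCI
        }
    }
    # The demo's controller would receive such a body over its NBI, e.g.:
    # requests.post("http://<controller>/nbi/flows", json=body)
    return json.dumps(body)

print(bearer_to_nbi_request(qci=9, enb_ip="10.0.1.10", sgw_ip="10.0.2.1", mbps=50))
```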


List of Authors

Full Name (E-mail) – Company – Country Code

Achim Autenrieth ([email protected]) [WPL] [Editor] – ADVA – DE

Stefan Zimmermann ([email protected]) [Co-editor] – ADVA – DE

Jim Zou ([email protected]) – ADVA – DE

Bogdan-Mihai Andrus ([email protected]) – ADVA – DE

Péter Olaszi ([email protected]) – AITIA – HU

Tibor Cinkler ([email protected]) – BME – HU

Akos Ladanyi ([email protected]) – BME – HU

Ricardo Martinez ([email protected]) [TL] – CTTC – SP

Manuel Requena ([email protected]) – CTTC – SP

Arturo Mayoral ([email protected]) – CTTC – SP

Ricard Vilalta ([email protected]) – CTTC – SP

Zere Ghebretensaé ([email protected]) [TL] – EAB – SE

Björn Skubic ([email protected]) – EAB – SE

Alberto Pineda ([email protected]) – FON – SP

Yaning Liu ([email protected]) – JCP – FR

Bertrand Le Guyader ([email protected]) – ORANGE – FR

Daniel Abgrall ([email protected]) – ORANGE – FR

Xavier Grall ([email protected]) – ORANGE – FR

Jose V. Galan ([email protected]) – TELNET – SP

Enrique Masgrau ([email protected]) – TELNET – SP

Stefan Höst ([email protected]) – ULUND – SE


List of Reviewers

Full Name (E-mail) Company – Country Code

Klaus Grobe ([email protected]) ADVA – DE

Anthony Magee ([email protected]) ADVA – UK

Jose Torrijos ([email protected]) TID – SP

Approval

Role – Full Name (E-mail) – Company – Country Code – Date

Task Leader – Ricardo Martinez ([email protected]) – CTTC – SP – 22/07/2016

Task Leader – Zere Ghebretensaé ([email protected]) – EAB – SE – 22/07/2016

Task Leader – Bertrand Le Guyader ([email protected]) – ORANGE – FR – 22/07/2016

WP Leader – Achim Autenrieth ([email protected]) – ADVA – DE – 22/07/2016

Technical Leader – Stephane Gosselin ([email protected]) – ORANGE – FR – 22/07/2016

Project Coordinator – Jean-Charles Point ([email protected]) – JCP-Connect – FR – 22/07/2016

Other (PMC, SC, etc.)


Document History

Edition Date Modifications / Comments Author

Internal 0.1 20/05/2016 Document Template & TOC Achim Autenrieth

Internal 0.5 17/06/2016 Review Version Achim Autenrieth

External 1.0 22/07/2016 External Version for Release Achim Autenrieth

Distribution List

Full Name or Group – Company – Date

PMC – Public deliverable (will be made available through the COMBO website)

SC

Other


Table of Content

EXECUTIVE SUMMARY OF THE DELIVERABLE .................................................................. 2

LIST OF AUTHORS ................................................................................................................... 4

LIST OF REVIEWERS ............................................................................................................... 5

APPROVAL ................................................................................................................................ 5

DOCUMENT HISTORY ............................................................................................................. 6

DISTRIBUTION LIST ................................................................................................................. 6

TABLE OF CONTENT ............................................................................................................... 7

LIST OF FIGURES ................................................................................................................... 11

GLOSSARY ............................................................................................................................. 13

1 INTRODUCTION .................................................................................................................... 15

1.1 OVERALL OBJECTIVE OF DEMONSTRATION .................................................................................... 16

1.2 OVERALL INTEGRATED DEMO SETUP ........................................................................................... 17

1.2.1 NETWORK DESIGN ......................................................................................................................... 18

1.2.2 NETWORK ADDRESSING SCHEME ...................................................................................................... 19

1.2.3 ACCESS NETWORK CONNECTIVITY ..................................................................................................... 20

1.3 NFV SERVER SETUP ................................................................................................................ 21

1.4 REMOTE CONNECTIVITY ........................................................................................................... 23

1.4.1 REMOTE CONNECTIVITY INFRASTRUCTURE ......................................................................................... 23

1.4.2 OPENVPN EXAMPLE ...................................................................................................................... 24

1.4.3 FIRST VALIDATIONS ........................................................................................................................ 25

1.4.4 REMOTE CONNECTIVITY INFRASTRUCTURE FOR DWDM-CENTRIC DEMO ................................................ 25

2 TEST CASES OVERVIEW AND RESULTS SUMMARY .................................................................. 27


2.1 TEST CASE DEFINITIONS ........................................................................................................... 27

2.2 TEST PLAN TEMPLATE ............................................................................................................. 28

2.3 TEST RESULT OVERVIEW ........................................................................................................... 28

3 DEMO BOOTHS AND DEMONSTRATION SHOW CASES ............................................................ 31

3.1 DISTRIBUTED NG-POP (DNG-POP)........................................................................................... 31

3.1.1 VALIDATION OF THE WS-WDM-PON BASED CONVERGENCE SOLUTION ................................................. 32

3.1.2 VALIDATION OF THE WR-WDM-PON-BASED CONVERGENCE SOLUTION ................................................ 37

3.1.3 VALIDATION OF FLEXIBILITY AND PROGRAMMABILITY OF THE DWDM-CENTRIC-BASED CONVERGENCE SOLUTION .............. 38

3.2 CENTRALIZED/DISTRIBUTED NG-POP (C/DNG-POP) ..................................................................... 42

3.2.1 VALIDATION OF A VEPC INSTANCE WITHIN THE DISTRIBUTED NG-POP .................................................. 43

3.2.2 DEMONSTRATION OF THE SDN ORCHESTRATOR FOR A MULTI-LAYER AGGREGATION NETWORK IN THE CENTRALIZED NG-POP ............................................................................... 46

3.3 UNIVERSAL AUTHENTICATION AND DATA PATH MANAGEMENT (UAUT & UDPM) ................................ 50

3.3.1 UNIVERSAL AUTHENTICATION DEMONSTRATION ................................................................................. 50

3.3.2 MULTI-PATH ENTITY DEMONSTRATION ............................................................................................. 54

3.4 UNIVERSAL DATA PATH MANAGEMENT (UDPM) & CACHING .......................................................... 60

3.4.1 NETWORK CONTROLLED OFFLOAD AND SMOOTH HANDOVER ............................................................... 60

3.4.2 IN-NETWORK CONTENT CACHING ..................................................................................................... 63

3.4.3 CONTENT PREFETCHING INITIATED BY DE........................................................................................... 65

4 FINAL DEMONSTRATION IN LANNION ................................................................................... 67

4.1 DEMONSTRATION SETUP DESCRIPTION ........................................................................................ 67

4.2 EXPERIMENTAL VALIDATION AND RESULTS ................................................................................... 68

5 RESULTS, LESSONS LEARNED, AND RECOMMENDATIONS FROM INTEGRATED DEMONSTRATION 71

6 CONCLUSION ........................................................................................................................ 77

7 REFERENCES ......................................................................................................................... 80

8 ANNEX 1 - STRUCTURAL CONVERGENCE TEST PLAN AND RESULTS.......................................... 81


8.1 WS-WDM-PON ARCHITECTURE ............................................................................................... 81

8.1.1 WS-WDM_DP_1 ........................................................................................................................ 81

8.1.2 WS-WDM_DP_2 ........................................................................................................................ 88

8.1.3 WS-WDM_DP_3 ........................................................................................................................ 91

8.1.4 WS-WDM_CM_1 ....................................................................................................................... 97

8.1.5 WS-WDM_CM_2 ..................................................................................................................... 101

8.1.6 WS-WDM_CM_3 ..................................................................................................................... 106

8.2 WR-WDM-PON ARCHITECTURE .............................................................................................. 115

8.2.1 WR-WDM-PON_DP_1 ............................................................................................................. 115

8.2.2 WR-WDM-PON_DP_2 ............................................................................................................. 118

8.2.3 WR-WDM-PON_DP_3 ............................................................................................................. 120

8.2.4 WR-WDM-PON_DP_4 ............................................................................................................. 122

8.2.5 WR-WDM-PON_DP_5 ............................................................................................................. 124

8.2.6 WR-WDM-PON_DP_6 ............................................................................................................. 126

8.2.7 WR-WDM-PON_DP_7 ............................................................................................................. 129

8.2.8 WR-WDM-PON_CM_1 ............................................................................................................ 131

8.2.9 WR-WDM-PON_CM_2 ............................................................................................................ 133

8.3 DWDM-CENTRIC ARCHITECTURE .............................................................................................. 135

8.3.1 DWDM-CENTRIC_DP_1 ............................................................................................................. 135

8.3.2 DWDM-CENTRIC_DP_2 ............................................................................................................. 137

8.3.3 DWDM-CENTRIC_DP_3 ............................................................................................................. 139

8.3.4 DWDM-CENTRIC_CM_1 ............................................................................................................ 140

9 ANNEX 2 - FUNCTIONAL CONVERGENCE TEST PLAN AND RESULTS ......................................... 142

9.1 UAG FUNCTIONALITY............................................................................................................. 142

9.1.1 TEST CASE UAG_TC_1 ................................................................................................ 142

9.1.2 TEST CASE UAG_TC_2 ................................................................................................ 144

9.2 VEPC FUNCTIONALITY ............................................................................................................ 146

9.2.1 TEST CASE VEPC_TC_1 ............................................................................................... 146

9.2.2 TEST CASE VEPC_TC_2 ................................................................................................................ 148

9.3 UAUT FUNCTIONALITY ........................................................................................................... 155

9.3.1 TEST CASE UAUT_TC_1 .............................................................................................. 155


9.3.2 TEST CASE UAUT_TC_2 ............................................................................................................... 160

9.3.3 TEST CASE UAUT_TC_3 .............................................................................................. 162

9.4 UDPM FUNCTIONALITY .......................................................................................................... 166

9.4.1 TEST CASE UDPM_TC_1 .............................................................................................................. 166

9.4.2 TEST CASE UDPM_TC_2 .............................................................................................................. 168

9.4.3 TEST CASE UDPM_TC_3 .............................................................................................................. 170

9.4.4 TEST CASE UDPM_TC_4 .............................................................................................................. 172

9.5 CACHING FUNCTIONALITY ........................................................................................................ 173

9.5.1 TEST CASE CACHING_TC_1 ........................................................................................................... 173

9.5.2 TEST CASE CACHING_TC_2 ........................................................................................................... 175

9.6 CENTRALIZED NG-POP ARCHITECTURE UNITARY TESTS ................................................................... 178

9.6.1 TEST CASE CNGPOP_DP_1 .......................................................................................................... 181

9.6.2 TEST CASE CNGPOP_DP_2 .......................................................................................................... 183

9.6.3 TEST CASE CNGPOP_CP_1 .......................................................................................................... 186

9.6.4 TEST CASE CNGPOP_CP_2 .......................................................................................................... 189

9.6.5 TEST CASE CNGPOP_CP_3 .......................................................................................................... 191


List of Figures

Figure 1 Overall Demo Setup and demo components .............................................. 17

Figure 2 Network design ........................................................................................... 18

Figure 3 Addressing plan .......................................................................................... 20

Figure 4 Access connectivity and external connectivity plan .................................... 21

Figure 5 OpenStack NFV Server Virtualization Layers ............................................. 22

Figure 6 Internal configuration of the NFV Server .................................................... 22

Figure 7 Remote connectivity infrastructure for the integration phase ...................... 24

Figure 8 ADVA / CTTC OpenVPN Setup Example ................................................... 24

Figure 9 Remote connectivity of DWDM-centric demo ............................................. 26

Figure 10 Global logical view of demo setups: network entities, technologies and key functions ................................................................................................................... 31

Figure 11 dNG-POP with integrated WS-&WR-WDM-PON ...................................... 32

Figure 12 Setup for the WS-WDM-PON demo platform ........................................... 33

Figure 13 WS-WDM PON and GPON OLT .............................................................. 34

Figure 14 GPON (left) and WS-WDM-PON ONUs ................................................... 34

Figure 15: WS-WDM-PON and GPON equipment deployed in Lannion in the Distributed NG-PoP demo booth (top), uAUT booth (center) and vEPC booth (bottom) ..... 35

Figure 16 System diagram of WR-WDM-PON .......................................................... 38

Figure 17 Setup for the DWDM centric demo ........................................................... 39

Figure 18 RBSs and optical platform used in the DWDM-centric solution ................ 40

Figure 19 The SDN controller and client laptops used to validate the DWDM-centric solution ..................................................................................................................... 41

Figure 20 Setup for the vEPC deployment in the distributed NG-POP ..................... 44

Figure 21 Integration of the vEPC in the global COMBO distributed NG-POP validation ................................................................................................................................. 45

Figure 22 Physical setup at Lannion Demo Day of the vEPC integration within distributed NG-POP validation .................................................................................. 46

Figure 23 Setup for the SDN orchestrator coordinating aggregation network and vEPC control at the UAG located in the centralized NG-POP ............................................ 47

Figure 24 View of the CTTC Testbed (remotely reached from Lannion) to conduct the centralized NG-POP validation ................................................................................. 49

Figure 25 Architecture of the uAUT .......................................................................... 50


Figure 26 Hardware setup for the uAUT ................................................................... 51

Figure 27 eNodeB, Remote Radio Unit, and Wi-Fi AP in the demonstration ............ 52

Figure 28 Logical connection of the uAUT with the rest of the deployment .............. 53

Figure 29 Logical connection of the uAUT with the rest of the deployment .............. 54

Figure 30 The setup for multi-path management. ..................................................... 55

Figure 31 Management window at UAG (left) and the UE (right) .............................. 56

Figure 32 Router settings at the UE ......................................................................... 57

Figure 33 Set address translation (NAT) for sessions initiated at the UE ................. 57

Figure 34 Firewall settings in the UAG to allow balancing of sessions initiated from the outside to the UE. ..................................................................................................... 57

Figure 35 The request application at the content server ........................................... 58

Figure 36 Transmit application at the UE, sending a sequence of requests to the content server ........................................................................................................... 58

Figure 37 Traffic monitoring for the test setup .......................................................... 59

Figure 38 Setup for the MPE demonstration. ............................................................ 60

Figure 39 Components involved in the Network Controlled Offload demonstration .. 61

Figure 40 Interface selection client application ......................................................... 62

Figure 41 Caching/prefetching test case deployment ............................................... 63

Figure 42 COMBO caching/prefetching system architecture .................................... 64

Figure 43 Experimental validation networking setup ................................................ 68

Figure 44 LTE attach procedure (Wireshark capture trace) ...................................... 69

Figure 45 Logical view of the setup: MLN aggregation backhaul network between LTE eNodeBs and the UAG’s vEPC @ centralized NG-POP ........................................ 178

Figure 46 Physical view of the setup deployed in the CTTC ADRENALINE testbed (Barcelona, Spain) .................................................................................................. 179


Glossary

3G 3rd Generation (mobile service)

AAA Authentication, Authorization, and Accounting

ABNO Application-Based Network Operation

ANQP Access Network Query Protocol

APD Avalanche Photo Diode

API Application Programming Interface

AWG Arrayed Waveguide Grating

BBU Base Band Unit

BER Bit Error Rate

CO Central Office

COMBO COnvergence of fixed and Mobile BrOadband access/aggregation networks

CPE Customer Premises Equipment

CPRI Common Public Radio Interface

C-RAN Cloud (or Centralized) Radio Access Network

CS Content Server

DWDM Dense Wavelength Division Multiplexing

EMBS Elastic Mobile Broadband Service

eNB evolved Node B (base station)

EAP Extensible Authentication Protocol

EPC Evolved Packet Core

EPS Evolved Packet System

FMC Fixed Mobile Convergence (Converged)

FTTH Fibre to the Home

GMPLS Generalized Multi-Protocol Label Switching

GPON Gigabit capable Passive Optical Network

GPRS General Packet Radio Service

GTP GPRS Tunnelling Protocol

GUI Graphical User Interface

IP Internet Protocol

ITU-T

International Telecommunication Union – Telecommunication Standardisation Sector

KPI Key Performance Indicator

LENA LTE-EPC Simulator/Emulator

LSP Label Switched Path

LTE Long Term Evolution (3GPP standard)

MAC Media Access Control

MME Mobility Management Entity

MPE Multi-Path Entity

MPLS-TP Multi-Protocol Label Switching – Transport Profile

MPTCP Multipath Transmission Control Protocol

NFV Network Function Virtualisation

NGOA Next Generation Optical Access

NGPON2 Next Generation Passive Optical Network 2

NG-POP Next Generation Point of Presence

ODL Open Daylight

ODN Optical Distribution Network

ONF Open Networking Foundation

ONU Optical Network Unit


OOBM Out-of-Band Management

OTN Optical Transport Network

PCE Path Computation Element

PCEP Path Computation Element Communication Protocol

PDN Packet Data Network

P-GW Packet Data Network Gateway

PIANO+ European Commission Framework 7 Program – PIANO+

PTP Point to Point

QCI QoS Class Indicator

QoS Quality of Service

RAN Radio Access Network

RBS Radio Base Station

ROADM Reconfigurable Optical Add Drop Multiplexer

RRH Remote Radio Head

RRU Remote Radio Unit

S11 Reference Point between MME and SGW in LTE

SDN Software Defined Networking

SFP+ Enhanced Small Form-factor Pluggable

SGi Reference Point between PDN Gateway and the packet data network in LTE

S-GW Serving Gateway

SLA Service Level Agreement

SMSR Side Mode Suppression Ratio

SON Self-Organising Network

TEID Tunnel Endpoint Identifier

TP Transponder

UAG Universal Access Gateway

uAUT Universal Authentication

UDP User Datagram Protocol

UDPM Universal Data Path Manager

UDR User Data Repository

UE User Equipment

ULL Ultra Low Latency

UMTS Universal Mobile Telecommunications System

VNF Virtual Network Function

VOA Variable Optical Attenuator

WDM-PON Wavelength Division Multiplexing-Passive Optical Network

Wi-Fi Wireless Local Area Network – Commercial name

WLL Wavelength Locker

WSON Wavelength Switched Optical Network

WR Wavelength Routed

WS Wavelength Selective

WSS Wavelength Selective Switch


1 Introduction

This document reports the successful integration of the structural and functional convergence components in a single, integrated demonstration held in Lannion. In particular, it presents the results, lessons learned, and derived recommendations from the operator testing and experimental activities. The document also includes the detailed test plan that served as the basis for the integration of the demo components in Lannion during March and April 2016. It is worth noting that the test plan was continuously and iteratively extended and updated. Results and performance parameters of the tests, demonstrations and experimental activities are summarized in the test plan contained in the document’s annex. The structural and functional convergence showcases were realized in a number of demo booths, and presented at the final public demonstration event on April 28th 2016 in Lannion, France.

The document is structured as follows: the Introduction first states the overall demonstration objective. Next, the preparatory work for the integrated demonstration setup in Lannion is described, with special attention to the UAG’s NFV server setup. In this regard, the preliminary experience gained via the remote connectivity platform is also detailed; this platform aimed to anticipate and speed up the integration of the network functionalities contributed by the different partners, which were eventually interconnected in Lannion. In brief, section 1 primarily covers and documents the preparation and design activities that paved the way for the targeted integration phase.

Section 2 addresses the experimental and testing activities conducted (physically) in Lannion. It provides an overview of the planned test cases along with a short summary of the obtained results. For the sake of completeness, the test cases themselves, with the full description of the test setup, objective, testing procedure, results and observations, are detailed in Annex 1 for structural convergence and Annex 2 for functional convergence.

Section 3 describes the set of demo booth setups and demonstration showcases presented on the demo event day, April 28th.

Section 4 provides a summary of the so-called “final demonstration”, in which a set of heterogeneous (functional and structural convergence) experimental activities was integrated into a single case study. The purpose was to perform quantitative experiments covering multiple structural and functional elements of the demonstration platform rolled out in Lannion.

Section 5 summarizes the attained results of the operator testing and experimental activities, lists the lessons learned, and gives a set of recommendations for achieving feasible FMC developments. Finally, Section 6 concludes the document.


1.1 Overall objective of demonstration

Until February 2016, the implementation and demonstration of the structural and functional components were conducted standalone by each COMBO WP6 partner. That is, each involved partner developed and system-tested its individual components according to the agreed requirements and recommendations, but the targeted integration and interworking between pieces provided by different partners was not yet ensured. To prepare for this integration, a first remote network integration test was developed, mainly for the functional components. In particular, the remote connectivity aimed to validate the VLAN setup, the UAG’s NFV server connectivity, and the instantiation of the devised Virtual Network Functions (VNFs) hosted in Virtual Machines (VMs) deployed within the NFV OpenStack server. The status of the considered demo components, an initial test plan, and first results were documented in Milestone MS20. This milestone, and especially the defined test plan, was an important tool to coordinate the subsequent work among partners more effectively and to prepare the final demonstration.

However, the main challenge, and a requirement for validating the structural and functional components, is their integration in a single experiment. In this sense, recall that the ultimate objective of the demonstration activities defined in COMBO WP6 is to integrate the individual components developed by the COMBO partners, and to perform proof-of-concept validations and performance evaluations over the resulting platform. This deliverable reports the work of the partners to prepare and execute the targeted integrated demo, along with their own contributions and achievements.

Figure 1 shows the overall demo setup with the three structural convergence components:

Wavelength-Switched WDM PON (WS-WDM-PON)

Wavelength-Routed WDM PON (WR-WDM-PON)

DWDM-centric architectures (DWDM-centric)

and the multiple functional convergence components:

Universal Access Gateway (UAG)

Virtual Evolved Packet Core (vEPC)

Universal Authentication (uAUT)

Universal Data Path Management (uDPM)

Caching


Figure 1 Overall Demo Setup and demo components

The implemented COMBO distributed NG-POP (concept described in [4]) gathers in the same location the access network terminations of different technologies (such as the OLTs and the BBUs), the aggregation switches (i.e., the Carrier Ethernet switch and the Low Latency switch) and the NFV server. As previously introduced, the NFV server is a key component of the functional demonstration: it enables the implementation of specific UAG (virtualized) network convergence functions along with local virtualized services.

The CPE and antennas are connected to the distributed NG-POP through an optical ring around the city of Lannion. Each WDM access technology uses a dedicated fibre or pair of fibres.

Finally, the demo setup is connected to both the Public Internet and a centralized EPC located in the city of Stockholm (Sweden). The demo supports live data traffic from any UE (i.e., PC, tablet, mobile phone) towards Internet.

1.2 Overall Integrated Demo Setup

The purpose of this section is to detail the whole developed demonstration setup. This encompasses most of the structural and functional convergent solutions being deployed within the COMBO WP6. Such an integrated setup is basically formed by three pillars:


The UAG’s NFV server, which hosts the virtual network functions realizing the COMBO functional convergence solutions (e.g., uDPM, uAUT, vEPC). This is described in section 1.3.

Access network connectivity, i.e., the access infrastructure used to transport/backhaul, in a structurally convergent way, the control and data traffic from heterogeneous network elements (e.g., mobile eNodeBs, Wi-Fi APs) towards the UAG. This is reported in section 1.2.3.

The L2/L3 addressing scheme. The number of network elements and deployed NFV functions is considerable, which made it necessary to devise a detailed L2 (VLAN) and L3 (IPv4) addressing plan to enable the adequate switching and forwarding of the flows, as required by the conducted experiments and validations. The addressing conceived for the whole demonstration is depicted and described in section 1.2.2.

1.2.1 Network Design

Figure 2 shows the high level design of the UAG’s NFV server and the access network:

Figure 2 Network design

The NFV server implements part of the UAG and provides local virtualized application services. The UAG is built with several VMs, each providing a well-defined function in the UAG:

The common gateway (referred to as Common GW) is the key network function in the implementation of the UAG since it aggregates traffic from different access networks and routes traffic towards local applications. The Common GW also provides a NAT function. To this end, it hosts a unique public address and is able to route the traffic towards Internet.

The individual gateways (i.e., fixed, mobile or Wi-Fi) terminate each access network technology. For example, the Operator 1 EPC is a VM which hosts the Operator 1 P-GW and connects to the Operator 1 antenna.

The uDPM-MPE and the uDPM-DE are VMs dealing respectively with the user plane of the uDPM (Multi-Path Entity) and the control plane of the uDPM (Decision Engine). For the sake of clarification and completeness, the uDPM architecture and functional elements devised in COMBO are documented and explained in the WP3 functional work deliverables.

The uAUT proxy is a VM which communicates with a centralized remote AAA server.

The UAG has three internal networks; each of them has a dedicated VLAN identifier (herein identified by a different colour):

The red network provides internal connectivity between the different access GWs and the Common GW.

The blue network connects the Common GW to the internal control VMs and to the Local Application Services.

The purple network is used to connect the uDPM-MPE VM.

Finally, the NFV server hosts Local Application Services, such as video servers, content servers or cache servers, also implemented as VMs.

1.2.2 Network Addressing Scheme

Figure 3 details the network addressing plan designed for the integrated demo:

The red, blue and purple networks are internal to the NFV server and are detailed with their subnet numbers,

The orange network is the public Internet network,

The green networks correspond to the six different access networks. For example, the Operator 1 P-GW connects to the Operator 1 eNB through VLAN 110 / subnet 172.17.110.0/24, and serves the customers with the pool 10.100.110.128/25.
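The internal consistency of this addressing plan lends itself to a mechanical check. The sketch below, using only the subnets quoted in this section and in Figure 3, verifies that each customer pool lies inside its routed access subnet and that the access subnets are mutually disjoint:

```python
import ipaddress

# Routed access subnets and their customer pools, as quoted in this
# section and in Figure 3 (Common GW explicit routes / EPC+DHCP pools).
ROUTES = {
    "10.100.110.0/24": "10.100.110.128/25",  # Operator 1 LTE
    "10.100.120.0/24": "10.100.120.128/25",  # Operator 1 Wi-Fi
    "10.100.130.0/24": "10.100.130.128/25",  # Operator 2 Wi-Fi AP1
    "10.100.140.0/24": "10.100.140.128/25",  # Operator 2 Wi-Fi AP2
    "10.100.150.0/24": "10.100.150.128/25",  # Operator 3 LTE
}

def pools_consistent(routes):
    """True if every pool sits inside its routed subnet and the routed
    subnets are mutually disjoint."""
    nets = [ipaddress.ip_network(n) for n in routes]
    pools_ok = all(
        ipaddress.ip_network(pool).subnet_of(ipaddress.ip_network(net))
        for net, pool in routes.items()
    )
    disjoint = all(
        not a.overlaps(b)
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
    )
    return pools_ok and disjoint

print(pools_consistent(ROUTES))  # → True
```

Such a check is useful whenever the plan is extended with new access networks, since an overlapping pool would silently break the Common GW routing.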


(Figure 3 diagram content. The main subnets shown are: Inter-GW 172.16.0.0/24; Services 10.10.10.0/24; MPE 172.18.0.0/24; Internet 194.199.209.56/29; access networks vlan110 172.17.110.0/24, vlan120 10.100.120.0/24, vlan130 10.100.130.0/24, vlan140 10.100.140.0/24, vlan150 172.17.150.0/24 and vlan160 10.100.160.0/24. The Common GW holds explicit IP routes towards the customer pools behind each access gateway (10.100.110.0/24 via 172.16.0.110, 10.100.120.0/24 via 172.16.0.120, 10.100.130.0/24 and 10.100.140.0/24 via 172.16.0.130, 10.100.150.0/24 via 172.16.0.150), plus source-based routes steering traffic from the UE addresses 10.100.110.250 and 10.100.120.250 via the uDPM-MPE at 172.18.0.2.)

Figure 3 Addressing plan

As described previously, the Common GW routes traffic towards access, Local Application Services and the Internet. The Common GW is also able to route traffic towards the uDPM-MPE VM thanks to static specific routes (see purple network).
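On a Linux-based Common GW, the forwarding behavior just described (explicit routes per access network, NAT towards the Internet, and source-address based steering towards the uDPM-MPE) could be sketched with iproute2 and iptables along the following lines. This is an illustrative sketch rather than the actual demo configuration; the public interface name and the policy-routing table number are assumptions, while the addresses are those of Figure 3.

```shell
# Sketch of the Common GW forwarding rules (addresses from Figure 3).
sysctl -w net.ipv4.ip_forward=1

# Explicit routes towards the customer pools behind each access gateway
ip route add 10.100.110.0/24 via 172.16.0.110   # Operator 1 EPC
ip route add 10.100.120.0/24 via 172.16.0.120   # Operator 1 Wi-Fi GW
ip route add 10.100.130.0/24 via 172.16.0.130   # Operator 2 Wi-Fi GW
ip route add 10.100.140.0/24 via 172.16.0.130   # Operator 2 Wi-Fi GW
ip route add 10.100.150.0/24 via 172.16.0.150   # Operator 3 EPC

# Source-based routes steering selected UEs via the uDPM-MPE
# (purple network); table id 100 is an arbitrary choice
ip route add default via 172.18.0.2 table 100
ip rule add from 10.100.110.250/32 table 100
ip rule add from 10.100.120.250/32 table 100

# NAT towards the Internet on the public interface (name assumed)
iptables -t nat -A POSTROUTING -o int2 -j MASQUERADE
```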

1.2.3 Access Network Connectivity

Figure 4 focuses on the access network and L2 connectivity. The picture preserves the colour code defined previously.

For example, the Operator 1 eNB is connected to the LAN facing interface of the WR-WDM-PON system on the Lambda 2 wavelength. The corresponding WAN facing interface of the WR-WDM-PON is connected to the Carrier Ethernet Switch on the port 1/1/5/1. This port is configured to tag the traffic with the VLAN ID 110 and transport the traffic to the NFV server on VLAN 110.

From an end to end point of view, Operator 1 eNB is logically connected to the Operator 1 P-GW through Lambda 2 and VLAN 110.
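On a Linux host, terminating such a tagged access network amounts to creating a VLAN subinterface. A sketch, in which the physical interface name eth0 is an assumption and the address is the Operator 1 S1 binding from the addressing plan:

```shell
# Terminate VLAN 110 (Operator 1 access network) on a Linux host
# (the physical interface name "eth0" is an assumption)
ip link add link eth0 name eth0.110 type vlan id 110
ip addr add 172.17.110.1/24 dev eth0.110
ip link set dev eth0.110 up
```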


(Figure 4 diagram content. In the legend, each link is annotated with its VLAN number, with U for an untagged port and T for a tagged port; 194.199.209.xx addresses are public. The management network is 192.168.151.128/25, with ADVA WDM OLT 192.168.151.130, ADVA PC GUI OLT 192.168.151.131, ADVA EGX 192.168.151.160, ADVA ULL 192.168.151.161, Telnet WDM OLT 192.168.151.140 and Telnet GPON OLT 192.168.151.141.)

Figure 4 Access connectivity and external connectivity plan

Figure 4 also shows the management interfaces of each network node and the corresponding IP addresses. The complete demo setup uses four public IP addresses: two for the data plane (the UAG data plane and the DWDM-centric system data plane) and two for the management plane (remote connection to the UAG).

The CPRI traffic is emulated with CPRI testers and carried to the Low Latency Switch through the two WDM-PON systems.

The “Pôle Image et Réseau” (PIR) at Lannion is the venue that hosted the final demonstration. This institution provided the network facilities for the connection to the public Internet: an Ethernet switch for traffic aggregation and a router/firewall with a dedicated public address pool for external connections.

1.3 NFV Server Setup

The following figures depict details on the status of the NFV server installation and configuration. Figure 5 shows the modular installation of the software and the virtualization layers used for the UAG NFV server setup. Figure 6 shows details of the underlying network implementation of the data and control paths inside the NFV server:


Figure 5 OpenStack NFV Server Virtualization Layers

(Figure content. The NFV server runs a VMware bare-metal hypervisor hosting an OpenStack control node and an OpenStack compute node. An OVS-based OpenStack VLAN provider service attaches the service and network VMs to the access, inter-GW, services and Internet networks for the control and data planes, while the management plane is reachable over the private management network 192.168.151.128/25 via OpenVPN (194.199.209.61) and the Horizon OpenStack dashboard (194.199.209.59).)

Figure 6 Internal configuration of the NFV Server


COMBO WP6 partners have their own user profiles and partial admin rights on the NFV server. This allows them to set up and configure the VNFs designed and implemented to enable the targeted functional convergence functionalities/network entities. In this regard, the NFV server has been used to implement the key components of the UAG (uAUT and uDPM) and other NG-POP components. More details on the VNFs developed by the COMBO partners and installed in the NFV server are contained in section 3.
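With the OpenStack setup described above, a partner holding the appropriate rights could bring up a VNF VM roughly as follows. This is a sketch only: the image file, flavor and VM names are illustrative assumptions, while the network names follow the internal networks of Figure 6.

```shell
# Upload a VNF disk image and boot it attached to two provider networks
# (image, flavor and server names are assumptions)
openstack image create --file vepc.qcow2 --disk-format qcow2 vepc-image
openstack server create --image vepc-image --flavor m1.large \
    --network interGW --network accessA vEPC-demo
```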

1.4 Remote Connectivity

The final demonstration (where all planned experimental activities are integrated and interconnected in Lannion) involves a number of network elements (e.g., Wi-Fi access points, LTE eNodeBs, PON ONTs and OLTs, Carrier Ethernet VLAN cross-connect, NFV server, etc.) from different partners/vendors, each with its own features and peculiarities. Getting the whole system fully operative is therefore a complex, time-consuming and laborious activity involving many configuration aspects. To accelerate this process, and to avoid postponing the interconnection until all partners were physically in Lannion with their equipment, an infrastructure enabling remote connectivity between the partners’ locations was conceived. In other words, a preliminary interconnection infrastructure was deployed for debugging potential problems and performing valuable interoperability tests between network elements, functions, etc. provided by different partners. To that end, OpenVPN tunnels were created and deployed over the public Internet, as described below.

1.4.1 Remote Connectivity Infrastructure

Figure 7 shows the logical representation of the interconnection provided by the so-called “remote connectivity infrastructure”. As aforementioned, the intention was to enable the connectivity of different equipment technologies hosted in different EU locations (i.e., Spain, Germany, Hungary, Sweden, France) that were used in part of the final targeted demonstration. Specifically, such equipment is related to the distributed NG-POP validations. By doing so, we obtained a valuable preliminary debugging phase and tests. In addition, the involved partners gained experience and familiarity with the NFV server deployed in the UAG, where the VNFs supporting the test cases (defined in section 9) are instantiated. As an example, an LTE eNodeB placed at CTTC (Spain) has been connected (via control and user planes) to the vEPC instantiated at the UAG’s NFV server physically placed at ADVA (Germany):


Figure 7 Remote connectivity infrastructure for the integration phase

The tool selected for creating this private infrastructure over the public Internet is the open source OpenVPN application. Basically, OpenVPN provides a secure VPN enabling point-to-point IP tunnels between a server and a number of clients regardless of their location.

1.4.2 OpenVPN Example

Figure 8 shows an example of an OpenVPN setup between the ADVA and CTTC premises. In this setup, CTTC hosts the OpenVPN server, responsible for granting connectivity to (in this case) the ADVA OpenVPN client via a pre-negotiated certificate. Such certificates ensure the trust between the OpenVPN client and server. Furthermore, the server (at CTTC) also assigns the individual private IP addresses (within the selected range) to be used by the OpenVPN clients:

Figure 8 ADVA / CTTC OpenVPN Setup Example
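A minimal OpenVPN configuration pair realizing a setup of this kind could look as follows. This is a sketch; the file names, the tunnel subnet and the server address are placeholder assumptions, not the actual demo values.

```conf
# server.conf -- OpenVPN server at CTTC (sketch)
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
server 10.8.0.0 255.255.255.0    # tunnel subnet handed out to clients

# client.conf -- OpenVPN client at ADVA (sketch)
client
dev tun
proto udp
remote vpn-server.example 1194   # placeholder for the CTTC public endpoint
ca ca.crt
cert adva.crt
key adva.key
```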


1.4.3 First Validations

Following the above example (i.e., the CTTC and ADVA setup), preliminary validations of the OpenVPN infrastructure were conducted. It is important to stress that these preliminary tests aimed only to verify that the deployed infrastructure enabled remote connectivity among the different partners’ locations. In this regard, validating the remote connectivity between network equipment (at different locations) and the UAG (at ADVA premises) allowed emulating a scenario similar to the one planned for the final demonstration. Recall that for the final demonstration all the network elements brought by the involved partners would be shipped to Lannion and physically connected at the demo venue. It was also a requirement that partners be able to access their equipment in Lannion from their remote locations during the integration as well as during the experimental and demo phases. The validations confirmed the correct operation of the OpenVPN infrastructure.

Bearing the above clarification in mind, we conducted basic networking/connectivity ICMP ping tests. These tests verify that the CTTC server (where the LTE eNodeB, i.e., the RAN, is launched) is able to reach the subnet at ADVA (192.168.151.128/25) where the CTTC vEPC is instantiated. In other words, the vEPC built by CTTC runs on ADVA’s UAG NFV server, and the CTTC eNodeBs can reach the mobile core elements through the OpenVPN tunnels. The connectivity between the CTTC eNodeBs and the vEPC at ADVA’s UAG exhibits a round-trip delay (measured with basic ICMP ping) of around 70 ms.

To summarize, within the ADVA subnet, the NFV server is located in a DMZ. Thus, partners’ access devices (e.g., eNodeBs, Wi-Fi APs, CacheAP) could use the created OpenVPN tunnels for preliminary tests, accelerating the expected debugging of the VNFs, verifying the correct operation between access devices and VNFs, etc. In brief, the remote connectivity infrastructure was deployed to avoid delaying the very first integration phase until all the partners were in Lannion for the planned final demonstration. By doing so, we were able to detect several problems that could be fixed prior to moving all the equipment and staff to Lannion.

1.4.4 Remote Connectivity Infrastructure for DWDM-centric Demo

The Radio Base Stations (RBSs) connected to the DWDM-centric demo set-up in Lannion were remotely connected to the mobile core network in Stockholm, Sweden over an IPsec tunnel, as shown in Figure 9. After installing and configuring the firewalls on both sides of the connection, the first connectivity test between the two firewalls was conducted by pinging the NTP and DNS servers in the EPC in Stockholm:


Figure 9 Remote connectivity of DWDM-centric demo

The communication between both sites was tested successfully, with an average delay of 50 ms between the two premises. It should be noted, however, that the 50 ms should be seen as a baseline, i.e., the best delay achievable for this connection, as no other traffic shared the link to the Internet at the time of measurement. In the actual demo, the link to the Internet was shared among the different partners, therefore increasing the delay to the core network.


2 Test cases overview and results summary

As aforementioned, the targeted test cases were determined and documented as a test plan in Milestone MS20. As the operator testing and demonstration done in WP6 involved components developed by multiple COMBO partners, the test case specification was an important tool to ensure interoperability and to verify the correct operation and performance of the individual test elements. During the integration phase, testing was first done on the individual components, followed by the execution of the fully integrated tests.

The full test plan and the obtained results, with a detailed description of the key observations and performance measurements, are contained in the Annex sections. This section describes the methodology of the test case definitions, the agreed test plan, and an overview of the test results.

2.1 Test Case Definitions

The following contains the demo description and test plans for the structural and functional convergence test groups. The structural test groups cover the DWDM-centric, wavelength-selective (WS) and wavelength-routed (WR) WDM-PON architectures. The functional tests are divided into test groups for the centralized NG-POP and distributed NG-POP architectures. The following table lists the defined test groups and subgroups:

Table 1 Test groups, subgroups, and codes

Test group name | Test subgroup name | Test subgroup code
WS-WDM-PON Architecture | Data Plane | WS-WDM_DP_
WS-WDM-PON Architecture | Control and management | WS-WDM_CM_
WR-WDM-PON Architecture | Data Plane | WR-WDM_DP_
WR-WDM-PON Architecture | Control and management | WR-WDM_CM_
DWDM-Centric Architecture | Data Plane | DWDM-Centric_DP_
DWDM-Centric Architecture | Control and management | DWDM-Centric_CM_
Distributed NG-POP | UAG | UAG_TC_
Distributed NG-POP | vEPC | vEPC_TC_
Distributed NG-POP | uAUT | uAUT_TC_
Distributed NG-POP | uDPM | uDPM_TC_
Distributed NG-POP | Caching | Caching_TC_
Centralized NG-POP | Data Plane | cNGPOP_DP_
Centralized NG-POP | Control Plane | cNGPOP_CP_

2.2 Test Plan Template

To harmonize and align the reporting of the test cases that constituted the targeted test plan, a fixed template was defined for all test cases. The template contains the following items:

- Description of the item under test

- Test report (pass / fail)

- Purpose of the test

- Use case covered by the test, according to the use cases specified in D2.1

- Preconditions

- Test equipment

- Test procedure

- Pass-fail criteria

- Observation

The observation contains a detailed description of the collected test results, typically with measured performance data, screenshots, or Wireshark traces.
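The template items above map naturally onto a small record structure. The following is a minimal sketch of such a record (field names are chosen here for illustration and are not taken from the MS20 tooling):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One entry of the fixed test-case template (field names are illustrative)."""
    code: str                  # test subgroup code + number, e.g. "WS-WDM_DP_1"
    item_under_test: str       # description of the item under test
    purpose: str               # purpose of the test
    use_cases: list            # use cases from D2.1 covered by the test
    preconditions: list
    equipment: list            # test equipment
    procedure: list            # ordered test steps
    pass_fail_criteria: str
    passed: bool = False       # test report (pass / fail)
    observation: str = ""      # measured data, screenshots, Wireshark traces

    def report_line(self) -> str:
        return f"{self.code}: {'Pass' if self.passed else 'Fail'}"

tc = TestCase(
    code="WS-WDM_DP_1",
    item_under_test="WS-WDM-PON physical and link layers",
    purpose="Connectivity evaluation",
    use_cases=[], preconditions=[], equipment=[], procedure=[],
    pass_fail_criteria="Error-free connectivity established",
    passed=True,
)
print(tc.report_line())  # → WS-WDM_DP_1: Pass
```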

2.3 Test result overview

In Table 2 the pass / fail test results are summarized. It must be stressed that most tests additionally contain extensive quantitative measurements of key performance parameters. The full results are contained in the observation section of each test case, and a summary of the results is given in section 5.

Table 2 Test results

Test Case | Test description | Result
WS-WDM_DP_1 | WS-WDM-PON physical and link layers connectivity evaluation | Pass
WS-WDM_DP_2 | CPRI traffic transmission performance over 10G WS-WDM-PON ONU | Pass
WS-WDM_DP_3 | Evaluate the WS-WDM-PON system with GPON overlay on the same ODN | Pass
WS-WDM_CM_1 | Validation of the Out-of-Band (OoB) management control plane of the WS-WDM-PON system | Pass
WS-WDM_CM_2 | Validation of the In-Band management (IBM) control plane of the WS-WDM-PON system | Pass
WS-WDM_CM_3 | IBM quality link monitoring | Pass
WR-WDM-PON_DP_1 | OLT optical interface feeder characterization | Pass
WR-WDM-PON_DP_2 | Gigabit Ethernet transmission over 1 Gigabit WDM-PON ONUs | Pass
WR-WDM-PON_DP_3 | WDM-PON optical budget measurement | Pass
WR-WDM-PON_DP_4 | WDM-PON maximum reach evaluation | N/A
WR-WDM-PON_DP_5 | CPRI traffic transmission over 10G WDM-PON ONU | Pass
WR-WDM-PON_DP_6 | CPRI transport, EVM and jitter measurement | Pass
WR-WDM-PON_DP_7 | Transport of LTE signal radio and frequency deviation | Pass
WR-WDM-PON_CM_1 | 1 Gb/s WDM-PON ONUs connection to the OLT via the Remote Node | Pass
WR-WDM-PON_CM_2 | 10G WDM-PON ONU tunability capabilities | Pass
DWDM-Centric_DP_1 | Evaluation of filters used to block interference signals at the WSS ports in the DWDM-centric solution | Pass
DWDM-Centric_DP_2 | Evaluation and selection of the transport of synchronization signal delivery to the RBSs | Pass
DWDM-Centric_DP_3 | Evaluation of the connectivity and configuration of the VPN, the RAN and the transport network to EPC VPN connection | Pass
DWDM-Centric_CM_1 | Evaluation of the RAN and transport controllers implemented in the demo set-up | Pass
UAG_TC_1 | Validation of remote connectivity to the UAG via OpenVPN | Pass
UAG_TC_2 | Validation of L2 access, interconnection and service VLAN connectivity | Pass
vEPC_TC_1 | Validation of virtual EPC functionality | Pass
vEPC_TC_2 | Validation of virtual EPC functionality of a second EPC operator | Pass
uAUT_TC_1 | Validation of the integration between the authentication in the mobile network and the Wi-Fi network | Pass
uAUT_TC_2 | Validation of the EAP authentication in the Wi-Fi network with the credentials of the SIM card | Pass
uAUT_TC_3 | Evaluation of the correct operation of the Hotspot 2.0 technology | Pass
uDPM_TC_1 | Test Wi-Fi to LTE handover | Pass
uDPM_TC_2 | Test Wi-Fi to Wi-Fi handover while LTE connectivity is not available | Pass
uDPM_TC_3 | Test Wi-Fi to Wi-Fi handover while LTE connectivity is available as well | Pass
uDPM_TC_4 | Test MPE with dual connectivity of UE to UAG via both Wi-Fi and LTE | Pass
Caching_TC_1 | Test the converged caching functionality enabled by CacheAP in access network and NG-POP in aggregation network | Pass
Caching_TC_2 | Test the prefetch functionality that is controlled by CC and initiated by DE | Pass
cNGPOP_DP_1 | Validate transport of EPS Bearer over MPLS/WSON | Pass
cNGPOP_DP_2 | Validation of the MPLS-TP and WSON configuration | Pass
cNGPOP_CP_1 | ABNO – EPC coordination | Pass
cNGPOP_CP_2 | Measurement of the configuration time for provisioning mobile EPS services | Pass
cNGPOP_CP_3 | Use of CTTC ABNO GUI | Pass

Summarizing, out of the 37 planned tests, 36 were executed in the integrated demonstration and successfully passed (the remaining test, WR-WDM-PON_DP_4, was not executed). The high success rate and the quality of the measured KPIs exceeded expectations, given that the integration of components developed in a distributed way by multiple COMBO partners was highly complex.
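The bookkeeping behind this summary is straightforward; a small sketch (with results hard-coded for illustration) shows how such a pass/fail overview can be aggregated from per-test outcomes:

```python
def summarize(results: dict) -> str:
    """results maps test-case code -> 'pass', 'fail' or 'n/a' (not executed)."""
    executed = {code: r for code, r in results.items() if r != "n/a"}
    passed = sum(1 for r in executed.values() if r == "pass")
    return (f"{len(results)} planned, {len(executed)} executed, "
            f"{passed} passed")

# 37 planned tests, one (WR-WDM-PON_DP_4) not executed, the rest passed:
results = {f"TC_{i}": "pass" for i in range(36)}
results["WR-WDM-PON_DP_4"] = "n/a"
print(summarize(results))  # → 37 planned, 36 executed, 36 passed
```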

3 Demo Booths and Demonstration Show Cases

Figure 10 gives an overview of the network elements, encompassing a number of integrated access elements and technologies (i.e., Wi-Fi, LTE, WDM-PON, etc.) along with the pool of deployed network functions and applications used to validate the functional and structural convergence objectives defined within COMBO. To do so, different “demo booths” were defined, aiming at publicly showcasing the attained FMC solutions during the final COMBO demonstration day in Lannion. The following subsections describe each of the demo booths, outlining the main objectives, the deployed setup, and the sequence of steps used to demonstrate / validate the targeted functionality:

Figure 10 Global logical view of demo setups: network entities, technologies and key functions

3.1 Distributed NG-POP (dNG-POP)

The distributed NG-POP (dNG-POP) features the optical access nodes, the Base Band Units of the mobile access nodes, the Universal Access Gateway (unified subscriber IP edge) and local application services in the Main CO.

The dNG-POP booth hosted three demonstrations addressing different features of structural convergence, using the WS-WDM-PON, WR-WDM-PON and DWDM-Centric demo platforms. These platforms were selected to reflect different aspects of structural convergence, and the objective of the demonstrations was to validate the three platforms and demonstrate the different realizations of structural convergence.

In the following, we present a short description of the demo deployments and briefly describe the process with which some key structural convergence features of the proposed solution are validated.

Figure 11 shows the dNG-POP demonstrator with integrated WS- & WR-WDM PON:

Figure 11 dNG-POP with integrated WS-&WR-WDM-PON

3.1.1 Validation of the WS-WDM-PON based convergence solution

A high level architectural view of the WS-WDM-PON demo platform is shown in Figure 12. The set-up represents the wavelength-selective (WS) WDM-PON solution developed in COMBO for structural convergence in the access segment. The solution is based on ONUs with tunable transmitters and receivers; it includes legacy GPON system integration and CPRI transport capabilities for fronthaul purposes in the same optical distribution network (ODN) based on power splitters, aiming to be compatible with legacy FTTH deployments. One WS-WDM-PON OLT, one GPON OLT, two 10 Gb/s tunable WS-WDM-PON ONUs, two GPON ONUs, two CPRI interfaces and a CPRI tester were deployed and demonstrated in Lannion. The two WS-WDM-PON ONUs were used to provide connectivity with the UAG to the universal authentication (uAUT) and data path management (uDPM) COMBO functional demos in Lannion. The two GPON ONUs were deployed to provide connectivity with the UAG to the caching and the virtual EPC (vEPC) functional demos in Lannion. The two CPRI interfaces were used to connect to a CPRI tester running on a PC to demonstrate CPRI fronthaul traffic transport over this platform:

Figure 12 Setup for the WS-WDM-PON demo platform

A picture of the GPON and WS-WDM-PON OLTs is shown in Figure 13. The WS-WDM-PON OLT is composed of:

- WS-WDM-PON OLT switch, based on DWDM 10G SFP/SFP+ C-band transceivers supporting bit rates of up to 10 Gb/s.

- WDM Mux/Demux equipment: includes all the passive (AWGs, circulator) and active (pre- and booster amplifiers) components, together with the rest of the components that compose the WS-WDM-PON OLT (see diagram in Figure 12).

Figure 13 WS-WDM PON and GPON OLT

A picture of the GPON and WS-WDM-PON ONUs is shown in Figure 14. The WS-WDM-PON ONU uses tunable SFP+ (T-SFP+) components, which are also capable of bit rates of up to 10 Gb/s. The solution is thus also compatible with the CPRI line rates required for fronthaul links, including CPRI line bit rate option 7 (9830.4 Mbit/s).
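For reference, the standard CPRI line-rate options are multiples of the basic 614.4 Mbit/s line rate, which makes the 9830.4 Mbit/s figure quoted above (option 7 = 16 × 614.4) easy to verify:

```python
# CPRI line bit rate options (Mbit/s): each option is a fixed multiple
# of the basic 614.4 Mbit/s line rate (options 1-7 of the CPRI spec).
CPRI_MULTIPLIERS = {1: 1, 2: 2, 3: 4, 4: 5, 5: 8, 6: 10, 7: 16}

def cpri_rate_mbps(option: int) -> float:
    return round(614.4 * CPRI_MULTIPLIERS[option], 1)

print(cpri_rate_mbps(7))  # → 9830.4
print(cpri_rate_mbps(3))  # → 2457.6 (the "2.5G CPRI" links used in the demo)
```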

Figure 14 GPON (left) and WS-WDM-PON ONUs

The T-SFP+ components of the ONU integrate C-band tunable lasers and fixed APD receivers. The wavelength control and stability of each laser is achieved by means of a wavelength locker (WLL) integrated in each T-SFP+.

The downstream tunability of each WS-WDM-PON ONU is achieved with a low-cost NG-PON2 tunable filter (TF) integrated in the ONU. Such filters are typically 4-8 channel tunable filters, but can be used for WS-WDM-PON too. The tunability of both laser and filter is managed and controlled from the ONU. The target fibre reach of the solution is the typical reach of GPON systems, i.e. 20 km. To achieve that reach with a relatively good power budget and splitting ratios of at least 1:32, variable optical amplifiers with a gain of G = 15-20 dB are used at the OLT side.

The WS-WDM-PON system was integrated with a CPRI tester to demonstrate the transport of 2.5G CPRI links over the proposed system, together with 10G fixed access links, in a fixed-mobile convergence scenario. To achieve this, the 2.5G CPRI interface of the CPRI tester was connected to one of the DWDM SFP transceivers of the WDM-PON system. One of the ONUs was then connected to a second CPRI interface of the CPRI tester. This particular ONU provides a compatible 2.5G CPRI interface to fully interoperate with the CPRI tester.

Figure 15 depicts a picture of the WS-WDM-PON and GPON equipment deployed in Lannion. The picture shows the GPON and WS-WDM-PON OLTs and central office equipment deployed in the distributed NG-PoP booth, as well as the GPON and WS-WDM-PON ONUs and user equipment deployed in some of the other booths in which the functional demos were running, e.g. the uAUT and the vEPC functional demos.

Figure 15: WS-WDM-PON and GPON equipment deployed in Lannion: distributed NG-PoP demo booth (top), uAUT booth (centre) and vEPC booth (bottom)

As explained before, the two WS-WDM-PON ONUs were used to provide connectivity with the UAG to the universal authentication (uAUT) and data path management (uDPM) COMBO functional demos in Lannion, and the two GPON ONUs were deployed to provide connectivity with the UAG to the caching and the virtual EPC (vEPC) functional demos in Lannion. The equipment shown for the vEPC booth comprises:

- Laptop1: UE (Video Client) and Emulated RAN (LTE eNodeB)

- Laptop2: Video Server (emulates Internet Access)

- GPON ONU: enabling access to the UAG (i.e., vEPC)

One of the main challenges for the data plane was to demonstrate a relatively good power budget and good system performance with the proposed WS-WDM-PON system, which is of extreme importance for its reliability with the power splitting ratios used in legacy GPON technology (e.g. 1:32 and higher). According to the optical specifications and constraints of the transceivers used in the proposed WS-WDM-PON scheme, the system requires a high power budget of about 33-35 dB for splitting ratios of 1:32, so optical amplification was required at the OLT. Accordingly, different variable optical amplifiers were used at the OLT (a high-power booster amplifier for downstream and a low-noise preamplifier for upstream), and their gain and related parameters were optimized in order to reduce cost and power consumption while also taking into account other limitations such as laser safety.
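The role of the OLT amplifier can be sanity-checked with basic link-budget arithmetic. All numbers below are typical, assumed figures for illustration only (not the measured transceiver specifications of the demo):

```python
import math

# Assumed illustrative figures (not the demo's measured transceiver specs):
TX_POWER_DBM = 0.0           # T-SFP+ launch power
RX_SENSITIVITY_DBM = -18.0   # APD receiver sensitivity at 10 Gb/s
AMP_GAIN_DB = 17.0           # variable optical amplifier at the OLT (15-20 dB range)

def odn_loss_db(split_ratio: int, reach_km: float,
                fibre_db_per_km: float = 0.35, margin_db: float = 3.0) -> float:
    """Ideal 1:N splitting loss + fibre attenuation + connector/ageing margin."""
    return 10 * math.log10(split_ratio) + fibre_db_per_km * reach_km + margin_db

loss = odn_loss_db(32, 20)                                    # ≈ 25.1 dB
available = TX_POWER_DBM - RX_SENSITIVITY_DBM + AMP_GAIN_DB   # 35.0 dB
print(f"ODN loss {loss:.1f} dB, available budget {available:.1f} dB, "
      f"margin {available - loss:.1f} dB")
```

With these assumed figures, the 1:32 ODN cannot be closed by the bare transceivers alone, which is consistent with the need for 15-20 dB of optical amplification at the OLT.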

The main innovation of the proposed solution is the first demonstration of a WS-WDM-PON system capable of full ONU tunability at both the transmitter and the receiver side, and able to operate over a power-splitter ODN, thus also being compatible with FTTH-GPON deployments.

Once the data plane was validated, and the WS-WDM-PON network was successfully integrated and running together with the GPON overlay and the CPRI transport capabilities, the control plane validation of the WS-WDM-PON solution followed the procedure below:

First, Out-of-Band Management (OOBM) control was configured in both the OLT and the ONUs to start automatically once an ONU is plugged in for the first time; the state machines of the OLT and the ONUs, on which the OOBM software was implemented, were set up accordingly. In this way the OLT automatically informs the ONUs which channels are available before they start lasing and receiving on a random channel. OOBM was implemented using a dedicated control channel (two wavelengths): one control wavelength for downstream (LambdaC-DS) and one for upstream (LambdaC-US), which are equal for all ONUs. OOBM thus makes it possible for all ONUs to learn the available transmitting and receiving wavelengths once connected to the network.

Second, In-Band Management (IBM) control was configured to start once the OOBM procedure has completed and the link is established for a particular ONU; again, the corresponding state machines of the OLT and the ONUs were set up accordingly. IBM was implemented via control/management frames interleaved with transport traffic on the working wavelengths. IBM controls the dynamic wavelength change once the link is established after the OOBM procedure, either when an ONU wants to change to another wavelength upon a particular event, or when the OLT needs to notify an ONU that it must tune to another wavelength upon some other event.
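The OOBM channel-assignment exchange described above can be sketched as a toy model (class and message names are illustrative; the real state machines are more involved):

```python
class Olt:
    """Advertises the free data channels on the downstream control wavelength."""
    def __init__(self, channels):
        self.free = set(channels)

    def advertise(self):
        return sorted(self.free)          # broadcast on LambdaC-DS

    def grant(self, channel):
        self.free.discard(channel)        # ONU's choice received on LambdaC-US

class Onu:
    """Tunes laser and filter only after learning the available channels."""
    def __init__(self):
        self.channel = None               # not lasing yet

    def activate(self, olt: Olt):
        available = olt.advertise()       # learn channels before lasing
        self.channel = available[0]       # pick the first free channel
        olt.grant(self.channel)

olt = Olt(channels=[1, 2, 3, 4])
onu_a, onu_b = Onu(), Onu()
onu_a.activate(olt)
onu_b.activate(olt)
print(onu_a.channel, onu_b.channel)  # → 1 2
```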

While in IBM, a link drop was forced on each of the WS-WDM-PON ONUs to force the ONU to automatically tune to a different available channel thanks to the developed control plane. This procedure, which takes very little time (on the order of a few seconds), was shown by analysing the counters on the particular ONU and also by executing scripts associated with the corresponding IBM functions, in order to show the actual channel on which the ONU was operating before and after forcing the link drop.
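The retune-on-link-drop behaviour reduces to picking a different available channel; a minimal sketch of that decision (names illustrative):

```python
def retune_on_link_drop(current: int, available: list) -> int:
    """Pick a new channel different from the one whose link just dropped."""
    candidates = [ch for ch in available if ch != current]
    if not candidates:
        raise RuntimeError("no alternative channel available")
    return candidates[0]

# ONU operating on channel 2; channels 2 and 4 are currently available.
new_channel = retune_on_link_drop(current=2, available=[2, 4])
print(new_channel)  # → 4
```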

After validation of the WS-WDM-PON control plane, the GPON overlay capabilities over the WS-WDM-PON network were validated. The different GPON ONUs, as well as the GPON OLT, were attached to the network. The GPON OLT was first configured using the Telnet GPON Management System (TGMS) tool, by connecting a PC to the management Ethernet port of the OLT. Using TGMS, a valid profile for each GPON ONU was created, and the corresponding VLAN tagging configuration, IP address, service provisioning and other parameters were configured. After the configuration was done, both GPON ONUs were switched on. As each ONU had been configured beforehand, both could negotiate their activation with the OLT automatically, and both were provisioned successfully. Once the ONUs were recognized and authorized, the credentials of each ONU were displayed using TGMS connected to the OLT, demonstrating that both ONUs were running and able to transmit and receive traffic to and from the OLT, and that OMCI service provisioning was performed correctly.

In the last phase of the validation, the CPRI transport capability of the WS-WDM-PON platform was demonstrated. A CPRI tester from Orange was used to launch and receive traffic from the OLT side to the ONU side. Two different 2.5G CPRI interfaces were plugged into the CPRI tester and the WDM-PON switch, and connected through optical fibre to the ends of the WS-WDM-PON network. The CPRI tester automatically recognized both CPRI interfaces and showed the latency and the bit rate of the two CPRI interfaces in its graphical user interface.

3.1.2 Validation of the WR-WDM-PON-based Convergence Solution

As structural convergence in the access and aggregation segment requires not only more capacity, but also extended reach and potential transparency, the wavelength division multiplexing passive optical network (WDM-PON) is proposed to handle the fixed access, Wi-Fi backhaul, and mobile backhaul/fronthaul. WDM-PON technology is able to provide the high capacity demanded by next generation fixed and mobile services, while also guaranteeing a smooth evolution of the legacy access networks. WR-WDM-PON adopts a low-cost implementation of the ONU laser, while the wavelength locker functionality is centralized at the OLT, as described in D6.2 [1]. A cyclic WDM multiplexer/de-multiplexer at the remote node then routes a single wavelength to the corresponding ONU. Such a WDM-PON solution is especially suited to the aforementioned requirements of a converged infrastructure with regard to the bandwidth × reach product (e.g. bandwidth of up to 10 Gb/s per wavelength and reach of > 50 km), which is not supported by today’s existing WDM-PON approaches.

Figure 16 System diagram of WR-WDM-PON

Figure 16 depicts the demonstration setup of the WR-WDM-PON system. The OLT with centralized wavelength locker, two 1 Gb/s tunable ONUs and one 10 Gb/s tunable ONU were deployed and demonstrated in Lannion, where the two 1G ONUs terminated an eNB and a Wi-Fi AP, respectively, and the 10G ONU was used for the 10 Gb/s CPRI link transmission. In the downlink, three L-band wavelengths were used to deliver the data streams from the Carrier Ethernet Switch and the Ultra-low latency Cross Connect switch, as shown in Figure 16. Such a switch is necessary for dynamically routing data to different ONUs according to changes in the traffic patterns. As optical fronthauling imposes a stringent latency requirement, only all-optical or very low-latency electrical switches are applicable in this case. Measurements of the latency of the switch used in the demonstration can be found in [2].

The wavelengths were then combined at the OLT by an L-band DEMUX. After transmission over an 18 km span of the field-deployed Lannion Fibre Ring, the downstream wavelengths were filtered for the corresponding ONUs by a cyclic AWG, and the received data of each ONU were further processed by the corresponding user equipment for the functional demos. Similarly, in the uplink, different C-band wavelengths were emitted at the ONUs to transport the upstream signals, and combined by the same cyclic AWG. To compensate for the extra link loss of the Lannion Fibre Ring, an EDFA module with 10 dB gain was used at the OLT, for the uplink only. The centralized wavelength control at the OLT was facilitated by imprinting distinct CW pilot tones (i.e. channel labels) onto the upstream wavelengths, enabling the calculation of individual feedback signals for each ONU for signal power and relative wavelength deviation. The feedback signals were then transferred back to the ONUs by an auxiliary management and control channel, which was also implemented as a pilot tone channel in the downstream carrying low-bitrate laser control frames.
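The wavelength routing of a cyclic AWG can be modelled as a simple modulo mapping of channel index onto output port (a simplified illustration; real devices have free-spectral-range-dependent passbands):

```python
def cyclic_awg_output_port(channel: int, num_ports: int) -> int:
    """Cyclic AWG: the channel grid repeats over the ports with the device's
    free spectral range, so channel k of an N-port device exits port k mod N."""
    return channel % num_ports

# A 3-port remote node: channels 0..5 map cyclically onto ports 0..2.
print([cyclic_awg_output_port(ch, 3) for ch in range(6)])  # → [0, 1, 2, 0, 1, 2]
```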

The ONU and OLT prototypes implementing the centralized wavelength control worked reliably and stably throughout the whole demonstration event. For a large-scale deployment, however, the tuning time, which was several seconds, should be improved.
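Conceptually, the centralized wavelength control is a feedback loop: per pilot tone, the OLT estimates the wavelength deviation of an ONU laser and returns a correction over the management channel. A minimal proportional-control sketch (gain, units and values are illustrative assumptions):

```python
def control_step(current_nm: float, target_nm: float, gain: float = 0.5) -> float:
    """One iteration: the OLT measures the deviation from the grid wavelength
    (via the ONU's pilot tone) and the ONU applies a proportional correction."""
    deviation = current_nm - target_nm       # computed centrally at the OLT
    return current_nm - gain * deviation     # correction applied at the ONU laser

wavelength = 1550.40                         # ONU starts off-grid
target = 1550.12                             # assigned grid wavelength
for _ in range(10):                          # feedback frames over the control channel
    wavelength = control_step(wavelength, target)
print(abs(wavelength - target) < 0.001)      # → True (converged onto the grid)
```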

3.1.3 Validation of Flexibility and Programmability of the DWDM-centric-based Convergence Solution

A high level architectural view of the DWDM-centric demo platform is shown in Figure 17. The set-up represents a mobile network serving business and residential areas. It consists of two Remote Radio Units (RRUs), which support two macro cells (Mc1 and Mc2), one for each area, and another pair of low power RRUs, which can be used to support small cells to offload the macro cells during peak capacity demand in each area (Sc1 and Sc2):

Figure 17 Setup for the DWDM centric demo

The data plane interconnecting the four RRUs to the three BBUs consists of DWDM transmission links interconnected by Wavelength Selective Switches (WSSs), providing a dynamic wavelength routed network that supports wavelength level transport services. DWDM is a mature transport technology, which has been extensively deployed by network operators and transport providers due to its huge bandwidth capacity and its protocol and bit rate independence, and it is now the prime candidate to transport CPRI traffic in FMC networks. In the demo set-up, the use of DWDM links and WSSs, in combination with transponders (TPs) at the edge of the network, provides the required dynamicity at the data plane. At the control plane, an OpenDaylight (ODL) based domain controller is implemented to control the DWDM resources in the network, while the implemented radio controller is mainly used to monitor the RAN traffic in the macro cells. To realize the required flexibility and control of the WSSs and TPs in the demo, the ODL controller was extended with southbound plugins to control optical elements, with circuit-switched type services to provision wavelength level transport services, and with an optical Path Computation Element (PCE). The orchestrator on top of the controllers creates a global view of the heterogeneous RAN and transport resources and exposes it, through its northbound API, to applications running on top. The application implemented for the demo is the “Elastic Broadband Mobile Service” (EMBS). This application is used to validate the ability of the solution to monitor the traffic demand in the different parts of the network, dynamically allocate RAN and transport resources when a demand arises, and release the allocated resources when they are no longer used. The RBSs and the optical platform are shown in Figure 18, while the controller and the client laptops (UEs) that were used to validate the flexibility of the DWDM-centric solution are shown in Figure 19.

Figure 18 RBSs and optical platform used in the DWDM-centric solution

Figure 19 The SDN controller and client laptops used to validate the DWDM-centric solution

The assumption in the mentioned demo set-up is that the traffic demand in the business and residential areas does not reach its peak at the same time; therefore, a programmable network can exploit this variation of traffic demand in time and location to dynamically allocate transport and RAN resources when and where they are needed.

The script of the demo is as follows:

- The application starts by requesting a “default connectivity” to activate one macro cell in the business area and another macro cell in the residential area.

- The orchestrator identifies the end points for the requested connections in its big-switch representation and sends configuration requests to the domain controllers to connect the identified end points.

- The transport controllers configure the WSSs and the TPs to connect the requested RRUs and BBUs.

- The two macro cells are activated. Once the macro cells are up and running, the application requests monitoring of the traffic in both macro cells.

To show the dynamicity of the network, we run video streaming in the UEs connected to the macro cells:

- The application discovers that the traffic over one of the macro cells, e.g., the cell in the business area, has exceeded the specified threshold level.

- The application requests the activation of a small cell in the business area.

- The orchestrator identifies the end points of the requested connection and sends configuration requests to realize the connection between the identified end points.

- The transport controllers configure the WSSs and the TPs to connect the requested small cell.

- The small cell in the business area is activated.

To show the programmability of the network, we create a high traffic demand in the residential area, e.g., by running two video streams in the UE connected to the macro cell in the residential area, and decrease the traffic demand in the business area, e.g., by stopping the video stream and starting an audio stream. Once the traffic level in the business area is lower than the specified threshold and the traffic level in the macro cell in the residential area is higher than the threshold, the following actions are performed:

- The application requests the de-activation of the small cell in the business area.

- The message is communicated to the WSSs and TPs through the orchestrator, and the configured wavelength and the BBU used to support the small cell are released.

- The application requests the activation of a small cell in the residential area.

- The orchestrator identifies the end points of the requested connection and sends configuration requests to the domain controllers to connect the identified end points.

- The transport controllers configure the WSSs and the TPs to connect the requested small cell.

- The small cell in the residential area is activated to offload the traffic in the macro cell.
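The activation/deactivation logic exercised by the script above is a threshold controller at heart. A sketch of the EMBS decision step (action names and threshold values are illustrative, not taken from the demo code):

```python
def embs_decide(load: float, small_cell_active: bool,
                high: float = 0.8, low: float = 0.4) -> str:
    """Return the action for one area based on the monitored macro-cell load
    (load normalized to 0..1; thresholds are illustrative)."""
    if load > high and not small_cell_active:
        return "activate_small_cell"     # orchestrator sets up wavelength + BBU
    if load < low and small_cell_active:
        return "deactivate_small_cell"   # wavelength and BBU are released
    return "no_action"

print(embs_decide(0.9, small_cell_active=False))  # → activate_small_cell
print(embs_decide(0.2, small_cell_active=True))   # → deactivate_small_cell
```

The gap between the two thresholds provides hysteresis, so a cell is not repeatedly activated and released when the load hovers around a single threshold.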

3.2 Centralized/Distributed NG-POP (c/dNG-POP)

In this demo booth, two demonstrations addressing selected functions and features of the two flavours of the COMBO NG-POP concept (devised within WP3 T3.2 activities and reported in [4]) are validated, namely the distributed and the centralized NG-POP architectures. The two macroscopic objectives addressed in this booth are:

For the distributed NG-POP: the implementation of a virtualized EPC (vEPC) instance with a full virtualization approach is validated. That is, both control and user plane entities and functions of the EPC are virtualized and instantiated in an independent virtual machine hosted within the NFV server of the UAG. The main objective is to show the capability of the UAG to accommodate multiple instances of EPC functionalities (from different mobile network operators). In other words, the UAG can offer the appealing benefit of network sharing, i.e. the physical resources (NFV server) are partitioned and allocated to host a number of specific virtualized network functions (vEPC) which are isolated from the rest of the functions.

For the centralized NG-POP: in this demonstration, the NG-POP (UAG functional equipment) is located at the core CO [1], i.e. the CO located between the metro/aggregation network segment and the core network. A vEPC instance runs on the corresponding UAG at the core CO location. Other convergent functions such as vBNG, uAUT, etc. could also be deployed on the UAG; in this showcase we focus exclusively on the vEPC. The aim is to execute an SDN orchestrator which controls and configures a multi-domain, multi-technology (packet and optical switching) aggregation network used to transport mobile connections (EPS Bearers) between the RAN and the vEPC (at the UAG). Connections within the aggregation network for backhauling the mobile services are automatically and dynamically configured by the SDN orchestrator. This is triggered by a Mobile Application Client running on top of the SDN orchestrator. In other words, when a new EPS Bearer needs to be established, the vEPC communicates with the SDN controller via the Mobile Application Client to request the (bidirectional) transport connectivity between the RAN and the vEPC S-GW/P-GW network elements.

In the following, the setups deployed for conducting the above two demonstrations are detailed, along with the sequence of steps (demo script) used to perform the validation.

3.2.1 Validation of a vEPC Instance within the Distributed NG-POP

It is worth stressing that this vEPC validation within the distributed NG-POP complements the tests conducted in the other booths described in sections 3.1 and 3.4. In other words, those test cases and the vEPC (detailed herein) provide part of the selected functionalities and features supported by the COMBO distributed NG-POP implementation.

As mentioned above, the main aim of the vEPC demonstration is to show the capability of the distributed NG-POP to support multi-tenancy, i.e. network sharing. That is, the NG-POP architecture, leveraging the equipped NFV server, allows instantiating virtual network functions (VNFs). Consequently, on top of a single physical host it is possible to instantiate a number of VNFs. This enables the concept of network sharing, where similar network functionality is deployed in isolated VNFs for different network operators. In this particular case, the VNF is a full implementation of a vEPC deployed in a single virtual machine (VM). The same could be done for other vEPCs owned by different mobile network operators.

The simplified setup used for demonstrating the vEPC deployment in the distributed NG-POP is depicted in Figure 20:


Figure 20 Setup for the vEPC deployment in the distributed NG-POP

The involved elements (from left to right) are:

A PC labelled Laptop1, which integrates a VLC media player acting as a video client. Laptop1 integrates the UE (where the VLC client runs) and the RAN (i.e. the emulated eNB). The emulated RAN is provided by the NS3-based LTE/EPC LENA emulator developed at CTTC [5].

The output interfaces of Laptop1 used by the eNB are connected to a real PON infrastructure. Specifically, the emulated eNB (Laptop1) is connected to a TELNET GPON ONU, which is in turn connected to the GPON OLT through 18 km of optical fibre.

The PON infrastructure connects the emulated RAN to the UAG element (at the distributed NG-POP location) described in section 3.1. In the NFV server of the UAG, a VM is created to host the deployed vEPC. The vEPC is provided by the LENA emulator, where the main functions/elements are the MME and the S-GW/P-GW. Both control and user plane protocols and functions are kept within the vEPC.

Laptop2 runs the VLC video server, which mimics the remote Internet site. The video server continuously streams a video, which the video client receives once the connectivity is created.

The script of the demo is:

1. The VLC video server plays the video, but the video client does not receive the stream since the connectivity has not been created.

2. An EPS Bearer between the UE and the video server (remote host) is requested.

a. Over the S1-MME interface, the eNB and the MME at the vEPC negotiate the establishment of the mobile service.



b. Using the S11 interface, the MME configures the S-GW for terminating the mobile service.

c. The S1-U and SGi data interfaces, between the UE and the S-GW/P-GW and between the S-GW/P-GW and the remote host (video server), are configured.

3. Once the connectivity between the UE and remote host has been created, the video client displays the same image being streamed by the video server.

4. Specific results of this demonstration are reported in section 9.2. Figure 21 shows the deployment of the vEPC within the context of the integrated COMBO distributed NG-POP validation. As mentioned before, the UAG NFV server is able to instantiate virtualized network functions, which enables multiple functionalities (e.g., EPC) from different mobile network operators to be deployed on top of the same physical host. Hence, we can observe in that figure that two mobile core networks (EPCs) are deployed, labelled Operator 1 and Operator 3. The latter provided the EPC (control and user plane) functionalities deployed and demonstrated in this booth. In brief, this experimental demonstration presents the appealing multi-tenancy capability of the distributed NG-POP architecture:

Figure 21 Integration of the vEPC in the global COMBO distributed NG-POP validation

Last but not least, the following picture (Figure 22) depicts the physical setup deployed at Lannion final demo day to conduct the validation of the distributed NG-POP for instantiating the vEPC at the UAG NFV server.


Figure 22 Physical setup at Lannion Demo Day of the vEPC integration within distributed NG-POP validation

3.2.2 Demonstration of the SDN Orchestrator for a Multi-Layer Aggregation Network in the Centralized NG-POP

The basic idea is that, via an SDN orchestrator, the control plane of the vEPC and the controllers used for the multi-domain, multi-technology (packet and optical) aggregation network are coordinated to enable end-to-end mobile service creation between the emulated RAN and the vEPC. This means that whenever a new mobile service is set up, it triggers the SDN orchestrator to create a packet connection (carried in an optical tunnel) within the aggregation network to provide the backhauling towards the vEPC. Recall that in this demonstration the UAG borders the aggregation and core network segments, i.e. it follows the centralized NG-POP approach.

The setup deployed for this demonstration is shown in Figure 23:



Figure 23 Setup for the SDN orchestrator coordinating aggregation network and vEPC control at the UAG located in the centralized NG-POP

All the elements forming this setup are physically located at the CTTC labs in Barcelona (Spain). The main components are:

A video client, the UE and the eNB run on a PC/server. As above, the emulated RAN on this server is provided by the LTE/EPC LENA emulator.

The MLN (multi-layer network) aggregation is a real infrastructure which combines different packet and optical switching domains. It is deployed within the CTTC ADRENALINE testbed, where MPLS and WDM technologies are used for packet and optical switching, respectively. The objective is to use MPLS to transport mobile services from the RAN towards the EPC. The optical tunnels enable a number of MPLS connections to be aggregated and transported more efficiently between the endpoints (i.e., the RAN and the mobile core/EPC). Further details of the multi-layer aggregation network are provided in section 9.6.

The SDN orchestrator is responsible for automatically configuring the different packet and optical domains used to provide the backhauling between the RAN and the vEPC located at the UAG. The implementation of the SDN orchestrator follows the architecture described in the IETF Application Based Network Orchestrator (ABNO) [5]. The orchestrator provides two main interfaces, which are currently defined in the ONF SDN architecture [6]:

o The Application Control Plane Interface (A-CPI), also commonly referred to as the NorthBound Interface, enables the communication between applications and the SDN controller/orchestrator. In other words, this interface is the driver for the programmability of SDN networks. In this demonstration, as shown in Figure 23, the A-CPI sits between the EPC MME and the SDN orchestrator (implemented via a REST API) and is used to request the transport of the mobile services being created. Details of the implementation of this interface are provided in section 9.6.3.



o The Data Control Plane Interface (D-CPI), also known as the SouthBound Interface, provides the configuration of the underlying network equipment (e.g., packet and optical switches). In the demonstration we use two different APIs: the OpenFlow protocol, used to configure the rules and actions for the forwarding in the packet domains, and the Path Computation Element Protocol (PCEP) [7], used to instantiate the establishment of the optical tunnels via distributed GMPLS signalling [8].

The mobile core is implemented via the emulated vEPC provided by the LENA emulator (as in the previous demonstration). This vEPC runs on top of a VM within the UAG placed at the centralized NG-POP location. The EPC MME and S-GW/P-GW are deployed in the vEPC. For the sake of completeness, the MME procedures are enhanced in order to communicate with the SDN controller via the A-CPI interface.

The Internet connectivity provided by the EPC is “emulated” by means of the VLC video server which is used to actually validate the whole setup and operations to be performed.
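The A-CPI request issued over the REST API can be sketched as follows. The payload fields (endpoints A/Z, traffic parameters, eNB and S/P-GW identifiers, TEIDs) follow the POST message shown in the Figure 23 setup, but the exact schema, values and endpoint URL used here are assumptions for illustration.

```python
import json

# Hedged sketch of the A-CPI request the EPC MME sends to the SDN
# orchestrator. Field names mirror the POST payload of the demo setup
# {A, Z, traffic parms (bw, latency), eNB, S/P-GW, TEIDs}; concrete
# values and the REST schema are assumptions.

def build_connectivity_request(a, z, bw_mbps, latency_ms, enb, spgw, teids):
    return {
        "A": a, "Z": z,
        "traffic": {"bw": bw_mbps, "latency": latency_ms},
        "eNB": enb, "S/P-GW": spgw,
        "TEIDs": teids,
    }

payload = build_connectivity_request(
    a="10.0.0.1", z="10.0.0.2", bw_mbps=50, latency_ms=10,
    enb="enb-1", spgw="spgw-1", teids=[1001, 2001])
body = json.dumps(payload)
# A client would then POST `body` to the orchestrator's REST endpoint,
# e.g. with urllib.request or a similar HTTP client.
```
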

The sequence of steps performed in this demonstration is:

1. The video client is shown, but no data (video streaming) is received since the connection between the UE and the video server has not yet been requested. Meanwhile, the video server is continuously transmitting.

2. The connection between the UE and the video server is triggered. This implies a set of sequenced actions automatically executed:

a. The eNodeB to which the UE is attached requests a mobile Bearer establishment (via S1-MME) from the vEPC MME. In this demo setup, these control plane messages are delivered through an out-of-band channel; that is, the S1-MME interface is not transported over the MLN aggregation links and nodes.

b. The MME coordinates the establishment of the user plane interfaces (i.e., S1-U and SGi) with both the eNodeB and S-GW/P-GW. This is done using S1-MME and S11 control interfaces.

c. Once the Bearer establishment at the EPC level, identifying the Tunnel Endpoint Identifiers (TEIDs) at the two endpoints (eNodeB and S-GW), is completed, the real data traffic between the RAN and the EPC (and in the opposite direction) needs to be carried. This means that the eNodeB and the S-GW are connected via the MLN aggregation network. As anticipated, the EPC MME sends a request (via the REST API) to the SDN orchestrator to configure the underlying transport network. Specific details of this step are addressed in section 9.6.3.

3. The SDN orchestrator configures the packet and optical domains to transport the mobile service being set up. This involves:


a. At the ingress and egress packet nodes of the domains connected to the RAN and the UAG, the push and pop of the MPLS tags needs to be performed (via the OpenFlow protocol). This means that the selected MPLS tag is added to incoming GTP-U frames and stripped from outgoing ones, respectively. See section 9.6.2.

b. The MPLS packets are tunnelled over optical connections within the WDM domain. This requires that at the edges of the WDM network, the optical transceivers are configured selecting the WDM nominal frequency.

4. Once the MLN aggregation network is configured, the end-to-end connectivity is established and thus the video client receives the video streamed by the server.
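The push/pop rules of step 3a can be represented as plain data structures, as in the following sketch. This is not the real controller API: the port names, the TEID match field and the label values are assumptions for illustration.

```python
# Illustrative representation (not a real OpenFlow API) of the flow rules
# installed at the ingress/egress packet nodes: push an MPLS label onto
# GTP-U traffic entering the aggregation domain, pop it on exit.
# Matching on the TEID is an assumption for illustration.

def ingress_rule(teid, label):
    """Match GTP-U traffic of a bearer and push the selected MPLS label."""
    return {"match": {"udp_dst": 2152, "teid": teid},   # GTP-U uses UDP port 2152
            "actions": [("push_mpls", label), ("output", "core_port")]}

def egress_rule(label):
    """Match the MPLS label and strip it before delivery at the far edge."""
    return {"match": {"mpls_label": label},
            "actions": [("pop_mpls",), ("output", "edge_port")]}

rules = [ingress_rule(teid=1001, label=30), egress_rule(label=30)]
```
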

For the sake of clarity, this demonstration was conducted remotely from the CTTC premises in Barcelona (Spain). The reason is that the aggregation network used is based on the packet-optical infrastructure available at the CTTC labs, which cannot be physically moved but can be reached remotely (see Figure 24).

Figure 24 View of the CTTC Testbed (remotely reached from Lannion) to conduct the centralized NG-POP validation



3.3 Universal Authentication and Data Path Management (uAUT & uDPM)

This demonstration validates the universal Authentication (uAUT) and the universal Data Path Management (uDPM) entities in the COMBO functional convergence architecture.

Regarding the uAUT, this booth validates one of the implementations that have been proposed by COMBO for this entity. The next subsection describes in detail this part of the demonstration.

Regarding the uDPM, this booth focuses on the Multi-Path Entity (MPE). In a nutshell, this entity allows merging several data flows into a single one. Section 3.3.2 describes this entity and its validation in detail.

3.3.1 Universal Authentication Demonstration

The uAUT is one of the main entities of the Universal Access Gateway. As described in [9], it is responsible for receiving and processing all the authentication queries from the different accesses of the convergent network. Figure 25 below shows the architecture of the uAUT as defined in COMBO:

Figure 25 Architecture of the uAUT


Taking into account the architecture defined in section 2.1 of D3.5 [4], for the demonstration in Lannion in April 2016 we developed and deployed several entities in the NFV server that runs the UAG. The deployment is shown in Figure 26 below:

Figure 26 Hardware setup for the uAUT

In Figure 26, starting from right to left, the first device that can be seen is a piece of user equipment. During the demonstration we used a mobile phone with Android 6 (Motorola Nexus 6).

In the top part of Figure 26, the Remote Radio Unit and the eNodeB can be seen. These two pieces of equipment provide access to the mobile network. In the lower part of Figure 26, a Wi-Fi access point can be found. This access point provides access to the Wi-Fi network.

Following Figure 26 to the left, the next entities are the WR-WDM-PON system and the WS-WDM-PON system. Both are part of the backhaul of the network and are in charge of giving optical access to the UAG. In a nutshell, these systems are composed of an Optical Network Unit (ONU) on the small/macro cell side (the access points are connected to it in order to get access to the network) and an Optical Line Terminal (OLT) in the Central Office (CO). The Lannion fibre ring between the OLT and the ONU simulates the distance between the cell that serves the final users and the CO. These systems are explained in detail in section 3.1.

The next piece of equipment is the Carrier Ethernet switch used for the interconnection of the rest of the elements.

Next, we can see the NFV server. This server runs the functions that compose the UAG; in this section we explain the ones most important for this demonstration. First of all, the virtual switch gives connectivity to the different virtual functions. The Evolved Packet Core (EPC) is part of the mobile network; for this demonstration it has been virtualized and integrated in the UAG. The uAUT is the Universal Authentication entity in charge of the authentications of the access networks; as this demonstration focuses on it, it is explained later. The Wi-Fi GW module is simply the gateway of the Wi-Fi network. Finally, the Common GW is the gateway that provides Internet access to all the virtualized functions of the server.

The last entity in Figure 26 is the Radius server. During the demonstration in Lannion, this server ran in Spain. It is in charge of performing the authentication of the users of the Wi-Fi network. It simulates the Radius server of a Wi-Fi provider that has an agreement with the UAG owner.

Figure 27 shows the eNodeB, the Remote Radio Unit and the Wi-Fi access point used during the uAUT demonstration in Lannion.

Figure 27 eNodeB, Remote Radio Unit, and Wi-Fi AP in the demonstration

After explaining the hardware setup deployed in Lannion, it is important to focus on the functionality of the uAUT in this deployment, as it is one of the most important entities in this demonstration. Figure 28 illustrates its functionality:


Figure 28 Logical connection of the uAUT with the rest of the deployment

Figure 28 shows the logical connection between the Wi-Fi access point and the Universal Authentication entity deployed in Lannion during the authentication phase. The Wi-Fi AP sends the authentication requests to the uAUT, which redirects them to the correct Radius server. During the demonstration in Lannion, the authentication server was running in Spain, on FON’s premises; this server could be anywhere, and there could be more than one. For example, in a real deployment we can assume a scenario where the operator that owns the UAG has agreements with more than one Wi-Fi provider. All the access points would be connected to the same UAG and the same uAUT, and the uAUT would be in charge of redirecting the authentication requests to the correct authentication server. It acts as a proxy.
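A minimal sketch of this proxy behaviour, dispatching on the realm part of the user identity, could look as follows. The realm names and server addresses are invented for illustration and are not those of the actual deployment.

```python
# Minimal sketch of the uAUT proxy role: choose the RADIUS server of the
# right Wi-Fi provider based on the realm of the user identity.
# Realms and server addresses below are made up for illustration.

RADIUS_SERVERS = {
    "provider-a.example": ("radius-a.example", 1812),  # RADIUS auth port
    "provider-b.example": ("radius-b.example", 1812),
}

def route_auth_request(identity):
    """identity is e.g. 'user@provider-a.example'; returns (host, port)."""
    realm = identity.rsplit("@", 1)[-1]
    try:
        return RADIUS_SERVERS[realm]
    except KeyError:
        raise ValueError("no agreement with realm " + repr(realm))
```

In a deployment with several provider agreements, adding a provider amounts to adding an entry to the dispatch table; the access points remain unchanged.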

Once the deployment has been clarified and the new entities have been explained, the next step is to understand what a universal authentication is. A universal authentication has two features:

It is made with the same credentials in all the networks (mobile and Wi-Fi). We use the credentials in the SIM card.

It is made without the intervention of the user. It is a seamless authentication.

In order to use the same credentials to authenticate the user in both networks, we can use different protocols: EAP-SIM, EAP-AKA or EAP-AKA’. Which one is used depends on which is supported by the SIM card. During the Lannion demonstration we used EAP-AKA.

On the other hand, in order to implement the seamless authentication in the Wi-Fi network we use Hotspot 2.0 technology, as explained in previous deliverables.

The showcase of this booth starts with the authentication of the UE (mobile phone) in the mobile network. After that, we start a Skype call with another device in the room; this shows that the connection works in real time. Once the call is started, we enable the Wi-Fi interface of the mobile phone. When the mobile phone detects the Wi-Fi access point, thanks to Hotspot 2.0 it connects to the Wi-Fi network with the credentials of the SIM card. The Skype call continues through the Wi-Fi interface as the mobile interface is switched off automatically by the operating system. After a few seconds we disable the Wi-Fi interface, and the Skype call continues through the mobile interface. Once we have shown that the Skype call is active, we enable airplane mode; the Skype call stops as there are no active interfaces. With this showcase we want to show that the UE connects to both networks and that the links are active.

Figure 29 summarizes how the different entities are involved in this demonstration. The architecture shown is the one deployed during the demonstration in Lannion. The solid blue line shows the data flow for the Wi-Fi network, while the dotted blue line shows the control flow for the Wi-Fi network. The pink line shows the data flow for the mobile network:

Figure 29 Logical connection of the uAUT with the rest of the deployment

3.3.2 Multi-Path Entity Demonstration

A mobile phone today has several interfaces for connecting to different networks, but they are used only one at a time. Furthermore, the decision about which connection to use is made at the UE, according to user settings. Hence, the traffic is optimised from a user perspective, which most often does not give the best load balancing from a network perspective. If the balancing is instead done by a decision engine in the UAG, as described in Deliverable 3.5, section 2.1, the utilisation of the infrastructure can be much improved.

In this setup the UE is connected to the UAG via two separate paths, one over LTE and one over Wi-Fi. It makes requests to one of two content servers, one standalone laptop and one VNF. Each request is a TCP session and all packets in the same session use the same path. That is, as the session request is forwarded through the network, all the remaining packets, in both directions, use the same path. For network load balancing this is a reasonable situation, and on average the traffic is distributed according to the settings. To simulate the network behaviour, we used one UE and many short sessions, instead of several UEs with long sessions.
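The per-session balancing described above can be sketched as a weighted random choice made once per new session. The path names and weights are illustrative; the weights (1, 2) mirror the 1/3 LTE, 2/3 Wi-Fi example used later in the demo.

```python
import random

# Sketch of per-session path selection: each new TCP session picks one
# path according to the configured weights and keeps it for all its
# packets. Path names and weights are illustrative.

def choose_path(rng, weights):
    paths = list(weights)
    return rng.choices(paths, weights=[weights[p] for p in paths], k=1)[0]

rng = random.Random(42)                       # seeded for reproducibility
picks = [choose_path(rng, {"lte": 1, "wifi": 2}) for _ in range(3000)]
share_wifi = picks.count("wifi") / len(picks)  # close to 2/3 over many sessions
```

With many short sessions, the observed share per path converges to the configured weights, which is exactly the averaging behaviour exploited in the demonstration.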

The UE is a standard Linux laptop with a USB LTE modem (Huawei E3372), connected to the virtual EPC described in section 3.3.1. As is common for mobile modems, the USB stick implements NAT functionality, giving the local address 192.168.8.100 and the default gateway 192.168.8.1. At the UE laptop, it connects to the interface eth4. The global address assigned by the EPC is 10.100.110.103. The Wi-Fi connection is established through the built-in Wi-Fi of the laptop. It connects to the AP described in section 3.3.1 through the Wi-Fi gateway in the UAG. At the UE, the interface is wlan0 with address 10.100.120.24 and default gateway 10.100.120.1; see Figure 30.

Figure 30 The setup for multi-path management.

When the UE initiates a session, e.g. a TCP request to a server on the Internet, one of the two paths is chosen. At the UAG, the packet is routed by the Common GW to the Multi-Path Entity (MPE), which has the IP address 172.18.0.2. The forwarding rule in the Common GW is that all packets originating from the UE, i.e. with either source address 10.100.110.103 or source address 10.100.120.24, are forwarded to the MPE. The source address is then replaced by a NAT function with 172.18.0.2, and the packet is sent back to the Common GW. Since the source is now not the UE, the packet can be routed towards the Internet and the destination.

The request arriving at the Content Server (CS) is answered towards the MPE, 172.18.0.2. Thus, on arrival at the Common GW, the reply is forwarded to the MPE. In Linux, both the routing and the firewall are stateful, i.e. all packets in the same session are routed along the same path. Hence, when the NAT function, which is part of the firewall, translates the source address, it saves an entry in its cache. When the reply comes back, it looks up the entry and changes the destination address to the correct one of the two UE addresses. Then the packet can be passed back to the Common GW, which forwards it to the UE. Similarly, for the remaining packets from the UE, the routing cache sets the same source interface and address. Thus, all packets in the session have the same route between the UE and the destination.
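The stateful NAT behaviour can be modelled with a toy session cache, as sketched below. The addresses are those of the demo setup; the class itself is illustrative only, not the actual Linux conntrack implementation.

```python
# Toy model of the stateful NAT described above: outgoing packets get
# their source rewritten to the MPE address and a cache entry is saved;
# replies are matched against the cache and the original UE address
# restored. Addresses follow the demo setup.

MPE_ADDR = "172.18.0.2"

class StatefulNat:
    def __init__(self):
        # (remote addr, remote port, local port) -> original UE source
        self.cache = {}

    def outbound(self, src, sport, dst, dport):
        """SNAT an outgoing packet and remember the session."""
        self.cache[(dst, dport, sport)] = src
        return (MPE_ADDR, sport, dst, dport)

    def inbound(self, src, sport, dst, dport):
        """DNAT a reply back to the UE address saved for this session."""
        ue = self.cache[(src, sport, dport)]
        return (src, sport, ue, dport)
```

A request from the Wi-Fi address of the UE to a content server thus leaves the UAG with source 172.18.0.2, and the reply is translated back to 10.100.120.24 before being forwarded by the Common GW.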

This means the path selection for a session is done at the UE. However, contrary to the state of the art today, the paths can be used simultaneously. To manage the balancing settings from the UAG decision engine, a management path from the UAG to the UE is needed. In this setup we have chosen to use the Wi-Fi path, addressed using TCP port 4444. The LTE path cannot be used, since the session is addressed from the outside and the USB modem contains a NAT which does not forward the packets. In a commercial implementation it should be possible to use both paths, more thoroughly separated from the data plane traffic, e.g. using VLANs.

In this setup, we have concentrated on the multi-path connectivity and not on the decision engine. Hence, instead of a decision engine we use a GUI at the MPE that lets us set the balance of the load over the paths, as seen in Figure 31. The left window is the GUI at the MPE, and the right window is the management window at the UE. When the balancing is set, e.g. 1/3 over LTE and 2/3 over Wi-Fi, the command ‘Set 1 2’ is transmitted to the UE, where the router settings are altered accordingly:

Figure 31 Management window at UAG (left) and the UE (right)

To accomplish the multi-path selection at the UE, the router settings utilise load balancing on the default gateway. This also sets the source address to the corresponding IP address at the UE. Figure 32 shows the initial router settings: first, the current default gateway is deleted, and then balancing over the two connections is established. At the initialisation shown in Figure 32, equal balancing is set:

ip route del default

ip route add default nexthop via 192.168.8.1 dev eth4 weight 1 nexthop via 10.100.120.1 dev wlan0 weight 1

ip rule add from 192.168.8.100 table 1

ip rule add from 10.100.120.24 table 2

ip route add 192.168.8.0/24 dev eth4 scope link table 1

ip route add default via 192.168.8.1 dev eth4 table 1


ip route add 10.100.120.0/24 dev wlan0 scope link table 2

ip route add default via 10.100.120.1 dev wlan0 table 2

Figure 32 Router settings at the UE
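The commands generated when the MPE GUI issues e.g. ‘Set 1 2’ can be built as in the following sketch. The gateway and interface names are those of the demo setup (Figure 32); the helper function itself is hypothetical and not part of the COMBO code.

```python
# Hypothetical helper that builds the UE router commands triggered by
# 'Set <w_lte> <w_wifi>' from the MPE GUI. Gateways and interfaces are
# those of the demo setup; the function is illustrative only.

def balance_commands(w_lte, w_wifi):
    return [
        "ip route del default",
        ("ip route add default "
         "nexthop via 192.168.8.1 dev eth4 weight {} "
         "nexthop via 10.100.120.1 dev wlan0 weight {}").format(w_lte, w_wifi),
    ]

cmds = balance_commands(1, 2)   # 1/3 over LTE, 2/3 over Wi-Fi
```

The kernel distributes new sessions over the nexthops in proportion to the weights, which is exactly the knob the MPE GUI exposes.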

Apart from the router settings controlling the output, the input must also be controlled. Normally there is a spoofing filter in the router setup that discards any received packet that does not have a well-defined return address. In order to guarantee such a return path, policy routing rules have been added. In essence, if the source address of an outgoing packet is 192.168.8.100, then routing table 1, with default gateway 192.168.8.1, is chosen, and if the source is 10.100.120.24, then routing table 2, with default gateway 10.100.120.1, is chosen. With this, the management path from the UAG to the UE for altering the router settings also has a well-defined return path.

As described earlier, packets stemming from the UE have their source address changed to the UAG address 172.18.0.2. This is accomplished by the firewall commands shown in Figure 33:

iptables -t nat -A POSTROUTING -s 10.100.120.24 -j SNAT --to-source 172.18.0.2

iptables -t nat -A POSTROUTING -s 10.100.110.103 -j SNAT --to-source 172.18.0.2

Figure 33 Set address translation (NAT) for sessions initiated at the UE

The above settings provide centralised control of the traffic balancing over the established connections between the UE and the UAG for sessions initiated from the UE. In most applications the sessions are initiated from the UE, but there are also cases where they are initiated from the outside, e.g. if the UE has files that should be reachable or runs a web server. In those cases, the firewall in the UAG can balance in the NAT system using the commands in Figure 34. However, since the LTE USB modem contains a NAT itself, without the possibility of port forwarding, it is not possible to initiate sessions over this path to the UE. Hence, in the demo setup this step was omitted:

iptables -t nat -A PREROUTING -d 172.18.0.2 -m statistic --mode random --probability 0.5000 -j DNAT --to-destination 10.100.110.103

iptables -t nat -A PREROUTING -d 172.18.0.2 -j DNAT --to-destination 10.100.120.24

Figure 34 Firewall settings in the UAG to allow balancing of sessions initiated from the outside to the UE.
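The probability values used with the iptables statistic module follow from the sequential rule evaluation: for equal balancing over N destinations, rule i must match 1/(N - i) of the traffic that reaches it, with the last rule catching the remainder. A small sketch (the helper function is illustrative):

```python
# Derivation of per-rule match probabilities for equal N-way DNAT
# balancing with iptables' statistic module. Rules are evaluated in
# order, so rule i (0-indexed) must match 1/(N - i) of the remaining
# traffic; the demo's two-way case gives 0.5 for the first rule and an
# unconditional second rule.

def statistic_probabilities(n):
    return [1.0 / (n - i) for i in range(n)]
```

For two paths this yields [0.5, 1.0], matching the Figure 34 rules; for three paths it would yield [1/3, 1/2, 1], each destination still receiving a third of the sessions overall.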

To demonstrate the load balancing of requests from the UE, two different content servers were used: one standalone with address 10.1.10.62, and one implemented as a VNF in the UAG with address 10.10.10.6. In the description below the virtual content server is used.

At the content server a request application is launched, see Figure 35. It receives TCP requests in the form of text messages and returns new text messages. At the UE, the transmit application in Figure 36 sends a sequence of TCP requests to the content server. In the example in Figure 36, requests are sent periodically with a gap of 100 ms, meaning that about ten times a second a TCP session is established and a message is exchanged with the content server:

Figure 35 The request application at the content server

Figure 36 Transmit application at the UE, sending a sequence of requests to the content server

To see how the traffic is balanced over the two paths, the python script Speedometer is used at the UE, as shown in Figure 37. The total traffic is shown per interface: the upper two plots are for the LTE connection and the lower two for the Wi-Fi connection. Similarly, the two leftmost plots show the uplink traffic, while the two rightmost show the downlink traffic. The settings for the system in the figure are such that 1/3 of the traffic should go over the LTE connection and 2/3 over the Wi-Fi connection. During the demonstration, the settings were altered in the MPE management GUI and the traffic balancing was seen to change accordingly:

Figure 37 Traffic monitoring for the test setup

In Figure 38 the setup in the UAG is summarized. The two paths between the uDPM-MPE unit in the UAG and the UE represent the LTE and Wi-Fi paths. Similarly, the paths to the right of the uDPM-MPE represent the connections to the two content servers (CS).


Figure 38 Setup for the MPE demonstration.

3.4 Universal Data Path Management (uDPM) & Caching

In this booth two functional demonstrations are presented:

Network controlled offload and load balancing with smooth handover between LTE and Wi-Fi networks, and

In-network content caching and content prefetching initiated by decision engine.

The Decision Engine (DE) is part of the uDPM functional block of the UAG. In these demonstrations, it is applied to manage Wi-Fi offloading by controlling interface selection and to assist with content caching decisions.

In the caching test case, we show how caches are enabled in the network and how content distribution is managed by the Cache Controller (CC), co-located with the UAG. The CC interacts with the DE in order to enhance caching/prefetching efficiency.

3.4.1 Network Controlled Offload and Smooth Handover

This demonstration showcases the capability of the DE to perform load balancing based on the current load and availability of multiple access networks with the objective of pushing traffic away from LTE towards Wi-Fi networks.

The demo setup consists of the following components:


Two UEs, both laptops with an LTE and a Wi-Fi interface. Both run a Multipath TCP (MPTCP) enabled Linux kernel.

Two Wi-Fi networks, each consisting of a single AP and a Wi-Fi gateway (providing a DHCP server and IP masquerading) hosted on the NFV server.

An LTE network consisting of a single eNodeB and a virtualized EPC hosted on the NFV server.

The DE, hosted on the NFV server.

An MPTCP enabled Content Server, also hosted on the NFV server.

Figure 39 highlights these elements inside the full integrated demo setup. The APs and the eNodeB are connected via Ethernet to WS-WDM-PON and WR-WDM-PON ONUs, the user-side equipment of the PONs, which provide connectivity to the NG-PoP location where the NFV server resides. The two UEs, coloured yellow, are able to connect to any of the Wi-Fi networks or the LTE network. The dashed lines represent control connections, while the continuous lines represent data connections.

Figure 39 Components involved in the Network Controlled Offload demonstration

The UEs run a client application that allows their interface selection to be remotely controlled by the DE (see Figure 40). The interface to be used, and the network to connect to on that interface, are decided by the DE:


Figure 40 Interface selection client application

Both the UEs and the Content Server support MPTCP, which is an extension of regular TCP. MPTCP is able to spread data transmission over multiple interfaces using subflows. A subflow looks like a regular TCP flow whose segments carry a new TCP option type. While the UEs switch between access networks, the use of MPTCP mitigates connection interruption. If both LTE and Wi-Fi coverage is available and the DE selects e.g. a Wi-Fi network to be used by the UE, then the UE stays connected to the LTE network as well, but it sets the LTE interface to be a backup path for MPTCP. This means that MPTCP has a subflow established over this interface as well, but does not use it to transmit data. In other words, all TCP connections have two subflows established, one over the LTE interface and one over the Wi-Fi interface. As long as one of the interfaces stays connected, session continuity is ensured.
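With the out-of-tree MPTCP Linux kernel (mptcp.org), marking an interface as a backup path is a per-device setting; a sketch, assuming the LTE interface is named lte0 (the name is an illustration, not from the demo):

```shell
# Assumption: mptcp.org out-of-tree kernel; interface name lte0 is a placeholder.
# Mark the LTE interface as an MPTCP backup path: a subflow is still
# established over it, but it carries data only if the other paths fail.
ip link set dev lte0 multipath backup

# Restore it as a regular data-carrying path:
ip link set dev lte0 multipath on
```

This matches the behaviour described above: the backup subflow keeps the connection alive across a handover without contributing bandwidth while the primary path is up.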

The DE monitors the availability of the three access networks and their utilization. It periodically re-computes the UE-to-access-network assignments. If these change compared to the previous computation, the UEs are notified to perform the change. The objective of the DE is to balance the load on the access networks; moreover, it is configured to prefer Wi-Fi networks over LTE (offloading). If the total utilization is low, its aim is to consolidate the assignments by moving the UEs onto the fewest access networks necessary, leaving the others empty.

The script of the demo is the following:

Both UE 1 and UE 2 are connected to the same Wi-Fi network.

UE 1 starts to stream a video from the Content Server over HTTP. The buffer size is set to be very low (1-2s) in order to be able to see the effect of network connectivity interruptions.

A bit later, UE 2 starts to download a large file from the Content Server in order to saturate the Wi-Fi link.

The DE notices that there is a bottleneck and decides to move UE 1 to a different access network (either the 2nd Wi-Fi network or LTE). For further details, please see the test case descriptions (Section 9.4).


3.4.2 In-Network Content Caching

The first test case of content caching is designed to show how the caching works.

Figure 41 Caching/prefetching test case deployment

As shown in Figure 41, the cache is enabled in the caching AP, called Cache AP in the diagram, which is a wireless AP with routing and caching functionality. Caching is also enabled near the UAG inside the NG-PoP. The Cache Controller, running as a VM in the NG-PoP, is used to configure and manage the caches distributed in different parts of the network. The whole architecture of the implemented caching system is shown in Figure 42:


Figure 42 COMBO caching/prefetching system architecture

The cache control and management operated by the COMBO Cache Controller (CC) is based on the Network Configuration Protocol (NETCONF). NETCONF provides mechanisms to install, update, and delete the configuration of network devices, such as routers, switches, and firewalls. It uses Extensible Markup Language (XML) or JavaScript Object Notation (JSON) encoding for both the configuration data and the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). Netopeer, a set of NETCONF tools built on the libnetconf library, is used in this demo. It allows operators and developers to connect to their NETCONF-enabled devices and control them via NETCONF. In our system, the Netopeer client runs on the CC, and a Netopeer server runs on each CacheAP.
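As an illustration, a cache configuration change would travel from the CC to a CacheAP as a NETCONF edit-config RPC. The RPC framing below follows RFC 6241; the cache data model (the urn:combo:cache namespace and its elements) is a hypothetical sketch, not the project's actual model:

```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target>
      <running/>
    </target>
    <config>
      <!-- Hypothetical cache data model; element names are assumptions. -->
      <cache xmlns="urn:combo:cache">
        <max-size-mb>512</max-size-mb>
        <policy>lru</policy>
      </cache>
    </config>
  </edit-config>
</rpc>
```

The Netopeer server on the CacheAP would apply such an edit-config to its running datastore and reconfigure the LMP accordingly.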

The overall software system of COMBO cache test case is composed of three parts as shown in Figure 42:

Local caching Management Primitive (LMP) on the CacheAP: manages the behaviour of the local cache node.

Netopeer server on the CacheAP: manages the configuration of the LMP.

Cache Controller, which includes a Cache Controller Daemon (CCD), a DataBase (DB) and a Netopeer client to manage and control all the caches distributed in the network, where:

o The Netopeer client allows the CC to manage the cache nodes (CNs) via the NETCONF protocol.


o The CCD is responsible for communicating with NETCONF and external modules in an automated way; it is able to configure the CNs and send control messages to the LMP automatically.

The DB stores the full information on the CNs, the user-requested content, and the content cached by the LMP on each CN.

The basic function of the CC is to separate the management of content flow and placement from the management of the content delivery infrastructure (e.g., forwarding and caching equipment). In this way, content service providers can concentrate on improving their service quality while letting network operators take care of the configuration of the underlying network. Conversely, network operators can offer their content delivery infrastructure and its configuration interface to content service providers as a service, in order to increase the operators' revenue.

3.4.3 Content Prefetching Initiated by DE

In Figure 42, we further show the collaboration between uDPM and the COMBO content delivery service, mainly represented by the communication between the DE and the CC. The CC must be able to receive certain information about the user profile and network performance from the DE in order to make an optimized caching decision. Conversely, the CC must provide the caching decision to the DE, which can take the content location into account when making an interface selection decision for an end user. When the DE detects a handover of a user from one network interface to another, the CCD is informed in order to perform a prefetch operation on the CacheAP that the user will switch to. The main procedure is as follows:

The CCD receives a handover signal for a user of the form (userID, userIP, fromCacheAPIP, toCacheAPIP), indicating that the user with the unique identifier userID and IP address userIP will switch from the CacheAP with IP address fromCacheAPIP to the CacheAP with IP address toCacheAPIP.

The CCD searches the DB to find the URL currently being played by the user.

The CCD asks the CacheAP with the toCacheAPIP address to prefetch the following URLs for the user.

The DE provides the client information, including IP_client, NETMASK and IP_Gateway. The prefetch steps are as follows:

The UE experiences a network connection issue.

The DE detects this, or orders the UE to make a network interface handover.

The DE informs the CC of the handover, i.e. which CacheAP the UE switches from and which it switches to.

The CC checks the DB to find the connected CacheAP and the URLs requested by the UE.

The CC creates a prefetch XML file with the chunk currently being played and the chunks that will be played next.


The CC asks the CacheAP that the UE will switch to, to prefetch the video URLs requested by the UE.

When the UE switches to the target CacheAP, the video chunks have already been cached there, and video playback continues without interruption.
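The prefetch XML file mentioned above could take a shape like the following; the element names and chunk URLs are entirely hypothetical, sketched only to make the mechanism concrete:

```xml
<!-- Hypothetical prefetch file; schema and URLs are assumptions,
     not the project's actual format. -->
<prefetch user="userID-42">
  <playing url="http://cs.example/video/chunk_0007.ts"/>
  <next url="http://cs.example/video/chunk_0008.ts"/>
  <next url="http://cs.example/video/chunk_0009.ts"/>
</prefetch>
```

The target CacheAP would fetch the listed chunks before the UE arrives, so playback resumes from its local cache.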


4 Final Demonstration in Lannion

The purpose of the so-called final demonstration was to jointly conduct a set of experiments covering a number of network functions deployed within the UAG's NFV server. For this purpose, a set of integrated experiments was performed on the distributed COMBO NG-POP scenario and is presented in this section.

One of the main goals of this multi-partner activity was to verify the support of different VNFs instantiated in the NFV server, where the (control and data) flows from the access devices are transported over the WDM-PON approaches towards the UAG, addressing selected COMBO FMC objectives. The set of functions and features considered is:

A pair of VNFs for two separate vEPCs (referred to as vEPC1 and vEPC2).

The uAUT function enabling the seamless authentication of a UE connected to either mobile or Wi-Fi access.

The uDPM DE function used to instruct a UE to select one access network (e.g., Wi-Fi) in preference to another (e.g., mobile).

Efficient caching decisions coordinated by the uDPM DE.

The above functionalities were validated "in-situ" during the Lannion final demonstration day and are reported in the following. The collected numerical results consist of basic throughput and latency measurements of the different deployed functions, measured at both endpoints: the UE and the UAG's Common Gateway.

4.1 Demonstration Setup Description

A seamless synchronization between the cloud environment and the physical infrastructure is achieved, as presented in the network diagram (Figure 43). The carrier Ethernet switch depicted (ADVA FSP 150EG-X) acts as a VLAN cross-connect, isolating the access connection types into VLANs and tagging them with separate IDs ranging from 110 to 160. The interconnection channel supporting network function chaining is handled in VLAN 1000, whereas service VLAN 500 carries the validation applications used (e.g., video and content servers):


Figure 43 Experimental validation networking setup

The OpenStack platform was selected to deploy the UAG's NFV server due to its on-demand resource deployment and configuration service features. Its support for allocating various computing and networking resources for each hosted use case/test, and for isolating them into separate projects, made it ideal for our planned setup. Maintaining the VLAN setup and addressing continuity from the physical infrastructure inside the NFV server was accomplished by configuring the OpenStack NFV server (i.e. the VMs) to have access to the provider network through an Open vSwitch (OVS) deployment. For the sake of completeness, a 10G optical line card was set up to connect the carrier Ethernet switch and the NFV server, to effectively handle the user data plane traffic for all targeted use cases.
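Exposing a tagged provider VLAN to VNF instances in this way can be sketched with the OpenStack client; the physical-network label physnet1 and the subnet range below are placeholders, not values from the demo:

```shell
# Sketch (assumptions: physnet1 label and 172.18.0.0/24 range are placeholders).
# Create a provider network mapped to VLAN 500 on the physical infrastructure,
# so VMs attached to it see the same VLAN as the carrier Ethernet switch.
openstack network create svc500 \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 500
openstack subnet create svc500-sub \
  --network svc500 \
  --subnet-range 172.18.0.0/24
```

Because the network is a provider network rather than a tenant overlay, addressing continuity with the physical infrastructure is preserved end to end.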

4.2 Experimental Validation and Results

In this section, we briefly summarize the set of tests that were conducted to validate specific FMC COMBO objectives. In the first test, two independent instances of the mobile core (referred to as vEPC1 and vEPC2) are deployed as VNFs on the NFV server. The purpose is to show the capability of the UAG architecture to support multi-operator network functions. In other words, the designed UAG architecture exploits the virtualization of network functions in the NFV server, which allows deploying similar but isolated network functions (operated by different network operators) on top of the same physical infrastructure. This feature, addressed in WP3 T32 [4], paves the way for deploying interesting network sharing/slicing strategies on COMBO's UAG architecture. For the test, we consider the LTE/EPC S1 interface traffic (including both control and data), which is backhauled from the LTE eNBs over the WDM-PON infrastructure towards the corresponding vEPCs (via the assigned VLANs 110 and 150).

Another (and different) access network used for the joint demonstration is a Wi-Fi AP. This device is connected through the same WDM-PON access infrastructure to the NFV server using VLAN ID 120:

Figure 44 LTE attach procedure (Wireshark capture trace)

The purpose of using the Wi-Fi AP is to show that the authentication process operates seamlessly for both Wi-Fi and one of the LTE/EPC instances used in the previous test. To that end, the same SIM credentials are used to transparently and seamlessly authenticate both access technologies (i.e., mobile and Wi-Fi). In this test we capture, as shown in Figure 44, the entire LTE user attach procedure. The whole process takes, on average, 650 ms, which includes the user authentication phase via the uAUT VNF that took 279 ms. When a Wi-Fi switchover is performed, the Wi-Fi AP sends the connection request to the uAUT server, which processes the request. The uAUT accepts the connection based on the credentials retrieved from the HSS component of the vEPC. In this case, the authentication over Wi-Fi took 10 ms.

The third test case reports the offloading and handover process enabling a UE to use the network resources efficiently. A UE, running an interface selection client, is connected simultaneously to both an LTE access and one of two Wi-Fi networks. A uDPM DE VNF (see Figure 43) is deployed on the NFV server. Using a custom API, the uDPM DE provides information to the UE regarding the access selection. In other words, the network, via the UAG's uDPM DE, is responsible for instructing the UE (e.g., based on the network state) to select one interface (i.e., access technology) in preference to another. In this context, a set of scenarios was executed outlining the automatic and seamless handover process, e.g., from LTE to Wi-Fi, and from Wi-Fi1 to LTE to Wi-Fi2. Such handovers are achieved without service interruption, which is ensured by the use of the MPTCP function in the NFV Server:


Table 3 Client performance test results

             Throughput (Mb/s)     Latency (ms)
             Uplink    Downlink    Min.     Avg.     Max.
LTE1         43.5      45.4        16.94    18.01    21.80
LTE2         26.4      55.1        40.79    53.92    68.76
Wi-Fi        63.5      72.1        1.72     2.382    3.19
Fixed line   676       781         0        0        1

For the caching (fourth) test, an SDN-based Cache Controller VNF is instantiated on the NFV server. It decides where (e.g., at the access network device, the CacheAP, or at the dNG-POP) to cache/prefetch the content. The Cache Controller and the UAG's uDPM DE coordinate to instruct any UE to connect to a different CacheAP whenever the QoS is degraded, due to congestion in the CacheAP node or for mobility reasons. As in the previous tests, MPTCP is adopted to prevent connection interruptions.

Two figures of merit were collected for the above tests: throughput (both downlink and uplink) and latency, measured at both the UE and the common gateway of the NFV server (see Figure 4). The results are shown in Table 3. In light of the above joint demonstrations, we demonstrate one of the main goals of the COMBO UAG and NG-POP concepts: an architecture in which multiple, heterogeneous network functions (control and data plane) are hosted for the sake of attaining feasible FMC. In this regard, the UAG architecture is capable of converging (i.e., terminating and processing by means of generic functions) flows originating in a number of different access technologies, such as fixed, mobile and Wi-Fi.


5 Results, Lessons Learned, and Recommendations from Integrated Demonstration

The overall project result, in light of the experience gained in the final demonstration conducted in Lannion (April 28th), is that the functional and structural convergence solutions devised in COMBO and reported in sections 3 and 4 were experimentally validated. WP6 activities aimed at developing and demonstrating architectural COMBO solutions for structural and functional convergence. This macroscopic WP6 objective was accomplished within the integrated network scenario available in Lannion, including a deployed fibre ring. Thus, one may state that, after demonstrating the planned FMC test cases, the solutions proposed by COMBO are feasible and are candidates to be rolled out in a network operator infrastructure with the purpose of converging its network functions and infrastructure to support multiple access networks, services and technologies.

The above is a summarized, unified conclusion drawn from the results of all the individual and integrated test cases. Nevertheless, the conducted demonstrations are disparate, addressing different COMBO architectures and implementations, e.g. WS/WR WDM-PON, WDM-centric, distributed NG-POP, centralized NG-POP, uAUT, uDPM, etc. In the following, we provide the key results along with lessons learned and recommendations after integrating the listed COMBO objectives in the demonstration setups.

WS-WDM-PON

The results of the evaluation of the WS-WDM-PON system show that it is a very interesting convergent access and aggregation architecture candidate for realizing the transport network of the distributed NG-POP concept in brownfield fibre deployments based on power splitters, especially in urban areas. The proof-of-concept demonstration corroborates that the system is compatible with current GPON deployments. Thus, it is a very smart solution for bringing fixed and mobile access to residential customers, using GPON technology together with WDM-based mobile backhaul and fronthaul services, with operators sharing the same access infrastructure and thereby reducing CapEx and OpEx. Although it requires amplification at the OLT for fibre distances of about 20 km, mainly due to the high insertion loss of power splitters, WS-WDM-PON may in the future target even larger distances of the order of 40-50 km. With regard to the implemented control plane, which uses a dedicated channel in an out-of-band procedure to assign available channels to each client, this could be improved in the future by using SFP+ modules at the ONU with Auxiliary Management and Control Channel (AMCC) capabilities, as also recommended for NG-PON2. With regard to CPRI fronthaul support, it is important to highlight that the implemented control plane uses control frames embedded in the 10G Ethernet frames in the IBM phase; it is thus more suitable for pure Ethernet traffic than for fronthaul based on CPRI. Moreover, fronthaul needs link quality monitoring techniques, which are not currently supported by the developed control plane. However, this is only true for a fronthaul link based on CPRI. In the case of another fronthaul protocol based on Ethernet, the developed control plane could support it without any problem. This is of extreme importance for the future 5G network, as CPRI presents serious scalability issues in supporting the 5G evolution. Using Ethernet in the 5G fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from the use of lower-cost equipment and shared use of infrastructure with fixed access networks, to statistical multiplexing gains and optimized performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high bit-rate requirements from the transport of increased-bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. In this light, novel Ethernet-based protocols are being proposed, such as CPRI over Ethernet (IEEE P802.1CM), as well as new fronthaul functional splits that could alleviate the most demanding bit-rate requirements by transporting baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains.

WR-WDM-PON

Although the WR-WDM-PON based on a tunable laser and a centralised wavelength locker showed its feasibility in the field demonstration, bringing down the cost of tunable laser diodes remains the major future challenge. The implementation of a centralized wavelength locker in the WR-WDM-PON is one of the key steps to significantly reduce device cost, as a dedicated wavelength locker for each laser can be spared. Additionally, there are possibilities to further reduce the cost: optical parameters and packaging specifications could be relaxed, and the effort spent on laser calibration reduced. So far, low-cost tunable lasers are not commercially available for access or fronthaul applications on a wide scale.

The wavelength tuning speed of the ONU laser could also be improved in the next prototype. Although the tuning speed specification is still under discussion in the standardization group, some approaches could be considered to increase it. For example, instead of dedicating the full C-band to the upstream channels, only half of the C-band could be used (with the other half for the downstream channels), so that the ONU laser does not need to sweep the entire C-band during initialisation, effectively doubling the tuning speed. This could also ease the design constraints of the laser diode. Moreover, since currently a single communication channel is broadcast to all the ONUs for wavelength control, more active ONUs lead to a longer initialisation period. A possible solution is to give each wavelength channel an individual communication channel, so that the wavelength of each ONU laser is monitored and controlled in parallel. As a result, the tuning speed of the ONU laser would no longer depend on the number of ONUs.

DWDM-Centric Solution

During the course of the project, it has become clear that future converged networks will need to support not only new applications with stringent bandwidth, delay, and availability requirements, but should also be highly scalable to connect a huge number of diverse devices, e.g., 50 billion connected devices in 5G networks. This means that relying on overprovisioning the networks to support peak-rate demands, as is done in legacy networks today, will not be an option, as it would incur huge OPEX and CAPEX. Instead, future networks need embedded flexibility and resource usage monitoring capability to react instantly to resource demand changes in the network. In the DWDM-centric demo, which was part of the distributed NG-POP (dNG-POP) booth in the final integrated demo, we designed and implemented an SDN-based transport controller and radio controller with a multi-domain orchestrator to orchestrate the resources in the radio and transport domains. We also implemented an application called “Elastic Broadband Mobile Service” (EMBS), and validated the flexibility and programmability of the solution by demonstrating its ability to monitor the traffic demand in different parts of the network, dynamically allocate RAN and transport resources when demand arises, and release the allocated resources when they are no longer used.

Centralized NG-POP Architecture

The definition of the centralized NG-POP architecture by COMBO is sufficiently wide to support multiple implementations. The common features are: a UAG located at the Core CO (between the aggregation and the core) and exploitation of the SDN and NFV networking concepts [4]. Herein, we have provided a possible implementation / proof-of-concept of a centralized NG-POP setup compliant with those requirements, achieving convergence from a twofold perspective:

1. Structural, in the aggregation network segment, where a multi-layer (packet and optical switching) infrastructure seamlessly transports any traffic service flow (mobile and fixed).

2. Functional, where a unified control plane for all services (fixed and mobile) is deployed in a centralized SDN orchestrator. This element is able to automatically serve (compute and allocate) the resources for backhauling (fixed and mobile) services. To this end, the SDN orchestrator is able to coordinate a number of independent control plane technologies. In the deployed and validated demonstration, we jointly orchestrated the LTE/EPC system (a VNF in a potential core DC) and the multi-layer aggregation network. By doing so, mobile bearers are dynamically set up and transported between the access and mobile core domains through the aggregation network.

The benefits of doing the centralized NG-POP approach is that backhaul resources within the aggregation network are allocated in a more effective way according to the traffic necessities of mobile (and fixed) services. The deployed SDN orchestrator allows validating this goal, which may represent an appealing capability for the network operators when deploying their future fixed mobile convergent network and functions. Intrinsic to the above advantage, it is worth stressing the fact that control plane entities (traditionally separated) are being integrated and unified which in turn may lead not only enhancing the use of the network resources for different technologies and network segments but also the possibility to reduce OpEx, provide differentiated services,

Page 74: COMBO Deliverable D6.3 - Report describing results of

Grant Agreement N°: 317762

D6.3 – Report describing results of operator testing, capturing lessons learned and recommendations

Doc. ID: COMBO_D6.3_WP6_2016-07-22_ADVA_v1.0.docx Page 74 of 192 Version: 1.0

increase availability, etc. On the other hand, compared with the distributed solution, the centralized approach may present some drawbacks: a single element/entity processes the control and data plane messages, which may restrict the scalability of the whole centralized approach. An extensive comparison from both control and data plane perspectives, considering both centralized and distributed approaches, is thoroughly discussed within COMBO T3.2 and reported in [4].

Universal Access Gateway (UAG) & Network Function Virtualization (NFV) Server

The use of the NFV server in both the distributed and centralized NG-POP approaches provides a number of benefits associated with the migration of network functions from (traditional) dedicated appliances towards VNFs running on cloud resources. One of the main advantages is improved vendor interoperability through the removal of vendor lock-in. Another appealing feature shown in the set of demos after adopting the NFV concept is the possibility to perform and exploit network sharing (see [4]). The idea is to instantiate different VNFs for independent network operators on top of the same UAG's NFV server. For instance, and as demonstrated above, in the distributed NG-POP we instantiated two independent vEPC implementations. These may represent the mobile core functions of separate mobile network operators supported by a single network provider owning the UAG element. The vEPCs are completely isolated from each other, and the allocated resources in terms of memory, CPU, etc. are tailored according to the needs of each EPC network.

Therefore, adopting NFV for implementing the NG-POP approaches not only facilitates convergence (VNFs for fixed and mobile services are deployed within the same infrastructure) but also enables an appealing multi-tenancy capability, where the same physical resources can be partitioned and provided to independent network services / operators (e.g., the instantiation of multiple, isolated vEPCs for different operators).
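
The tenancy model described above can be sketched as resource partitioning on the NFV server. The flavour numbers below (vCPUs, RAM) are purely illustrative, not the resources actually used in the demo.

```python
class NfvServer:
    """Toy NFV server: carve isolated CPU/RAM slices, one per tenant vEPC."""

    def __init__(self, vcpus=32, ram_gb=128):
        self.free = {"vcpus": vcpus, "ram_gb": ram_gb}
        self.vnfs = {}  # tenant -> allocated resources

    def instantiate_vepc(self, tenant, vcpus, ram_gb):
        # Refuse the instantiation if the slice would exceed free capacity,
        # so existing tenants keep their guaranteed resources.
        if vcpus > self.free["vcpus"] or ram_gb > self.free["ram_gb"]:
            raise RuntimeError(f"insufficient resources for {tenant}")
        self.free["vcpus"] -= vcpus
        self.free["ram_gb"] -= ram_gb
        self.vnfs[tenant] = {"vcpus": vcpus, "ram_gb": ram_gb}

server = NfvServer()
server.instantiate_vepc("operatorA", vcpus=8, ram_gb=32)
server.instantiate_vepc("operatorB", vcpus=8, ram_gb=32)
print(server.free)  # {'vcpus': 16, 'ram_gb': 64}
```

In practice this bookkeeping is done by the virtualization layer (hypervisor / cloud manager); the sketch only shows the per-tenant isolation idea.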

Universal Authentication (uAUT)

The results of the uAUT demonstration have shown that it is a feasible solution. We managed to deploy the uAUT as a proxy that redirects queries to an external RADIUS server. This integration took time, as it is not plug and play: the uAUT must be configured to classify requests and send them to the correct external RADIUS server, and the network needs to be defined correctly in order to steer the traffic to the correct entity. Moreover, we successfully used EAP-AKA and Hotspot 2.0. Thanks to these technologies we were able to perform seamless authentication using the same credentials on both mobile and Wi-Fi accesses.

This was a first proof of concept; in a larger scenario with more authentication servers, more access points, and other kinds of accesses integrated in the uAUT, it could be demonstrated even more convincingly that the uAUT is a very practical solution for operators, as it allows them to implement several improvements for the user-subscriber paradigm. Moreover, having all the authentication functionality centralized in one entity simplifies the configuration.


One of the main problems we faced is performance. The UAG is a central entity composed of several virtualized functions. The server that hosts all those functions must be well dimensioned, as it needs a lot of resources. The network also has to be well dimensioned, because the bandwidth consumption is high.

Universal Data Path Management (uDPM)

The uDPM DE demonstration implemented in the UAG can be used to make effective data path decisions (e.g., load balancing) between heterogeneous access networks of different operators and technologies. However, the current setup does not allow assessing the large-scale feasibility of this offloading and load-balancing solution. The results also show that MPTCP can be successfully applied to hide connection changes from applications on UEs with multiple interfaces.

Caching

The Caching demonstration has shown that deploying caches closer to the end users can significantly improve user QoS even when the network capacity between the end users and the content server is limited. Prefetching is triggered by the uDPM DE, which monitors and detects the network status on UEs with multiple interfaces. The cooperation between the CC and the uDPM DE in making the caching and prefetching decisions allows continuous service, without the handover being noticeable on the user side, when the uDPM DE detects network deterioration and decides to switch from one network interface to another.
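
The CC / uDPM DE cooperation can be sketched as a simple monitoring hook: when the measured throughput on the active interface drops below a threshold, the uDPM DE asks the cache controller to prefetch before switching interfaces. The threshold, names and return values below are illustrative, not COMBO figures.

```python
PREFETCH_THRESHOLD_MBPS = 5.0  # illustrative threshold, not a COMBO figure

class CacheController:
    """Toy CC: records which content the uDPM DE asked it to prefetch."""
    def __init__(self):
        self.prefetched = set()

    def prefetch(self, content_id):
        self.prefetched.add(content_id)

def on_link_report(cc, content_id, throughput_mbps):
    """uDPM DE monitoring hook: on deterioration, have the CC prefetch the
    remaining content, then hand over to the other interface."""
    if throughput_mbps < PREFETCH_THRESHOLD_MBPS:
        cc.prefetch(content_id)
        return "handover"
    return "stay"

cc = CacheController()
print(on_link_report(cc, "video-42", 2.0))   # handover (prefetch issued first)
print(on_link_report(cc, "video-43", 20.0))  # stay
```

Because the prefetch completes before the interface switch, the content is already at the cache when the UE reattaches, which is what makes the handover transparent to the user.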

Multipath Entity

During the uDPM demo, a proof of concept was shown for validating the functionality of the uDPM multi-path entity (MPE). The setup included a UE connected to the UAG via both a mobile connection over LTE and a Wi-Fi connection. To simulate the behaviour of the network with traffic originating from several UEs, the UE issued many short TCP-based calls to a content server (CS). It was shown that the load balancing between the connections can be tuned precisely in terms of the share of traffic carried by each. In a larger deployment, controlling several UEs, each connecting to both mobile and Wi-Fi networks, the same principle can be used by the uDPM. The uDPM DE can set individual balancing for each customer, and optimise the infrastructure utilisation according to specific criteria.

An alternative to load balancing on the IP layer, as in the MPE setup, is the use of MPTCP. Both approaches have pros and cons. IP-based balancing can control in detail the balancing between the connections to each unit. This puts more demand on the uDPM DE, which has to optimise over the complete network while also considering end-user requirements and SLAs. In MPTCP, the balancing is built into the congestion algorithms, and thus follows TCP behaviour. This makes it much harder to set the balancing according to, e.g., environmental considerations or user-specific SLAs. On the other hand, MPTCP balances the traffic at the packet level while IP-based balancing is session-based. Moreover, MPTCP-based balancing can cope with session mobility, i.e. splitting sessions over the different connections, which is much harder in the IP-based solution.
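
The per-customer, session-based balancing described above amounts to a weighted assignment of new sessions across the two connections. A minimal sketch, where the 70/30 split is purely illustrative:

```python
import random

def pick_path(weights, rng):
    """Assign a new session to one interface according to configured shares."""
    ifaces, shares = zip(*weights.items())
    return rng.choices(ifaces, weights=shares, k=1)[0]

# The uDPM DE would set an individual share per customer,
# e.g. 70% of sessions over LTE and 30% over Wi-Fi.
weights = {"lte": 0.7, "wifi": 0.3}
rng = random.Random(42)  # fixed seed so the sketch is reproducible
sessions = [pick_path(weights, rng) for _ in range(1000)]
print(sessions.count("lte") / len(sessions))  # ≈ 0.7
```

Note that each whole session sticks to one interface, which is exactly the session-granularity limitation contrasted with MPTCP's packet-level balancing above.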


Demo Integration & General Lessons Learned

The final integration conducted in the planned COMBO demonstration, in addition to validating selected COMBO FMC solutions, also brought important experience for all partners, which can be considered as lessons learned. In particular, the integration of COMBO was highly challenging. Different network types (i.e., Wi-Fi, mobile and fixed optical) were integrated, and a number of networking functions (e.g., authentication, path control, caching, etc.) requiring substantial skills and knowledge were combined, exploiting new and not yet fully mature trends such as the SDN and NFV concepts. Furthermore, the network equipment was supplied by different vendors, which complicated interoperability. All these aspects made the integration complex and highly time- and effort-consuming. The involved partners, with the support of other consortium members, successfully implemented the integrated FMC demo platform, performed the planned test cases and conducted a public demonstration on April 28th in Lannion. The demonstration was video-recorded, and the recording is available on the COMBO website.


6 Conclusion

This deliverable reports the successful execution of the operator test task including the integration of the demo components and execution of joint test and experiments in March and April 2016, culminating in the final public demonstration event on April 28th.

This deliverable addresses the following objectives defined for task 6.3:

1. Preparation of a logistics plan – to provide partners with proposed dates for shipment of equipment to ORANGE facilities

2. Preparation of a test topology and test plan – to allow operators and partners to define the sequence of tests to be performed and how the use case demonstrations of interface selection and handover, as well as simultaneous support of fixed line and mobile traffic transport through the FMC network, would be undertaken and measured

3. Demonstration turn-up involving all partners

4. Implementation of test plans

5. Results recording

6. Analysis of results and sharing with WP3 and the ICT ecosystem

All objectives were successfully achieved, as listed below.

1. A logistic plan was prepared with a detailed planning of the shipment and installation of equipment (see document “DemoIntegrationPlanningLannion _2016.xlsx”, available in COMBO document sharing facility)

2. The test topology and test plan was prepared as documented in Milestone MS20.

3. The demonstration turn-up was done by all partners starting in mid-March 2016. First, the structural demo parts, the UAG, and the vEPC were installed as core elements used by all other demonstrations. Subsequently, the functional demos were installed locally in Lannion.

4. Following the integration and demo turn-up phase, the planned tests were executed, the performance measurements taken, and the observations documented during an operator test phase from April 19 to April 27, during which all partners were physically present in Lannion. The test plan and results, including measurements with quantitative performance parameters and detailed observations, are included in the annex of this deliverable.

5. The show cases presented at the demo booths were video recorded before the demo day. At the same time, the recording served as a rehearsal for the demo day. After editing and cutting, the recorded videos were published on the COMBO website and announced via email distribution.

6. In the period of May to July 2016, the results of the operator test were analysed and reported in this deliverable. A joint journal paper was prepared and submitted, summarizing the main results of the operator tests. Together with the deliverable D6.3, the journal paper serves as a basis for sharing and discussing the results with WP3 and, where appropriate, with the whole ICT community.

All partners involved in the operator test task collaborated intensively during the demo integration phase, and were present for a joint experimental phase in the two weeks before the demo event, ensuring a seamless and successful operator testing and demonstration.

The concept of structural convergence towards a unified access/aggregation network infrastructure catering for simultaneous transport of fixed, mobile and Wi-Fi traffic from the access to the central office was successfully demonstrated. For the functional convergence test cases, the infrastructure used for structural convergence (WDM-PON testbeds) provided the actual (control and data plane) connectivity, through various types of access points, between UE devices (smartphones, laptops) and the developed VNFs hosted in the UAG's NFV server. These VNFs were used to successfully validate functional convergence concepts on top of the UAG, namely the vEPC, uAUT, uDPM, and Caching functions.

The key results of the operator tests of both the structural and functional convergence concepts as well as the lessons learned and recommendations derived from this activity are summarized in section 5, including:

Development and experimental validation of selected COMBO FMC concepts (defined within WP3) from structural and functional convergence perspective within the UAG/NG-POP concept

Three selected structural convergence technologies were successfully demonstrated, either shown in a distributed scenario with remote connectivity to centralized functions, or used for locally connecting the functional convergence elements

Developments for both distributed and centralized NG-POP architectures were successfully carried out:

o Distributed NG-POP: adopting a split UAG with collocated DP and CP

o Centralized NG-POP: adopting a split UAG with CP distant from DP

UAG architecture support of fixed and mobile CP and DP functions for any access network type (i.e., fixed, mobile, Wi-Fi). Key functional blocks are:

o vEPC: instantiation of EPCs as VNFs in an UAG

o uAUT: unified authentication function (Wi-Fi and mobile)

o uDPM: multi-path and traffic offloading, effective handover and caching

The functional entity (i.e., the UAG), regardless of the NG-POP approach, was designed and deployed to leverage current networking trends, such as:

o Centralized SDN control:


This was done in particular in the centralized NG-POP demo. Specifically, negotiated EPS bearers are communicated via an NBI to the SDN controller to automatically trigger the backhaul configuration within the aggregation network connecting the RAN and the mobile core.

o Exploitation of the NFV:

This was adopted in both the distributed and centralized NG-POP test cases. Indeed, multiple VNFs were instantiated within a physical UAG element. By doing so, real convergence is attained in specific control and data plane functions through the implemented uDPM and uAUT. In addition, the use of VNFs also supports the appealing multi-operator network sharing capability: multiple (but isolated) instances of the same function can be deployed within the same box. For instance, it was successfully demonstrated that the UAG supports multiple vEPCs owned by different mobile operators.

The demonstrations were successfully shown to the audience in Lannion during the demo event on April 28, which will be reported in detail as part of WP7.


7 References

[1] COMBO Deliverable D6.2, “Detailed Description of COMBO Demonstration”, August 2015.

[2] COMBO Deliverable D3.3, “Analysis of transport network architectures for structural convergence”, V2.0, September 2015.

[3] S. Pachnicke, B. Andrus, A. Autenrieth, "(Invited) Impact of fixed-mobile convergence", Conference on Optical Networks Design and Modeling (ONDM 2016), Cartagena, Spain, May 2016.

[4] COMBO Deliverable D3.5 “Assessment of candidate architectures for functional convergence”, July 2016.

[5] LTE-EPC Network Simulator (LENA), http://networks.cttc.es/mobile-networks/software-tools/lena/.

[6] ONF, “SDN architecture”, Issue 1, ONF TR-502, June 2014.

[7] JP. Vasseur and JL. Le Roux, “Path Computation Element (PCE) Communication Protocol (PCEP)”, IETF RFC 5440, March 2009.

[8] E. Mannie, “Generalized Multi-Protocol Label Switching (GMPLS) Architecture”, IETF RFC 3945, October 2004.

[9] COMBO Deliverable D3.2, “Analysis of horizontal targets for functional convergence”, V2.0, September 2015.


8 Annex 1 - Structural Convergence Test Plan and Results

8.1 WS-WDM-PON Architecture

This section describes the tests performed to validate the Wavelength-Switched WDM-PON architecture. In total, three data plane and three control and management plane tests were planned and successfully executed.

8.1.1 WS-WDM_DP_1

Description of the item under test

WS-WDM-PON physical and link layers connectivity evaluation

Test report:

Pass:

Fail:

Test Setup:

Purpose: Evaluate physical and link layer performance of the WS-WDM-PON system, with target symmetric 10G data rates on each ONU

Use case: UC7, UC8

Preconditions: This test is performed without GPON overlay or CPRI transport, to evaluate the 10G data rate link performance. However, all the optical components needed for GPON overlay (CEx co-existence filter, 20 km optical fibre link, 1:32 power splitter) are taken into account in the evaluation of the physical layer of the network and the link budget. Four SFP+ optical interfaces are available for these tests at the OLT side for complete optical characterization: two for the two ONUs, one for control plane OOBM (explained later in the control plane test cases) and one for future CPRI fronthaul transmission (explained later in a further data plane test case).

The wavelength plan is as follows (ITU-T 100GHz grid):

Upstream:


o OOBM: C14 1542.94 nm, ONU1: C15 1543.73 nm, CPRI: C16 1544.53 nm, ONU2: C17 1545.32 nm

Downstream:

o OOBM: C34 1558.98 nm, ONU1: C35 1559.79 nm, CPRI: C36 1560.61 nm, ONU2: C37 1561.42 nm

Other conditions and specifications of the system and modules:

WS-WDM-ONU SFP+ wavelength accuracy: ±0.02 nm

64B/66B line coding for error detection

Error correction is not implemented in this version

Encryption is not implemented in this version

Time to WS-WDM-ONU start-up: ~20 s

Time to On OOBM: <10 s

Optical amplification to reach ~20 km

1:32 ratio splitting

WS-WDM-OLT optical output power (min): -1 dBm

WS-WDM-OLT sensitivity: -24 dBm

WS-WDM-ONU optical output power (min): -1 dBm

WS-WDM-ONU sensitivity: -24 dBm

Test equipment:

Optical power meter (C-Band)

10G Traffic generator and monitor

OSA

Test procedure:

1. Evaluate WS-WDM-PON physical layer:

a. Deploy the end-to-end WS-WDM-PON system w/o GPON and configure OLT and ONUs

i. Insert SFP+ modules into the OLT 10G switch

ii. Connect optical fibres from the OLT SFP+ modules to the corresponding ports of the AWG Mux/Demux equipment of the OLT

iii. Connect output fibre of the Mux/Demux to the CEx filter

iv. Connect the CEx filter optical output to the 20 km fibre


v. Connect the splitter to the optical fibre

vi. Connect one fibre output of the splitter to each ONU

vii. Switch on the OLT switch and Mux/Demux equipment

viii. Switch on the ONUs

ix. Wait until control plane automatically assigns link to each ONU

b. Connect OSA to each WS-WDM-ONUs in order to visualize the optical signals

c. Analyse the power budget and amplification range

2. Evaluate link layer and validate 10G data rates

a. End-to-end set-up and configuration of the WS-WDM-PON system according to above procedure

b. Setup traffic generator

c. Inject traffic into the PON through the Telnet ultra-low latency switch

d. Connect the 10GbE LAN interface of the WS-WDM-ONU to the traffic generator

e. Measure frame losses, and FLR (Frame Loss Rate)

Pass-fail criteria:

Note amplification range needed in OLT

Negligible FLR

Observation:

OLT and ONU minimum Tx power and sensitivity are shown below to estimate the required power budget:

OLT: 10G SFP+ module Tx power (min) = 0 dBm; 10G SFP+ module sensitivity = -24 dBm

ONU: 10G SFP+ module Tx power (min) = 0 dBm; 10G SFP+ module sensitivity = -24 dBm

According to the above measurements, both the downstream and upstream optical budgets are about 24 dB. The measured insertion losses of the optical components in the system are:

WS-WDM-PON OLT: 5.5 dB

CEx filter: 0.5 dB

Splitter 1:32: 16 dB

20 km fibre: 10 dB

WS-WDM-PON ONU: 3 dB

TOTAL: 35 dB

Thus, the system requires a minimum amplification gain of G = 11 dB at the OLT side in both directions.
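
A quick sanity check of the budget arithmetic, with the Tx power, sensitivity and loss values copied from the lists above:

```python
# Verify that the WS-WDM-PON link closes and derive the minimum
# OLT amplifier gain from the values stated above.

TX_POWER_DBM = 0.0       # 10G SFP+ module Tx power (min)
SENSITIVITY_DBM = -24.0  # 10G SFP+ module sensitivity

# Measured insertion losses along the ODN (dB)
losses = {
    "WS-WDM-PON OLT": 5.5,
    "CEx filter": 0.5,
    "1:32 splitter": 16.0,
    "20 km fibre": 10.0,
    "WS-WDM-PON ONU": 3.0,
}

def min_gain_db(tx_dbm, sens_dbm, losses_db):
    """Minimum amplifier gain so the received power meets the sensitivity."""
    budget = tx_dbm - sens_dbm            # 24 dB optical budget
    total_loss = sum(losses_db.values())  # 35 dB total insertion loss
    return max(0.0, total_loss - budget)

print(min_gain_db(TX_POWER_DBM, SENSITIVITY_DBM, losses))  # 11.0
```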

The following figure represents the upstream and downstream performance of the system as a function of the OLT amplification (G):


For the downstream, we use a booster amplifier with a maximum gain of 26 dB, whilst for the upstream, an amplifier in pre-amplifier mode is used with a maximum gain of 25 dB. In both cases, the minimum gain is set to 11 dB, which is needed to meet the power budget requirements. In the figures, the ONU SFP+ (downstream) overload input optical power is shown (-7 dBm), as well as the OLT SFP+ (upstream) overload input optical power (-8 dBm); these values delimit the horizontal boundaries of the above graphs. It can be seen that in both cases the system can accommodate an amplification range between 11 and 25 dB. Larger splitting ratios, e.g. 1:64, could then be supported by the system, as they would imply only 3 dB of extra loss.
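
The 3 dB figure follows from the ideal power-splitting loss of a 1:N splitter, 10·log10(N), which grows by about 3 dB per doubling of the split ratio. A quick check (excess loss ignored):

```python
import math

def ideal_splitter_loss_db(ratio: int) -> float:
    """Ideal insertion loss of a 1:N power splitter (excess loss excluded)."""
    return 10 * math.log10(ratio)

print(round(ideal_splitter_loss_db(32), 1))  # 15.1 (measured above: 16 dB)
print(round(ideal_splitter_loss_db(64) - ideal_splitter_loss_db(32), 1))  # 3.0
```

The ~1 dB gap between the ideal 15.1 dB and the measured 16 dB for the 1:32 splitter is the splitter's excess loss.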

The following figure illustrates the SNR of the system with and without GPON overlay as a function of G:

For the case without GPON overlay, the following figure illustrates the SNR for G = 18 dB in one of the ONUs, which is about 23 dB:


The following picture illustrates the four downstream optical channels before and after G = 18 dB amplification, at the OLT and ONU sides respectively:

The centre value deviation of the optical wavelength for each channel with respect to the theoretical ITU-T values is lower than ±1 pm.

Link performance of the system is analysed at 10 Gb/s symmetric data rates for different amplification values using a 10G Ethernet traffic generator and analyser. The following table illustrates the FLR performance for different amplification values in the downstream:


Almost negligible FLR was obtained for G > 18 dB for both upstream and downstream. The differences between upstream and downstream in terms of FLR are due to the tunable filter performance, which is far from the ideal case because of calibration deviations.
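
FLR as used here is simply lost frames over transmitted frames. A minimal sketch, with illustrative counter values (not taken from the measurements above):

```python
def frame_loss_rate(tx_frames: int, rx_frames: int) -> float:
    """Frame Loss Rate: fraction of transmitted frames not received."""
    if tx_frames <= 0:
        raise ValueError("no frames transmitted")
    return (tx_frames - rx_frames) / tx_frames

# Illustrative tester counters: 2 frames lost out of 1e9 transmitted
print(frame_loss_rate(1_000_000_000, 999_999_998))  # 2e-09
```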


8.1.2 WS-WDM_DP_2

Description of the item under test

CPRI traffic transmission performance over 10G WS-WDM-PON ONU

Test report:

Pass:

Fail:

Test Setup:

Purpose: Validate CPRI fronthaul support and coexistence with the WS-WDM-PON system

Use case: UC7-UC8

Preconditions: CPRI SFPs work at data rates up to 2.7 Gb/s

CPRI SFP optical output power = [0,+4] dBm

CPRI SFP sensitivity = -28 dBm (@2.5 Gb/s)

CPRI SFP SMSR = 30 dB

Wavelength accuracy = λc ±0.05 nm

CPRI Dedicated channel:

o US = 1544.53 nm

o DS = 1560.61 nm

Test equipment: CPRI testers, OSA

Test procedure:

1. Configure OLT and ONUs according to test case WS-WDM_DP_1

2. Setup CPRI testers

3. Inject CPRI traffic into the WS-WDM-PON through the ADVA ultra-low latency switch

4. Measure BER and latency of the CPRI link


Pass-fail criteria:

Fronthaul CPRI requirements

BER < 1e-12

Latency < 5 µs

Observation:

04/26/16 06h:52m:56s ===> Port P1 (CR) at 2.4576 Gbps

04/26/16 06h:52m:56s ===> -------------------------------------

04/26/16 06h:52m:56s ===> Transmitted Frames 525,245

04/26/16 06h:52m:56s ===> Received Frames 525,240

04/26/16 06h:52m:56s ===> Total Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Minimum Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Maximum Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Failover (ns) 1,404,222,650

04/26/16 06h:52m:56s ===>

04/26/16 06h:52m:56s ===> Port P2 (CR) at 2.4576 Gbps

04/26/16 06h:52m:56s ===> -------------------------------------

04/26/16 06h:52m:56s ===> Transmitted Frames 525,245

04/26/16 06h:52m:56s ===> Received Frames 525,240

04/26/16 06h:52m:56s ===> Total Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Minimum Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Maximum Latency (ns) 91,590

04/26/16 06h:52m:56s ===> Failover (ns)


In these measurements (see the above figure), the compliance of the WS-WDM-PON system with the CPRI standard was evaluated at 2.5G line rates. The obtained BER is below 3e-15, and a latency lower than 100 ns is obtained. It has been shown that for CPRI option 3 (2.4576 Gb/s) the maximum BER and latency specifications are fulfilled. This proves the compatibility of the WS-WDM-PON system with the mobile fronthaul requirements.


8.1.3 WS-WDM_DP_3

Description of the item under test

Evaluate the WS-WDM-PON system with GPON overlay on the same ODN

Test report:

Pass:

Fail:

Test Setup:

Purpose: Validate that WS WDM-PON and GPON can coexist in the same fibre, checking the impact of one system to the other

Use case: UC7-UC8

Preconditions: The trials must be performed with GPON Overlay

Proposed wavelength plan:

o 100 GHz grid C-band divided into 2 sub-bands:

US = [1532.68 nm,1547.72 nm]

DS = [1548.51 nm,1563.86 nm]

o Dedicated channels:

US = [1542.94 nm, 1543.73 nm, 1544.53 nm, 1545.32 nm]

DS = [1558.98 nm, 1559.79 nm, 1560.61 nm, 1561.42 nm]

o GPON: US = 1310 nm, DS = 1490 nm

WS-WDM-ONU @US wavelength accuracy: λc ±0.02 nm

64B/66B line coding for error detection

Error correction is not implemented in this version

Encryption is not implemented in this version


Time to WS-WDM-ONU start-up: ~20 s

Time to On OOBM: < 3 s

Optical amplification to reach: ~20 km

1:64 ratio splitting

WS-WDM-OLT optical output power = [-1,+3] dBm

WS-WDM-OLT SMSR = 30 dB

WS-WDM-ONU sensitivity = -24 dBm

WS-WDM-ONU SMSR = 35 dB

GPON SFP OLT optical output power = [+3,+7] dBm

GPON SFP OLT sensitivity = -30 dBm

GPON SFF ONT optical output power = [+0.5,+5] dBm

GPON SFF ONT sensitivity = -27 dBm

The current element of coexistence supports only GPON coexistence (the WS-WDM solution is fully adaptable to other PON technologies with the corresponding structural and wavelength plan changes)

GPON video RF overlay is not supported by the proposed solution for WS-WDM-PON demo

External receiver GPON filters are not necessary due to internal GPON filters (SFF optical interface)

Test equipment:

Optical power meter (C-Band)

GPON power meter

Traffic generator

OSA

Laptop

TGMS (Telnet GPON Management System) virtual machine integrated in Laptop

Test procedure:

1. Configure OLT and ONUs according to test case WS_WDM_DP_1

2. Connect and configure the GPON OLT and ONUs via the Telnet GPON Management System (TGMS) virtual machine running on the laptop

3. Validate GPON overlay


a. Connect GPON to the element of coexistence CEx

b. Setup GPON OLT via TGMS (Telnet GPON Management System) in order to define service profile for Video Streaming

c. Setup video server

d. Inject Ethernet traffic into the PON through GPON OLT

e. Check OMCI provisioning

f. Connect PC to the GPON ONU via GbE LAN interface

g. Evaluate traffic at GPON ONU

4. Evaluate WS-WDM-PON with GPON overlay

a. Deploy the end-to-end system w/o GPON

b. Connect OSA to each WS-WDM-ONUs in order to visualize the optical signals

c. Measure wavelength accuracy

d. Determine the insertion losses of the proposed element of coexistence

e. Measure SNR with GPON overlay

f. Compare results with WS-WDM_DP_1

Pass-fail criteria:

Correct provisioning of the GPON system when overlaid to the WS-WDM-PON system

GPON OLT and ONU received optical power >-28 dBm

FLR < 2e-6 in the WS-WDM-PON system with GPON overlay

Observation:

The optical budget and physical link layer analysis for this test case are equivalent to those performed in test case WS-WDM_DP_1.

The following figure illustrates the SNR of the WS-WDM-PON system with and without GPON overlay:


In the above Figure, GPON signal power is considered as noise for calculating the SNR of the WDM signal.

It can be seen that the SNR of the WS-WDM-PON link with GPON overlay is about 14.75 dB lower than without GPON (e.g. 8 dB for G = 18 dB), when the GPON signal power is considered as noise for the WDM signal. However, this SNR degradation in the WDM channels due to the GPON overlay does not degrade the transmission, and the system works correctly, as described below.
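This calculation can be illustrated with a minimal sketch in which the overlaid GPON power is simply added to the noise term of the WDM channel. The power levels below are hypothetical, chosen only to produce a degradation of the observed order; they are not the measured values.

```python
import math

def snr_db(signal_mw: float, noise_mw: float) -> float:
    """SNR in dB for powers given in linear units (mW)."""
    return 10 * math.log10(signal_mw / noise_mw)

# Hypothetical power levels (mW), not taken from the measurement:
wdm_signal = 1.0     # WDM channel power
ase_noise = 0.005    # background noise floor without GPON
gpon_power = 0.15    # overlaid GPON power counted as extra noise

snr_without_gpon = snr_db(wdm_signal, ase_noise)
snr_with_gpon = snr_db(wdm_signal, ase_noise + gpon_power)
degradation = snr_without_gpon - snr_with_gpon
print(round(degradation, 2))
```

Because the GPON power dominates the noise term, the degradation is essentially set by the ratio of GPON power to the original noise floor.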

The following figure illustrates the SNR for the WDM and the GPON channels in the downstream for G = 18 dB:

The correct provisioning of the 3 GPON ONUs can be seen in the following figure, captured through the TGMS (Telnet GPON Management System) in the Laptop:


The following figure illustrates the status report of each ONU, captured using TGMS:

Received GPON OLT and ONU optical powers are higher than -23 dBm, within the range required according to the ITU-T G.984.x GPON Recommendations (> -28 dBm).

OMCI configurations for each GPON ONU are also shown in the following figure taken from TGMS (Telnet GPON Management System):


Link performance of the WS-WDM-PON system with GPON overlay is analysed similarly to the test case without GPON, at 10 Gb/s symmetric data rates for different amplification values, using a 10G Ethernet traffic generator and analyser. An almost negligible FLR was observed for G > 18 dB, similar to test case WS-WDM_DP_1 without GPON.

GPON Overlay

G (dB)   Downstream FLR   Upstream FLR
11       1.2 · 10^-4      1.2 · 10^-8
12       2.8 · 10^-5      8.2 · 10^-9
14       5 · 10^-6        6 · 10^-9
16       1 · 10^-7        5.1 · 10^-9
18       6 · 10^-9        1 · 10^-9
20       0                0

The performance of the WS-WDM-PON solution with GPON overlay is very similar to the scenario without coexistence between PON technologies. In addition, both the non-coexistence and GPON overlay scenarios require an optical gain of about 20 dB in order to achieve the corresponding optical budget.
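Using the measured downstream FLR values from the table above, the pass criterion (FLR below 2 · 10^-6) can be checked per gain value with a short sketch:

```python
# Downstream FLR vs. optical gain G (dB), from the GPON overlay measurement table
flr_downstream = {11: 1.2e-4, 12: 2.8e-5, 14: 5e-6, 16: 1e-7, 18: 6e-9, 20: 0.0}

FLR_THRESHOLD = 2e-6  # pass-fail criterion for the WS-WDM-PON system

# Gain values whose measured downstream FLR satisfies the criterion
passing_gains = sorted(g for g, flr in flr_downstream.items() if flr < FLR_THRESHOLD)
print(passing_gains)
```

The check confirms that gains of 16 dB and above meet the criterion in the downstream, consistent with the conclusion that about 20 dB of gain achieves the optical budget with margin.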


8.1.4 WS-WDM_CM_1

Description of the item under test

Validation of the Out-of-Band (OoB) management control plane of the WS-WDM-PON system

Test report:

Pass:

Fail:

Test Setup:

Purpose: Evaluate the OoB control plane part of the WS-WDM-PON system

Use case: UC7-UC8

Preconditions: Telnet's WS-WDM-PON system control plane is based on a hybrid approach (Out-of-Band and In-Band). Out-of-Band is in charge of channel allocation for a new WDM-ONU connection to the PON once it is plugged in for the first time. The Out-of-Band control stage starts with each ONU start-up. Once the channel negotiation between OLT and ONU has finished, the control and management system passes to the In-Band stage. Out-of-Band control and management uses dedicated channels (14,34) for US and DS: US = 1542.94 nm, DS = 1558.98 nm. Two ONUs are connected to the WS-WDM-PON network, using the channels defined in test case WS-WDM_DP_1.

Time to WS-WDM-ONU start-up: ~20 s

Time to On OoBM: <10 s

The control plane uses control frames encapsulated in Ethernet frames, so it is not valid for CPRI. Only 10 Gb/s Ethernet traffic is then controlled.

Test equipment: OSA

Traffic Generator


Test procedure: 1. Deploy the end-to-end system according to test case WS-WDM_DP_1

2. Setup traffic generator

3. Inject traffic into the PON through Telnet ultra-low latency switch

4. Connect the OSA to each WS-WDM-ONU in order to visualize the optical signals

5. Connect the 10GbE LAN interface of the WS-WDM-ONU to the traffic generator

6. Start-up one WS-WDM-ONU in order to start the OoB wavelength control stage

7. Observe management channel tuning via OSA

8. Observe data channel tuning (OoBM → IBM) via OSA after negotiation

9. Measure transition times in terms of frame losses (via traffic generator)

Pass-fail criteria: Evaluate transitions times

Time to WS-WDM-ONU start-up: ~20 s

Time to On OoBM: < 10 s

Observation:

Once the OLT is switched on and stable, we switch on the ONUs and execute the script #ibm-check-status.sh to check the status of the ONUs and the initial control plane parameters:

From this script execution we see that, once the two ONUs are switched on, the OOB control plane procedure starts automatically, and both ONUs are in OOBM mode but no channel has been assigned to them (indicated by the variable STATE = OOBM (0)). In this phase, the ONUs tune their lasers to channel 14 and their receivers to channel 34 to listen on the OLT dedicated OOB channel for the available channels. This process takes less than 10 seconds. After this time, the OLT tells ONU1 that the channel (US:15, DS:35) is free and ONU2 that the channel (US:17, DS:37) is free for connecting to the network. OOB negotiation then starts. Once OOB negotiation is finished, the corresponding channels are assigned, which we can see by executing #ibm-check-status.sh once again:

ONUs are then attached and synchronized with the OLT, and have switched to IBM mode after OOB negotiation (STATE = IBM (1)). This process to assign available channels to each ONU takes about 20 seconds.
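The OOBM-to-IBM transition described above can be sketched as a small state model. This is a simplification for illustration only; the class and method names are hypothetical and the real negotiation protocol belongs to the Telnet system.

```python
from enum import IntEnum

class OnuState(IntEnum):
    OOBM = 0  # tuned to the dedicated OOB pair, waiting for a channel offer
    IBM = 1   # data channel assigned, in-band management active

OOB_CHANNEL = (14, 34)  # dedicated US/DS management channels from the report

class Onu:
    def __init__(self, name: str):
        self.name = name
        self.state = OnuState.OOBM   # every ONU starts in OOBM after power-up
        self.channel = OOB_CHANNEL

    def assign(self, us: int, ds: int):
        """OLT offers a free (US, DS) pair; the ONU retunes and switches to IBM."""
        self.channel = (us, ds)
        self.state = OnuState.IBM

onu1, onu2 = Onu("ONU1"), Onu("ONU2")
onu1.assign(15, 35)   # OLT reports (US:15, DS:35) free
onu2.assign(17, 37)   # OLT reports (US:17, DS:37) free
print(onu1.state, onu1.channel)
```

The STATE values 0 (OOBM) and 1 (IBM) mirror the variable reported by #ibm-check-status.sh.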

In the following figures we see the three upstream channels involved in OOB negotiation:

Channel 14 (OOB dedicated channel):


Channel 15 (ONU1):

Channel 17 (ONU2):


8.1.5 WS-WDM_CM_2

Description of the item under test

Validation of the In-Band management (IBM) control plane of the WS-WDM-PON system

Test report:

Pass:

Fail:

Test Setup:

Purpose: Validate the In-Band Management (IBM) control plane part of the WS-WDM-PON system for wavelength changes after OOB negotiation

Use case: UC7-UC8

Preconditions: Telnet control and management system is based on a hybrid approach (Out-of-Band and In-Band)

The In-Band wavelength control and management stage is in charge of channel tunability (ONUs) due to channel change requests from the OLT and link monitoring

In-Band control stage starts after OoBM negotiation between OLT and ONUs

In-Band control and management is based on encapsulating Ethernet frames

Time to WS-WDM-ONU start-up: ~20 s

Time to On OoBM: < 10 s

The control plane uses control frames encapsulated in Ethernet frames, so it is not valid for CPRI. Only 10 Gb/s Ethernet traffic is then controlled.

Test equipment:

OSA

Traffic Generator


Test procedure:

1. Deploy the end-to-end system according to test case WS-WDM_DP_1

2. Setup traffic generator

3. Inject traffic into the PON through Telnet ultra-low latency switch

4. Connect the OSA to each WS-WDM-ONU in order to visualize the optical signals

5. Connect the 10GbE LAN interface of the WS-WDM-ONU to the traffic generator

6. Start-up one WS-WDM-ONU in order to start the OoB wavelength control stage

7. Wait until data channel allocation after the OoB wavelength control stage

8. Observe management channel tuning via OSA

9. Force an OLT channel change request to a certain ONU

10. Observe data channel tuning via OSA

11. Measure transition times in terms of frame losses (via traffic generator)

12. Force channel change due to link monitoring (e.g. disconnect a fibre connection)

Pass-fail criteria:

Success on channel tunability via IBM

Observation:

Once OOB negotiation is finished, the control plane switches to the IBM control plane. The following figure depicts the IBM control frames encapsulated in Ethernet frames in the downstream (OLT → ONU). Every kind of event, information or state data is transmitted in these frames in the downstream:


Now we force the ONUs to exchange their transmission channels. This requires freeing one of the available channels, for instance (17,37), which was occupied by ONU2:

Once the instruction to make this channel available is sent, the channel becomes free after a few seconds, so that OOB negotiation can start again for this channel.

Now we can make ONU1 switch to the now-free channel (17,37), leaving its current channel (15,35):


In the following figure we can see that now ONU1 is using channel (17,37) and channel (15,35) is waiting for OOB negotiation:

If we now connect ONU2, first OOB negotiation over channel (15,35) starts, and then automatically the control plane switches to IBM:
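The channel exchange sequence above can be summarised with a toy channel-pool model. The helper names are hypothetical and stand in for the actual OLT commands issued through the management interface.

```python
# Channel pool keyed by (US, DS); the value is the ONU using it, or None if free
channels = {(15, 35): "ONU1", (17, 37): "ONU2"}

def release(ch):
    """Free a channel so that OOB negotiation can offer it again."""
    channels[ch] = None

def retune(onu, ch):
    """Move an ONU to a free channel, leaving its old channel free."""
    assert channels[ch] is None, "target channel must be free"
    for k, v in channels.items():
        if v == onu:
            channels[k] = None   # leave the current channel
    channels[ch] = onu

release((17, 37))          # free the channel occupied by ONU2
retune("ONU1", (17, 37))   # ONU1 moves; (15, 35) becomes free
retune("ONU2", (15, 35))   # reconnecting ONU2 is assigned the free channel
print(channels)
```

The end state matches the report: ONU1 on (17,37) and ONU2, after its OOB negotiation, on (15,35).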


8.1.6 WS-WDM_CM_3

Description of the item under test

IBM quality link monitoring

Test report:

Pass:

Fail:

Test Setup:

Purpose: Validate the link monitoring mechanisms of the system regarding the physical status per wavelength

Use case: UC7-UC8

Preconditions: Telnet control and management system is based on a hybrid approach (Out-of-Band and In-Band)

The In-Band wavelength control and management stage is in charge of channel tunability (ONUs) due to channel change requests from the OLT and link monitoring

In-Band control stage starts after OoBM negotiation between OLT and ONUs

In-Band control and management is based on encapsulating Ethernet frames

The current link monitoring approach is based on physical metrics such as loss-of-signal

Time to WS-WDM-ONU start-up: ~20 s

Time to On OoBM: < 10 s

The control plane uses control frames encapsulated in Ethernet frames, so it is not valid for CPRI. Only 10 Gb/s Ethernet traffic is then controlled.


Test equipment:

OSA

Traffic Generator

Test procedure:

1. Deploy the end-to-end system according to test case WS-WDM_DP_1

2. Setup traffic generator

3. Inject traffic into the PON through Telnet ultra-low latency switch

4. Connect the OSA to each WS-WDM-ONU in order to visualize the optical signals

5. Connect the 10GbE LAN interface of the WS-WDM-ONU to the traffic generator

6. Start-up one WS-WDM-ONU in order to start the OoB wavelength control stage

7. Wait until data channel allocation after the OoB wavelength control stage

8. Observe management channel tuning via OSA

9. Observe data channel tuning via OSA

10. Force channel change due to link monitoring (e.g. disconnect a fibre connection)

11. Measure transition times in terms of frame losses (via traffic generator)

Pass-fail criteria:

Success on channel tunability via IBM due to loss of signal

Observation:

Once OOB negotiation is finished, the control plane switches to the IBM control plane. The IBM approach serves two different purposes:

Communication between OLT and ONUs in order to manage the corresponding requests and orders from the OLT (e.g. dynamic wavelength allocation)

Quality link monitoring to prevent poor performance and even link drops

In this test case, link monitoring is demonstrated with the following example:


In the previous images we can see one WS-WDM-ONU tuned to channel (17,37) and the corresponding IBM Ethernet frames; the other channel is free and ready to be assigned to a new client connection. Then, 10G traffic is injected into channel (17,37). The next capture shows how the traffic is routed from the 10GbE INPUT port (xgi.5) to the 10GbE OUTPUT port (xgi.17) through the (17,37) link; note that the counters increase iteratively:


To demonstrate IBM link monitoring, we force a loss of signal by bending the fibre to decrease the optical power level. When channel (17,37) detects problems on the line, the management communication between the OLT and the ONU starts:

Once the system detects critical problems in the communication, it triggers the release of channel (17,37):


The next stage is the transition from IBM to OoBM:

Finally, the ONU is tuned to channel (15,35) in order to continue the transmission:


At this point, the 10G traffic is being routed via channel (15,35). The next figure shows how the counters (xgi.15) of this link increase over time, whilst the counters (xgi.17) of link (17,37) now remain constant:
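The recovery behaviour observed in this test can be sketched as follows. The -28 dBm loss-of-signal threshold is an assumed value for illustration only; the report states merely that loss-of-signal is the monitored metric.

```python
OOB_CHANNEL = (14, 34)   # dedicated OOB management pair

def check_link(onu: dict, rx_power_dbm: float, los_dbm: float = -28.0) -> dict:
    """IBM link monitor sketch: on critical power loss, release the data
    channel and fall back to OOBM so OOB negotiation can assign another one."""
    if rx_power_dbm < los_dbm:
        onu = {"state": "OOBM", "channel": OOB_CHANNEL}
    return onu

onu = {"state": "IBM", "channel": (17, 37)}
onu = check_link(onu, rx_power_dbm=-35.0)  # fibre bend drops the received power
print(onu["state"])
```

After falling back to OOBM, the ONU would be reassigned the free channel (15,35) by the normal OOB negotiation, as shown in the captures.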


8.2 WR-WDM-PON architecture

This section describes the tests performed to validate the Wavelength-Routed WDM-PON architecture. In total, seven data plane and two control and management plane tests were planned and executed. With one exception, all tests were successfully executed and passed. One test (WR-WDM-PON_DP_4, maximum reach evaluation) could not be performed.

8.2.1 WR-WDM-PON_DP_1

Description of the item under test

OLT optical interface feeder characterization

Test report:

Pass:

Fail:

Test Setup:

ADVA WDM-PON OLT

SFP and SFP+ Optical interfaces are plugged in the Network Card

These interfaces are connected to C/L band filters (MUX/DMUX), which manage the downstream and upstream signals

The filters' COM interfaces are connected to the Wavelength Control Board (WCB)

Purpose: Characterization of the different channels:

Wavelength measurement

Optical output power evaluation (in an optical bandwidth of 1 nm)

Extinction Ratio evaluation

[Test setup diagram: Network Card with SFP/SFP+ optical interfaces, C and L band filters, Wavelength Control Board (WCB), ODN (Remote Node)]

Use case: UC 7 – UC 8

Preconditions: The filters' COM interfaces are connected to input ports of the WCB. Downstream and upstream paths are established. Three optical interfaces are available for these tests: two SFP optical modules and one SFP+ module.

SFP/SFP+ optical interface central wavelengths are in the range [1574.54 nm, 1576.2 nm]

Test equipment: OSA

Oscilloscope

Test procedure: 1. Insert the SFP optical interface corresponding to the wavelength 1575.37 nm in the network card and connect optical fibres to the dedicated C/L band filter interfaces

2. Connect the OSA to the WCB output port and measure the channel wavelength and optical power (in a 1 nm optical bandwidth)

3. Replace the OSA by an oscilloscope and clock recovery module (used in order to synchronize the oscilloscope)

4. Perform an eye diagram measurement to measure the Extinction Ratio (value to be compared with the ER values defined in ITU-T G.698.3)

5. Disconnect the SFP TX interface from the L band filter, insert the SFP optical module corresponding to wavelength 1576.2 nm in the network card, and connect optical fibres to the dedicated C/L band filter interfaces

6. Repeat steps 2, 3 and 4

7. Disconnect the second SFP TX interface from the L band filter and insert the SFP+ optical module with wavelength 1575.37 nm in the network card. Disconnect the optical fibres from the SFP optical module already connected to the C/L band filters in order to connect the SFP+ optical module

8. Repeat steps 2, 3 and 4


Pass-fail criteria:

Output optical power, wavelength, and Extinction Ratio

Observation:

The SFP/SFP+ OLT optical modules were characterized at the wavelengths 1574.54 nm, 1575.37 nm and 1576.2 nm. The output optical power was about 0.2 dBm, 3.6 dBm and 5.4 dBm, respectively. As shown in the above optical spectrum (OSA port), in total three wavelengths (for three ONUs) were autonomously tuned to their respective grid centres. The SMSR of each wavelength was greater than 42 dB (with EDFA):

C9 optical module (1575.37 nm) C10 optical module (1576.2 nm)

As observed from the eye diagrams, the extinction ratio of the Gigabit Ethernet interfaces was about 11 dB for C9 (1575.37 nm) and almost 10 dB for C10 (1576.2 nm), compliant with ITU-T G.698.3.


8.2.2 WR-WDM-PON_DP_2

Description of the item under test

Gigabit Ethernet transmission over 1 Gigabit WDM-PON ONUs

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT with Gigabit optical modules

WDM-PON ONU equipped with Gigabit tunable optical modules

Remote Node

Ethernet traffic generator

Purpose: Check bidirectional Ethernet connectivity over WDM-PON OLT and ONUs. Measure bidirectional throughput and latency.

Use case: UC 7 – UC 8

Preconditions: OLT optical modules are inserted in the network card and connected to C/L band filters and are measured at the WCB output port

OLT output port is connected to RN COM port

1 Gigabit WDM-PON ONU 1 is connected to port 2

1 Gigabit WDM-PON ONU 2 is connected to port 3

Ethernet generator is connected to OLT optical interfaces and to electrical Gigabit Ethernet ones

Test equipment:

Ethernet generator

[Test setup diagram: WDM-PON OLT connected via the Remote Node to two 1G WDM-PON ONUs, with OSA, Laptop and Ethernet Generator]


Optical Spectrum Analyser

Laptop

Test procedure: 1. Establish optical connectivity between the WDM-PON OLT and the WDM-PON ONUs connected to ports 2 and 3. Check the upstream wavelengths via the OSA and measure the tone frequency deviations f2 and f3 via the Laptop

2. Generate (VLAN-tagged) Ethernet traffic on the Gigabit OLT interfaces and the ONU ones. Check that there is no frame loss in either downstream or upstream

3. Once bidirectional connectivity is checked, generate bidirectional traffic up to the point of frame loss. Measure the frame loss rate and decrease the bidirectional data bit rate by the frame loss rate. Check that there is no frame loss over 300 s. Note the associated latency.

Pass-fail criteria:

Bidirectional data bit rate and latency values

Observation:

Error-free transmission was achieved for both wavelength channels, with a latency of less than 1 ms.


8.2.3 WR-WDM-PON_DP_3

Description of the item under test

WDM-PON optical budget measurement

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT with Gigabit (2) and 10 Gigabit (1) optical modules

Remote Node

WDM-PON ONUs

Ethernet traffic generator

Purpose: Evaluate WDM-PON optical budget of optical modules working at 1G and 10G

Use case: UC 7 – UC 8

Preconditions: OLT optical modules corresponding to 1574.54 nm (10 Gigabit) and 1575.37 nm (1 Gigabit) are plugged into the Network card and connected to OLT C/L band filters.

OLT output port is connected to RN COM port via a variable optical attenuator

10 G WDM-PON ONU is connected to port

1 G WDM-PON ONU is connected to port

Test equipment: Optical Spectrum Analyser

[Test setup diagram: WDM-PON OLT connected via a VOA and the Remote Node to a 10G WDM-PON ONU and a 1G WDM-PON ONU, with OSA, Laptop and Ethernet Generator]


Laptop

Ethernet generator/analyser

Test procedure: 1. Connect the 10G WDM-PON ONU to port 1 and the 1G WDM-PON ONU to port 2. Check wavelengths and frequencies are stable.

2. Generate bidirectional Ethernet traffic on each wavelength

3. Increase the optical attenuation until frame loss is observed on either wavelength 1 or 2

4. The optical budget is the attenuation value at which frame loss is first observed in downstream or upstream

Pass-fail criteria: Note maximum optical budget for port working at 1 Gb/s and for port working at 10 Gb/s

Observation:

Optical module                 Sensitivity (dBm)
OLT 1 Gb/s / 1575.37 nm        -35.4
OLT 10 Gb/s / 1574.54 nm       -22.4
ONU 1 Gb/s (MZ13690)           -32
ONU 10 Gb/s                    -19.6

According to the above measurement, the downstream optical budget was about 33 / 37 dB for 1 Gb/s ONU and 19.8 dB for 10 Gb/s ONU, respectively. For the upstream, the optical budget was about 36 / 37 dB at 1 Gb/s and 21 dB at 10 Gb/s, respectively. The optical budgets were measured between the output of the OLT optical module and the input of the ONU.
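The budget figures follow directly from the launch powers measured in WR-WDM-PON_DP_1 and the sensitivities above, since the optical budget is the difference between launch power and receiver sensitivity. A quick sketch of the downstream case:

```python
# Launch powers (dBm) from WR-WDM-PON_DP_1 and receiver sensitivities (dBm)
tx_power_dbm = {"1G_C9": 3.6, "1G_C10": 5.4, "10G": 0.2}
onu_sensitivity_dbm = {"1G": -32.0, "10G": -19.6}

def optical_budget(tx_dbm: float, rx_sensitivity_dbm: float) -> float:
    """Optical budget (dB) = launch power minus receiver sensitivity."""
    return tx_dbm - rx_sensitivity_dbm

ds_budget_10g = optical_budget(tx_power_dbm["10G"], onu_sensitivity_dbm["10G"])
ds_budget_1g = optical_budget(tx_power_dbm["1G_C10"], onu_sensitivity_dbm["1G"])
print(round(ds_budget_10g, 1), round(ds_budget_1g, 1))
```

The 10 Gb/s result reproduces the 19.8 dB quoted above, and the 1 Gb/s result lands near the upper 37 dB value of the quoted range.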


8.2.4 WR-WDM-PON_DP_4

Description of the item under test

WDM-PON maximum reach evaluation

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT with Gigabit (3) and 10 Gigabit (1) optical modules

Remote Node

WDM-PON ONUs (1G and 10G)

Ethernet traffic generator

Purpose: Evaluate WDM-PON maximum reach for ONUs working at 1 Gb/s and 10 Gb/s

Use case: UC 7 – UC 8

Preconditions: OLT optical modules corresponding to 1574.54 nm (10 Gigabit) and 1576.20 nm (1 Gigabit) are plugged into the network card and connected to OLT C/L band filters.

OLT output port is connected to RN COM port via optical fibre coils

10G WDM-PON ONU is connected to port

1G WDM-PON ONU is connected to port

Test equipment: Optical Spectrum Analyser

[Test setup diagram: WDM-PON OLT connected via optical fibre coils and the Remote Node to a 10G WDM-PON ONU and a 1G WDM-PON ONU, with OSA, Laptop and Ethernet Generator]


Laptop

Ethernet generator/analyser

Test procedure: 1. Connect the 10G WDM-PON ONU to port 1 and the 1G WDM-PON ONU to port 3. Check wavelengths and frequencies are stable.

2. Generate bidirectional Ethernet traffic on each wavelength

3. Increase the optical fibre length between the OLT COM port and the Remote Node COM port. Observe ONU synchronization and the impact on the Ethernet traffic.

Pass-fail criteria: Note maximum reach for each optical module type. Dissociate chromatic dispersion limitations from sensitivity ones (optical budget limitation)

Observation:

We did not manage to conduct this test case, as it was not feasible to increase the optical fibre length between the OLT COM port and the remote node. The available fixed fibre infrastructure of the Lannion fibre ring was used for connecting the ONUs with the OLT. The ONUs were used to connect the user equipment of the functional demos with the UAG. Therefore, a change of the fibre length to evaluate the maximum reach would have interrupted the functional tests.


8.2.5 WR-WDM-PON_DP_5

Description of the item under test

CPRI traffic transmission over 10G WDM-PON ONU

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT 10 Gigabit optical modules (1)

20 km of optical fibre

Remote Node

10G WDM-PON ONUs

CPRI traffic generator/analyser

Purpose: Evaluate 10G WDM-PON performances with respect to CPRI traffic transport

Use case: UC 7 – UC 8

Preconditions:

The OLT optical module corresponding to 1574.54 nm (10 Gigabit) is plugged into the Network card and connected to the OLT C/L band filters

OLT output port is connected to RN COM port via 20 km of optical fibre

10 G WDM-PON ONU is connected to port

Test equipment:

Optical Spectrum Analyzer

Laptop

[Test setup diagram: WDM-PON OLT connected via 20 km of fibre and the Remote Node to a 10G WDM-PON ONU, with OSA, Laptop and CPRI Generator]


CPRI generator/analyzer

Test procedure:

1. Connect 10G WDM-PON ONU to port 1. Check wavelength and frequency are stable

2. Generate bidirectional CPRI traffic at different CPRI data rates on 10 G WDM-PON ONU

3. Measure at each CPRI data bit rate, the frame loss and latency

Pass-fail criteria:

Note CPRI frame loss and latency for CPRI data line rates (2.45 Gb/s - 4.91 Gb/s - 6.14 Gb/s – 9.83 Gb/s)

Observation:

                                                       2.45 Gb/s   4.91 Gb/s   9.83 Gb/s   Maximum values according to CPRI specification
Deterministic jitter at transmitter                    58.2 mUI    139.9 mUI   192.3 mUI   170 mUI
Total jitter at transmitter                            71.7 mUI    139.9 mUI   192.3 mUI   350 mUI
Deterministic jitter at receiver                       103.5 mUI   199.8 mUI   383.2 mUI   370 mUI
Combined deterministic and random jitter at receiver   135.9 mUI   271.7 mUI   536.8 mUI   550 mUI
Total jitter at receiver                               136 mUI     271.5 mUI   537.8 mUI   650 mUI

In these measurements (see the above table), the compliance of the WDM-PON system with the CPRI standard has been evaluated at different line rates. It has been shown that for CPRI options 3 (2.45 Gb/s) and 5 (4.91 Gb/s) the maximum jitter specification is fulfilled. The maximum admissible jitter values were slightly exceeded only for a line rate of 9.83 Gb/s (CPRI option 7); however, this could easily be fixed by using clock recovery at the ONU side. The latency introduced by the WDM-PON system was about 130 ns. This proved the compatibility of the WR-WDM-PON system with the mobile fronthaul requirements.
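The compliance check summarised above can be expressed directly from the table values, comparing each 9.83 Gb/s measurement against its CPRI limit:

```python
# CPRI maximum jitter values (mUI) and the 9.83 Gb/s measurements from the table
cpri_max_mui = {
    "deterministic_tx": 170, "total_tx": 350,
    "deterministic_rx": 370, "combined_rx": 550, "total_rx": 650,
}
measured_9g83_mui = {
    "deterministic_tx": 192.3, "total_tx": 192.3,
    "deterministic_rx": 383.2, "combined_rx": 536.8, "total_rx": 537.8,
}

# Metrics exceeding the CPRI limit at 9.83 Gb/s (option 7)
violations = sorted(k for k, v in measured_9g83_mui.items() if v > cpri_max_mui[k])
print(violations)
```

Running the same check with the 2.45 Gb/s and 4.91 Gb/s columns yields no violations, consistent with the pass for CPRI options 3 and 5.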


8.2.6 WR-WDM-PON_DP_6

Description of the item under test

CPRI transport, EVM and jitter measurement

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT 10 Gigabit optical modules (1)

20 km of optical fibre

Remote Node

10G WDM-PON ONUs

LTE signal generator and Electrical Spectrum Analyser

Purpose: Measure impact of the WDM-PON system on EVM and jitter

Use case: UC 7

Preconditions: The OLT optical module corresponding to 1574.54 nm (10G) is plugged into the network card and connected to the OLT C/L band filters

OLT output port is connected to RN COM port via 20 km of optical fibre

10G WDM-PON ONU is connected to port

Test equipment: Optical Spectrum Analyser

Laptop

LTE generator and electrical spectrum analyser

[Test setup diagram: laptop and WDM-PON OLT connected via 20 km of fibre and the remote node to the 10G WDM-PON ONU; an OSA monitors the wavelengths, and the LTE generator and electrical spectrum analyser are attached at measurement points P1, P2 and P3.]


Test procedure: 1. Connect 10G WDM-PON ONU to port

2. Configure Rohde & Schwarz LTE generator

3. Use Electrical Spectrum Analyzer and Rohde & Schwarz applications to measure EVM and jitter at the output of the ONU

Pass-fail criteria: Measure EVM and jitter for different data line rates (2.45 Gb/s – 4.91 Gb/s – and 9.83 Gb/s)

Observation:

EVM was measured using the LTE test model E-TM3.1. For this test model, the 64-QAM modulation format, which has the most stringent EVM requirement according to the 3GPP standard (< 9%), was used. The EVM measurement was performed while attenuating the received optical power at the ONU down to -25.5 dBm (the value corresponding to a CPRI BER of 10-10). For all tested data bit rates and a received power greater than -25.5 dBm, the measured EVM was 0.07%, well within the 3GPP limit of 9%. For a received power lower than -25.5 dBm, the required EVM value was exceeded.
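The EVM pass/fail decision can be illustrated with a minimal RMS-EVM computation. This is a sketch under the usual RMS definition, not the Rohde & Schwarz measurement application used in the test, and the constellation values are made up.

```python
# Illustrative RMS-EVM computation; the I/Q values below are toy data,
# chosen so the result matches the ~0.07 % EVM reported above.
import math

EVM_LIMIT_64QAM = 9.0  # percent; 3GPP requirement for 64-QAM (E-TM3.1)

def evm_percent(ideal, received):
    """RMS error vector magnitude, normalised to the RMS of the ideal symbols."""
    err = sum(abs(r - i) ** 2 for i, r in zip(ideal, received))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
received = [s * 1.0007 for s in ideal]  # tiny gain error ~ 0.07 % EVM

evm = evm_percent(ideal, received)
print(f"EVM = {evm:.2f} %, pass = {evm < EVM_LIMIT_64QAM}")  # → EVM = 0.07 %, pass = True
```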

Jitter measurement was carried out at the OLT output (point P1), which was taken as the reference. At the output of the WCB card (P2), jitter was measured to quantify the impact of adding the downstream tone (ECC). Finally, jitter was measured at the input of the analyser (point P3). The results are summarized in the following tables:

Measurement at 2.45 Gb/s

                        P1          P2          P3
Total jitter            71.7 mUI    78.0 mUI   136.0 mUI
Random jitter           14.1 mUI    11.2 mUI    32.4 mUI
Deterministic jitter    58.2 mUI    67.2 mUI   103.5 mUI

Measurement at 4.91 Gb/s

                        P1          P2          P3
Total jitter           139.9 mUI   104.3 mUI   271.5 mUI
Random jitter            0 mUI       0 mUI      71.9 mUI
Deterministic jitter   139.9 mUI   104.3 mUI   199.8 mUI

Measurement at 9.83 Gb/s


                        P1          P2          P3
Total jitter           192.3 mUI   173.4 mUI   537.8 mUI
Random jitter            0 mUI      19.7 mUI   153.6 mUI
Deterministic jitter   192.3 mUI   153.0 mUI   383.2 mUI


8.2.7 WR-WDM-PON_DP_7

Description of the item under test

Transport of LTE signal radio and frequency deviation

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT 10 Gigabit optical modules (1)

20 km of optical fibre

Remote Node

10G WDM-PON ONUs

Electrical Spectrum Analyser

Commercial BBU and RRH

Purpose: Measure impact of the WDM-PON system on LTE signal transport by frequency deviation over time

Use case: UC 7

Preconditions:

OLT optical modules corresponding to 1574.54 nm (10 Gigabit) is plugged into the Network card and connected to OLT C/L band filters

OLT output port is connected to RN COM port via 20 km of optical fibre

10 G WDM-PON ONU is connected to port

Test equipment: Optical Spectrum Analyser, Laptop, LTE generator and electrical spectrum analyser

[Test setup diagram: BBU at the WDM-PON OLT connected via 20 km of fibre and the remote node to the 10G WDM-PON ONU and RRH; an OSA and a laptop monitor the link.]


Test procedure:

1. Connect BBU to WDM-PON OLT

2. Connect RRH to 10G WDM-PON ONU

3. Connect an electrical spectrum analyser to the RRH output

4. Measure frequency deviation during 12 hours

Pass-fail criteria:

Observe 2.65 GHz LTE radio signal frequency deviation during 12 hours. It has to follow 3GPP specifications (deviation < 50 ppb)

Observation:

In the back-to-back configuration, the frequency deviation was ±1.4 ppb. When the WDM-PON was inserted, 99.99% of the measured points presented a frequency deviation of ±2 ppb; the few remaining points had a frequency deviation of up to 20 ppb. The reason for these outliers is not clear: they may be due to electrical jitter, a fluctuation of the optical power, or ONU wavelength deviation. Nevertheless, all values remained below the maximum defined by the 3GPP standard (±50 ppb).
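The 12-hour pass criterion can be expressed as a simple post-processing step over the logged deviation samples. This is an illustrative sketch; the sample values below are made up, not the recorded data.

```python
# Illustrative classification of logged frequency-deviation samples (in ppb)
# against the 3GPP limit of ±50 ppb used as the pass criterion above.

PPB_LIMIT = 50.0  # 3GPP limit on radio frequency deviation

def deviation_stats(samples_ppb):
    """Summarise logged frequency-deviation samples (in ppb)."""
    within = sum(1 for s in samples_ppb if abs(s) <= 2.0)
    worst = max(abs(s) for s in samples_ppb)
    return {
        "pct_within_2ppb": 100.0 * within / len(samples_ppb),
        "worst_ppb": worst,
        "pass_3gpp": worst <= PPB_LIMIT,
    }

samples = [1.4, -1.8, 0.5, 2.0, -0.3, 20.0, 1.1, -1.4]  # illustrative log
stats = deviation_stats(samples)
print(stats["pass_3gpp"], stats["worst_ppb"])  # → True 20.0
```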


8.2.8 WR-WDM-PON_CM_1

Description of the item under test

1 Gb/s WDM-PON ONUs connection to the OLT via the Remote Node

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT with Gigabit optical modules

WDM-PON ONU equipped with 1 Gigabit tunable optical modules

Remote Node

Laptop to follow Tone frequency deviation

Purpose: Observe ONU wavelength tunability and measure the convergence time for each wavelength

Use case: UC 7 – UC 8

Preconditions: OLT optical modules are inserted in the network card and connected to C/L band filters

OLT output port is connected to RN COM port

WDM-PON ONU 1 is connected to port 2

WDM-PON ONU 2 is connected to port 3

Test equipment: Optical Spectrum Analyser, Laptop

Test procedure: 1. Switch on WDM-PON ONU 1 and observe via OSA connected to WCB the upstream wavelength stabilization around 1531.12 nm

2. Observe via ADVA laptop tone frequency position around f2

[Test setup diagram: WDM-PON OLT connected via the remote node to two 1G WDM-PON ONUs; an OSA and a laptop monitor the upstream wavelengths.]


3. Switch on WDM-PON ONU 2 and observe via OSA connected to WCB the upstream wavelength stabilization around 1532.08 nm

4. Observe via ADVA laptop tone frequency position around f1

Pass-fail performances criteria:

WDM-PON ONU 1 is stabilized at 1530.38 nm (f < x GHz)

WDM-PON ONU 2 is stabilized at 1532.08 nm (f < x GHz)

Observation:

Both 1G ONUs were tuned to the correct wavelengths within around 180 s.
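The convergence check behind this observation can be sketched as a polling loop with the observed ~180 s tuning time as timeout. This is an illustrative sketch: `read_wavelength_nm` is a hypothetical stand-in for the OSA readout, and the lock tolerance is our assumption.

```python
# Illustrative convergence check: poll the measured upstream wavelength
# until it settles within a tolerance of the target, or time out.
import time

def wait_for_lock(read_wavelength_nm, target_nm, tol_nm=0.05,
                  timeout_s=180.0, poll_s=1.0, sleep=time.sleep):
    """Poll until |measured - target| <= tol_nm.

    Returns the elapsed time in seconds, or None on timeout.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if abs(read_wavelength_nm() - target_nm) <= tol_nm:
            return time.monotonic() - start
        sleep(poll_s)
    return None

# Simulated OSA readings converging on the ONU 2 target of 1532.08 nm
readings = iter([1532.60, 1532.20, 1532.08])
elapsed = wait_for_lock(lambda: next(readings), 1532.08, sleep=lambda s: None)
print(elapsed is not None)  # → True
```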


8.2.9 WR-WDM-PON_CM_2

Description of the item under test

10G WDM-PON ONU tunability capabilities

Test report:

Pass:

Fail:

Test Setup:

WDM-PON OLT with Gigabit (3) and 10 Gigabit (1) optical modules

Remote Node

WDM-PON ONUs

Purpose: Evaluate 10G WDM-PON tuning capacity

Use case: UC 7 – UC 8

Preconditions: OLT optical modules corresponding to 1574.54 nm (10 Gigabit) and 1576.20 nm (1 Gigabit) are plugged into the Network card and connected to OLT C/L band filters.

OLT output port is connected to RN COM port

Test equipment: Optical Spectrum Analyzer

Laptop

Test procedure: 1. Connect the 10G WDM-PON ONU to port 1 and check via wavelength (OSA) and frequency (laptop) measurements that the ONU is able to tune and stabilize its wavelength (λ1US = 1530.38 nm)

2. Connect the 10G WDM-PON ONU to port 3 (1G OLT optical module) and check via wavelength (OSA) and frequency (laptop) measurements that the ONU is able to tune and stabilize its wavelength (λ3US = 1532.08 nm)

[Test setup diagram: WDM-PON OLT connected via the remote node to a 10G WDM-PON ONU and a 1G WDM-PON ONU; an OSA and a laptop monitor the wavelengths.]


Pass-fail criteria: 10G WDM-PON ONU is able to tune its wavelength when connected to port 1

10G WDM-PON ONU is able to tune its wavelength when connected to port 3

Observation:

The 10G ONU was able to be tuned from 1530.38 nm to 1532.08 nm in 180 s.


8.3 DWDM-centric architecture

The test plan for the DWDM-centric architecture contains three data plane test cases and one control and management test case.

8.3.1 DWDM-Centric_DP_1

Description of the item under test

Evaluation of filters used to block interference signals at the WSS ports in the DWDM-centric solution.

Test report:

Pass:

Fail:

Test Setup: DWDM-Centric solution with reflection blocking AWG filters

Purpose: Test and validation of the DWDM-centric demo platform upgraded with AWG filters in order to block interfering reflection signals

Use case UC 5

Preconditions: The DWDM-centric demo platform is based on a bi-directional single fibre solution, which has shown some reflection induced impairments and interference between the different channels. We have updated the set-up with optical filters (AWGs) to limit these impairments.

Test equipment:

Optical signal generator

Optical Spectrum analyser

BER equipment


Test procedure:

1. Deploy the DWDM centric platform

2. Measure the reflection at the WSS ports

3. Alternatively, connect traffic generators

4. Measure the frame losses

Pass-fail criteria:

Pass: The reflection is reduced, in fact eliminated, compared to the old set-up reported in D6.2 [1]

Observation:

The insertion of AWG filters instead of circulators in the set-up effectively blocked the interference between the channels due to reflections at the WSS ports. Spectrum analyser measurements at the different ports of the WSS show no measurable interference signal between the WSS ports.


8.3.2 DWDM-Centric_DP_2

Description of the item under test

Evaluation and selection of the transport of synchronization signal delivery to the RBSs.

Test report:

Pass:

Fail:

Test Setup: same as DWDM-Centric_DP_1

Purpose: Normally the RBSs in the set-up get their clock reference signal from NTP or PTP servers located at the EPC. During the demo in Lannion, the RBSs were connected to an EPC in Stockholm over the Internet, which meant that an NTP/PTP based synchronization solution might not work. In this test we validate packet based and GPS based reference signals and choose between the two alternative solutions based on the test results.

Use case UC 5

Preconditions: Availability of GPS based clock reference signal

Test equipment: Alt-1:

Packet based NTP sync over the Internet

RBS Digital Unit

Alt-2:

GPS Receiver Unit (GRU) 0201

GPS Antenna/LNA (Low Noise Amplifier),

RBS Digital Unit

Test procedure: Test and validate the GPS receivers:

1. Mount the antennas in position for LOS

2. Connect the GPS receiver to the RBS

3. Connect the RAN to the EPC

4. Observe the synchronization status of the RBS

Pass-fail criteria:

Pass: The RBS clock is locked to either the NTP reference signal or the 1 PPS reference signal from the GPS receiver.


Observation:

A test with NTP sync traffic from the core network over the Internet showed only intermittent sync support and could not be used. With the GPS sync signal, however, once the system clock entered the locked mode, it remained locked during the whole demo period, as shown in the sync status report below:

160427-17:44:24 10.21.64.10 11.0p ERBS_NODE_MODEL_E_1_63

stopfile=/tmp/4375

SystemClock: LOCKED_MODE

-------------------------------------------------------

Prio Activity RefState AdmState OpState SyncReference

-------------------------------------------------------

1 ACTIVE OK UNLOCKED ENABLED

Subrack=1,Slot=1,PlugInUnit=1,TimingUnit=1,GpsSyncRef=1

2 INACTIVE OK UNLOCKED ENABLED

IpAccessHostEt=1,IpSyncRef=1 (ntpServerIp=10.47.6.47)
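The pass criterion (clock locked to a reference) can also be checked mechanically from printouts like the one above. The following is a small parsing sketch; the report string is a trimmed copy of the output format.

```python
# Illustrative parser for the RBS sync status printout shown above.

def sync_locked(report):
    """True when LOCKED_MODE is reported and at least one reference is ACTIVE."""
    lines = report.splitlines()
    locked = any("SystemClock: LOCKED_MODE" in line for line in lines)
    # The second whitespace-separated field of a reference row is its activity.
    active = any(line.split()[1:2] == ["ACTIVE"] for line in lines)
    return locked and active

report = """SystemClock: LOCKED_MODE
Prio Activity RefState AdmState OpState SyncReference
1 ACTIVE OK UNLOCKED ENABLED GpsSyncRef=1
2 INACTIVE OK UNLOCKED ENABLED IpSyncRef=1
"""
print(sync_locked(report))  # → True
```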


8.3.3 DWDM-Centric_DP_3

Description of the item under test

Evaluation of the connectivity and configuration of the VPN connection between the RAN and transport network and the EPC

Test report:

Pass:

Fail:

Test Setup: Test of RAN to EPC VPN connection

Purpose: The purpose of this test is to verify the VPN connection between the RAN in Lannion and the EPC in Stockholm

Use case UC 5

Preconditions: The RBS clock is synchronized and the DWDM platform is configured for connecting the RAN and the EPC

Test equipment:

RBSs

DWDM centric access aggregation transport platform

VPN and firewall service Gateway

EPC

Test procedure:

Connect the RAN and the EPC through the DWDM centric platform, using the gateways over the Internet, and verify that the connection is up and the configured cells are up and running.

Pass-fail criteria:

Pass: All the configured cells are up and running

Observation: The cell status report below shows that the 3 configured cells are up and running, validating the RAN, the VPN and the EPC.

160427-17:43:16 10.21.64.10 11.0p ERBS_NODE_MODEL_E_1_63 stopfile=/tmp/4375

====================================================================

Proxy Adm State Op. State MO

====================================================================

1243 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=1

1268 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=2

1293 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=3

1318 1 (UNLOCKED) 0 (DISABLED) ENodeBFunction=1,EUtranCellFDD=4
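The pass criterion (all configured cells up and running) can likewise be checked from the printout. The following illustrative parser runs over a trimmed copy of the report above.

```python
# Illustrative parser for the cell status report shown above: count the
# EUtranCellFDD entries whose operational state is 1 (ENABLED).

def enabled_cells(report):
    """Return the MO names of cells reported with Op. State ENABLED."""
    cells = []
    for line in report.splitlines():
        parts = line.split()
        if parts and "EUtranCellFDD" in parts[-1] and "(ENABLED)" in parts:
            cells.append(parts[-1])
    return cells

report = """1243 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=1
1268 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=2
1293 1 (UNLOCKED) 1 (ENABLED) ENodeBFunction=1,EUtranCellFDD=3
1318 1 (UNLOCKED) 0 (DISABLED) ENodeBFunction=1,EUtranCellFDD=4
"""
print(len(enabled_cells(report)))  # → 3
```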


8.3.4 DWDM-Centric_CM_1

Description of the item under test

Evaluation of the RAN and transport controllers implemented in the demo set-up

Test report:

Pass:

Fail:

Test Setup: Test and validation of the use case 5

Purpose: The purpose of this test is to validate the controllers and demonstrate the ability of the solution to dynamically allocate transport and RAN resources to RBSs.

Use case UC 5

Preconditions: The RBS clock is synchronized, the VPN connection between RAN and EPC is established and the cells are up and running.

Test equipment:

Video streaming in MacroCell1 & MacroCell2

Traffic monitoring connection to the RBSs

SDN based Control and orchestration

Elastic mobile broadband service (EMBS)

Test procedure:

1. Increase the traffic throughput in MacroCell1

2. When traffic in MacroCell1 reaches a specified threshold level, the EMBS, through the transport controller, configures transport resources for SmallCell1.

3. Start SmallCell1 to offload traffic from MacroCell1

4. The traffic in MacroCell2 increases over a given threshold, while in MacroCell1 the traffic decreases under a specified threshold.

5. The EMBS, which is monitoring the traffic in MacroCell1 & MacroCell2, releases the transport & BBU resources from SmallCell1 and configures transport & BBU resources for SmallCell2 to offload MacroCell2.

Pass-fail criteria:

Pass: The EMBS flexibly allocates transport and BBU resources to offload Cells with high traffic demands.

Observation:

Activation and de-activation of the small cells to offload the traffic capacity in the macro cells in the different areas was tested and validated. In the figure below, as the traffic demand in residential areas (green) surpassed that of the business area (blue), a small cell was activated to offload traffic from the macro cell in the residential area.
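The EMBS decision logic exercised in this test can be summarised as threshold-based activation. The following is a simplified sketch with made-up thresholds, not the actual controller code.

```python
# Simplified sketch of the EMBS offload logic: activate a macro cell's
# small cell when its load crosses a high threshold, release the transport
# and BBU resources when it drops below a low threshold. Thresholds are
# illustrative assumptions, not the demo values.

HIGH, LOW = 0.8, 0.4  # load thresholds (fraction of capacity), assumed

def embs_step(loads, small_cell_on):
    """Return the updated small-cell activation map for the measured loads."""
    state = dict(small_cell_on)
    for cell, load in loads.items():
        if load > HIGH:
            state[cell] = True    # configure transport & BBU, start small cell
        elif load < LOW:
            state[cell] = False   # release transport & BBU resources
    return state

state = {"MacroCell1": False, "MacroCell2": False}
state = embs_step({"MacroCell1": 0.9, "MacroCell2": 0.3}, state)
state = embs_step({"MacroCell1": 0.2, "MacroCell2": 0.85}, state)
print(state)  # → {'MacroCell1': False, 'MacroCell2': True}
```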


9 Annex 2 - Functional Convergence Test Plan and Results

9.1 UAG Functionality

This test aims at validating the correct configuration and behaviour of the UAG. Two test cases are defined: the validation of the remote connectivity to the UAG via OpenVPN, and the validation of L2 access, interconnection and service VLAN connectivity. Both test cases passed.

9.1.1 Test case UAG_ TC_1

Description of the item under test

Validation of remote connectivity to the UAG via OpenVPN

Test report:

Pass:

Fail:

Test Setup: OpenVPN tunnels are used to provide remote connectivity to the UAG (especially the NFV server) for all COMBO WP6 partners. The OpenVPN server is hosted at the CTTC premises and is used to push routes to all clients (AITIA, FON, JCP and BME) with destination the UAG LAN subnet (located at the ADVA labs):

Purpose: The aim was to ensure that partners developing their corresponding VNFs to be integrated into the UAG (e.g., vEPC, uAUT, uDPM DE, etc.) were able to access the NFV server remotely prior to the rehearsal and final demonstration in Lannion. By doing so, partners could gain familiarity with the OpenStack tooling used on the NFV server. The purpose was thus to accelerate the integration work to be done in Lannion by providing this connectivity among the different partner sites.

Use case: Distributed NG-POP, UC5-UC7, UC8

[Network diagram: OpenVPN server at CTTC reachable over the Internet (UDP port 1196) by OpenVPN clients at ADVA (NFV server and emulated RAN/eNB in Lannion, subnet 192.168.151.128/25), AITIA, JCPC, BME and FON.]


Preconditions: OpenVPN application is installed on each COMBO WP6 partner device in order to gain remote access; secure certificates and private keys must be generated and shared between OpenVPN server and client administrators

Test equipment:

Additional VM for hosting the openVPN application

Regular Linux networking tools such as ping and traceroute

Test procedure:

1. OpenVPN application is installed on a VM running Ubuntu14.04.

2. Routing is configured on the machine in order to forward the traffic to and from the services running on the UAG.

3. Traceroute and ping are used in order to test the paths and connectivity between UAG LAN and partners’ remote LANs;

Pass-fail criteria:

Pass: Routes are present in the UAG routing table and the ping is able to reach the remote LANs; an additional condition must be met in order to pass bidirectional connectivity: a ping command sourced from a remote LAN must reach the UAG as well.

Fail: Otherwise.

Observation:

The conducted tests verified that the CTTC server (where the LTE eNodeB is launched) is able to reach the subnet at ADVA (192.168.151.128/25) where the CTTC vEPC is instantiated. ADVA’s UAG NFV server hosts the vEPC developed by CTTC, and the CTTC eNodeBs can reach these mobile core elements using the OpenVPN tunnels. The round-trip time between the CTTC eNodeBs and the vEPC at ADVA’s UAG, measured by basic ICMP ping, is around 70 ms.
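A connectivity and latency check of this kind can be automated by parsing the `ping` summary. The sketch below is illustrative: the output text and the 192.168.151.130 target address are made up (though the address lies in the subnet above).

```python
# Illustrative parser for the Linux `ping` statistics summary, used to
# extract the average RTT over the OpenVPN tunnel.
import re

def mean_rtt_ms(ping_output):
    """Return the average RTT in ms from a `ping` summary, or None."""
    m = re.search(r"rtt min/avg/max/mdev = [\d.]+/([\d.]+)/", ping_output)
    return float(m.group(1)) if m else None

sample = ("--- 192.168.151.130 ping statistics ---\n"
          "5 packets transmitted, 5 received, 0% packet loss\n"
          "rtt min/avg/max/mdev = 68.1/70.3/73.9/1.8 ms\n")
print(mean_rtt_ms(sample))  # → 70.3
```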


9.1.2 Test case UAG_ TC_2

Description of the item under test

Validation of L2 access, interconnection and service VLAN connectivity

Test report:

Pass:

Fail:

Test Setup: VLAN tagging is used to separate and identify traffic not only coming into the UAG but also inside the UAG.

Based on tags, forwarding rules can be made from the SDN controller for:

the carrier Ethernet switch aggregating the incoming traffic coming into the UAG

Open vSwitch inside the NFV server

Purpose: Ensure that traffic flow is forwarded correctly between VNFs and all network functions are able to communicate effectively;

Use case: UC5-UC7, UC8

Preconditions: SDN controller must be connected to the carrier Ethernet switch through OpenFlow protocol and be the manager for Open vSwitch;

Test equipment: Ping tool is used to test the connection between the IPs in the same VLAN;

Flow tables in Open vSwitch for each assigned internal VLAN are displayed in the CLI;


Flow tables in the carrier Ethernet switch are checked in the Network Element Director interface;

Test procedure: 1. Start NFV server

2. Start carrier Ethernet switch

3. Connect the SDN controller to the carrier Ethernet switch and Open vSwitch

4. Dump flow tables showing VLAN forwarding on both switches

5. Check L2/VLAN connectivity by issuing ping commands

Pass-fail criteria:

Pass: Appropriate flow table entries are present in the dump flow results from the two switches; Ping commands are able to reach their targets;

Fail: otherwise

Observation:

The VLAN configuration shown in Figure 43 was verified. Ping commands were successful, and additional throughput performance tests were conducted as part of the final demonstration experiments documented in section 4.
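Step 4 of the procedure (dumping the flow tables) can be verified programmatically. The sketch below scans an `ovs-ofctl dump-flows` style dump for the expected VLAN match rules; the dump text and VLAN IDs are illustrative, not the demo configuration.

```python
# Illustrative check: confirm a forwarding rule exists for each expected
# service VLAN in an `ovs-ofctl dump-flows` style dump.
import re

def vlans_with_rules(dump):
    """Return the set of VLAN IDs appearing in dl_vlan match fields."""
    return {int(v) for v in re.findall(r"dl_vlan=(\d+)", dump)}

dump = ("cookie=0x0, priority=100,dl_vlan=101 actions=output:2\n"
        "cookie=0x0, priority=100,dl_vlan=102 actions=output:3\n")
expected = {101, 102}
print(vlans_with_rules(dump) >= expected)  # → True
```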


9.2 vEPC functionality

This section reports the set of individual validations performed to attain the instantiation of two different EPC implementations: the commercial Amari EPC and the CTTC LENA emulator. In both EPC implementations, the control and user planes are included in separate VMs (as VNFs) hosted on the UAG’s NFV server. It is worth stressing that this demonstrates the appealing network-sharing capability of the designed UAG architecture: different network operators/providers can deploy their own VNFs hosted within the same physical entity (i.e., the UAG NFV server).

9.2.1 Test case vEPC_ TC_1

Description of the item under test

Validation of virtual EPC functionality.

Test report:

Pass:

Fail:

Test Setup: A Wi-Fi access point and an eNodeB are connected to the UAG in order to provide access to users through both networks, LTE and Wi-Fi. There are two pieces of user equipment, a mobile phone and a laptop. Both have a SIM card installed.

Purpose: Ensure that the underlying software infrastructure is available for the UAG, uAUT, uDPM tests.

Use case: UC1, UC2, UC4, UC5, UC7, UC8

Preconditions: A Fedora Linux virtual machine is installed in a hypervisor. The virtual machine has two network adapters: one for the S1 interface and one for the SGi EPC interface. SCTP communication is supported by the networking hardware. An eNodeB is configured and ready to connect to the EPC virtual machine's S1 interface (SCTP port 36412 and UDP port 2152). The User Equipment has a USIM card whose secret key is also loaded into the virtual EPC's HSS.

Test equipment: NFV server. Gigabyte Mini PC running the eNodeB software. National Instruments USRP N210 radio head connected to the mini PC via an Ethernet cable. The mini PC is connected to the NFV server via an Ethernet connection. The NFV server is connected to the Internet. A laptop computer acting as a UE. A USB LTE modem. A test USIM card inserted into the USB LTE modem.

Test procedure: The NFV server is started up. The virtual machine is started up. The Amari EPC software is started up. The Mini PC is started up. The USRP radio head is started up. The Amari ENB software is started on the mini PC. The Amari ENB performs the S1Setup procedure: the ENB-EPC connection is established. The UE laptop is started up. The USIM card is inserted into the USB LTE modem.


The USB LTE modem is plugged into the UE laptop. The UE laptop performs the LTE Attach procedure and a default bearer is assigned to the UE. The UE starts a web browser and downloads a page from an HTTP server.

Pass-fail criteria:

Pass: The requested HTTP page appears on the UE screen. Fail: Otherwise

Observation:

The functionality of the virtual EPC was validated, and the requested HTTP page appeared on the UE screen. A detailed description of the vEPC functionality is contained in section 4.


9.2.2 Test case vEPC_TC_2

Description of the item under test

Validation of virtual EPC functionality of a second EPC operator.

Test report:

Pass:

Fail:

Test Setup: The vRAN implementation of the CTTC LENA emulator is connected to the vEPC implementation of the CTTC LENA emulator running in the NFV server co-located with the UAG.

[Test setup diagram: CTTC LENA vRAN (eNB with client application behind an LTE modem, in a laptop) connected over the S1-MME and S1-U interfaces to the CTTC LENA vEPC (MME, SGW, PGW, in the NFV server), which connects over the SGi interface to an Internet host running the server application.]

Purpose: The purpose is to have a second instantiation of a vEPC on top of the NFV server for multi-operator demonstration purposes, i.e. the network-sharing capability of the UAG

Use case: UC1, UC6 and UC8

Preconditions: An Ubuntu virtual machine is installed in the NFV server. This virtual machine implements the CTTC LENA vEPC functionality. It has two network adapters: one for the S1 interface and another for the SGi interface.

Test equipment:

NFV server with the vEPC VM. A laptop acting as the vRAN connected to the vEPC running in the NFV server. Another laptop acting as the Internet host connected to the SGi interface of the CTTC LENA vEPC.

Wireshark protocol analyser connected to the S1 and SGi interfaces.


Test procedure:

The NFV server is started up. The vEPC VM is started up. The vRAN is started up. The eNB is connected to the vEPC using the S1 interface (S1-MME for CP and S1-U for DP). The UE is attached to the eNB and the default bearer and some dedicated bearers are established. The Internet host launches a video server and the UE connects to the Internet host. Data flows between the Internet host and the UE.

Pass-fail criteria:

The test is considered valid when control signalling data pass through the S1-MME interface and user data pass through the S1-U and SGi interfaces.
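The pass criterion separates control and user traffic by interface; in a packet trace this maps onto well-known ports (SCTP port 36412 for S1-MME and UDP port 2152 for GTP-U on S1-U, the ports also used in vEPC_TC_1). A minimal classification sketch, assuming destination-port matching is sufficient:

```python
# Illustrative classification of trace packets into S1-MME control traffic
# and S1-U user traffic by transport protocol and destination port.

S1_MME_SCTP_PORT = 36412  # S1-AP over SCTP
S1_U_UDP_PORT = 2152      # GTP-U over UDP

def classify(proto, dport):
    """Label a packet by its transport protocol and destination port."""
    if proto == "sctp" and dport == S1_MME_SCTP_PORT:
        return "S1-MME (control)"
    if proto == "udp" and dport == S1_U_UDP_PORT:
        return "S1-U (user)"
    return "other"

print(classify("sctp", 36412))  # → S1-MME (control)
print(classify("udp", 2152))    # → S1-U (user)
```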

Observation:

The vEPC is deployed in a separate VM within the UAG’s NFV server. The Horizon Dashboard of the NFV server showed the VM containing the vEPC and the two subnetworks: one connected to the vRAN through the S1 interface and the other one to the Internet host through the SGi interface.

The vRAN is started in a laptop. The eNodeB is attached to the vEPC. The UE is attached to the eNB and the bearers are established. The Internet host launches a video server and the UE connects to the Internet host. Data flows between the Internet host and the UE.

Wireshark pcap traces captured the registration of the eNodeB in the SGW and some data packets.


The data rate of the video received by the UE was between 1 and 2 Mb/s.


9.3 uAUT functionality

This section describes the tests performed to validate the uAUT functionality and presents the results obtained.

Three test cases have been defined for the uAUT functionality: two that validate the intermediate development phases and one integration test that validates the final deployment.

uAUT_TC_2 and uAUT_TC_3 are the tests that validate the development phases; they have been performed in order to verify the solution step by step. uAUT_TC_1, on the other hand, has been performed during the final demonstration and validates the correct integration of the uAUT and the mobile network in the UAG.

9.3.1 Test case uAUT_TC_1

The test case proposed in this section describes the complete demonstration of the uAUT functionality, i.e. it tests the integrated operation of the authentication in the mobile network and the authentication in the Wi-Fi network.

This test case was performed during the final demonstration of the project in Lannion.

Description of the item under test

Validation of the integration between the authentication in the mobile network and the Wi-Fi network

Test report:

Pass:

Fail:

Test Setup: A Wi-Fi access point and an eNodeB are connected to the UAG in order to provide access to users through both networks, LTE and Wi-Fi. There are two pieces of user equipment, a mobile phone and a laptop. Both have a SIM card installed.

The Wi-Fi access point sends the authentication query to the uAUT entity, which is virtualized and runs in the UAG. The uAUT is in charge of redirecting the authentication query to the Radius server. The EPC of the mobile network is also virtualized and runs in the UAG.

The hardware setup that was deployed is explained in section 3.3.1 of this document. The laptop is used to take measurements in the LTE network.

Purpose: Verify that user equipment can authenticate seamlessly, with the same credentials, to both the LTE and Wi-Fi networks, employing Hotspot 2.0, the credentials stored in the SIM card and an EAP procedure. Moreover, the uAUT, the virtualization of the EPC and the integration of both entities in the UAG should be validated in this test case.

Use case: UC01, UC06

Preconditions:

The uAUT is connected to the Radius server of the Wi-Fi network and acts as a proxy redirecting the authentication requests. The mobile phone has the Hotspot 2.0 policy provisioned. Both pieces of UE have a SIM card installed.

In addition, the EPC is virtualized and integrated in the UAG.

Test equipment: Wireshark protocol analyser, Wi-Fi monitoring interface, logs of the EPC, the pieces of UE and the Radius server.

Test procedure: The mobile phone is powered on and, with the LTE network running, authenticates in the LTE network. After the successful attach procedure, the AP is powered on. The UE then connects to the Wi-Fi network automatically, without any intervention of the user.

Pass-fail criteria:

The mobile phone must authenticate in the LTE network and in the Wi-Fi network with the SIM credentials. Both authentications must be seamless to the user.

Observation:

First of all, the next figure shows the traces of the authentication of the UE in the mobile network.

In the figure we can see the LTE attach procedure: the identity request sent by the network to the UE, the identity response in which the UE includes its IMSI, and the whole authentication procedure.

In the Wi-Fi network we have used the EAP-AKA procedure and Hotspot 2.0 technology, but their operation is validated in the next two test cases. In this test case we want to validate the integrated operation with the uAUT. Because of that, the next figure shows some traces captured in the uAUT during an authentication process in the Wi-Fi network.

We have to focus on the traces inside the red box. 10.100.120.106 is the access point and 10.10.10.7 is the IP address of the uAUT. The erased IP address is that of the Radius server; it has been redacted because the server is publicly exposed and its address cannot be shown for security reasons.

Following the traces step by step, the AP sends the Access-Request message to the uAUT, which forwards it to the Radius server. When the uAUT receives the response from the Radius server, it relays it to the access point. The exchange continues in the same way until the Access-Accept message. As can be seen, the uAUT acts as a proxy.
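This stateless relay behaviour can be sketched as a small UDP forwarder. The sketch below is an illustration only (plain Python sockets and placeholder message contents, not the actual uAUT implementation, which relays real RADIUS packets):

```python
import socket
import threading

def run_uaut_proxy(listen_sock, server_addr, stop_evt):
    """Relay RADIUS-style datagrams between the AP and the upstream server.

    The proxy keeps no authentication state: each request received from
    the AP is forwarded to server_addr, and the server's reply is sent
    back to the AP unchanged -- the behaviour observed in the traces.
    Illustrative sketch only, not the project's uAUT implementation.
    """
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    upstream.settimeout(1.0)
    listen_sock.settimeout(0.2)
    while not stop_evt.is_set():
        try:
            data, ap_addr = listen_sock.recvfrom(4096)
        except socket.timeout:
            continue
        upstream.sendto(data, server_addr)       # forward to Radius server
        try:
            reply, _ = upstream.recvfrom(4096)   # wait for its answer
        except socket.timeout:
            continue
        listen_sock.sendto(reply, ap_addr)       # relay back to the AP
    upstream.close()
```

In the real deployment the relayed messages carry the EAP attributes shown in the test case uAUT_TC_2 log; the proxy itself never inspects them, which is what makes the uAUT transparent to both the AP and the Radius server.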

In order to evaluate the viability of the architecture in terms of quality of service, we took some performance measurements during the demonstration, with all the entities integrated in the UAG.

First of all, we show the throughput measured between the UE and the Common GW in the LTE network. A good throughput here means that we can provide a service with a good QoS to the final user:

- Measuring throughput between the UE and the Common Gateway

- aitia@benndeb:~$ iperf3 -c 10.10.10.1

Connecting to host 10.10.10.1, port 5201

[ 4] local 10.100.110.104 port 58004 connected to 10.10.10.1 port 5201

[ ID] Interval Transfer Bandwidth Retr Cwnd

[ 4] 0.00-1.00 sec 1.55 MBytes 13.0 Mbits/sec 0 103 KBytes

[ 4] 1.00-2.00 sec 3.34 MBytes 28.1 Mbits/sec 0 242 KBytes

[ 4] 2.00-3.00 sec 3.60 MBytes 30.2 Mbits/sec 0 411 KBytes

[ 4] 3.00-4.00 sec 3.90 MBytes 32.7 Mbits/sec 0 621 KBytes

[ 4] 4.00-5.00 sec 3.10 MBytes 26.0 Mbits/sec 2 523 KBytes

[ 4] 5.00-6.00 sec 3.59 MBytes 30.1 Mbits/sec 2 280 KBytes

[ 4] 6.00-7.00 sec 3.09 MBytes 26.0 Mbits/sec 1 219 KBytes

[ 4] 7.00-8.00 sec 3.24 MBytes 27.2 Mbits/sec 0 233 KBytes

[ 4] 8.00-9.00 sec 3.64 MBytes 30.5 Mbits/sec 0 240 KBytes

[ 4] 9.00-10.00 sec 3.23 MBytes 27.1 Mbits/sec 0 240 KBytes

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval Transfer Bandwidth Retr

[ 4] 0.00-10.00 sec 32.3 MBytes 27.1 Mbits/sec 5 sender

[ 4] 0.00-10.00 sec 31.5 MBytes 26.4 Mbits/sec receiver

iperf Done.

- aitia@benndeb:~$ iperf3 -c 10.10.10.1 -R

Connecting to host 10.10.10.1, port 5201

Reverse mode, remote host 10.10.10.1 is sending

[ 4] local 10.100.110.104 port 58009 connected to 10.10.10.1 port 5201

[ ID] Interval Transfer Bandwidth

[ 4] 0.00-1.00 sec 2.27 MBytes 19.0 Mbits/sec

[ 4] 1.00-2.00 sec 6.05 MBytes 50.7 Mbits/sec

[ 4] 2.00-3.00 sec 6.89 MBytes 57.8 Mbits/sec

[ 4] 3.00-4.00 sec 6.67 MBytes 55.9 Mbits/sec

[ 4] 4.00-5.00 sec 7.24 MBytes 60.8 Mbits/sec

[ 4] 5.00-6.00 sec 7.20 MBytes 60.4 Mbits/sec

[ 4] 6.00-7.00 sec 7.24 MBytes 60.8 Mbits/sec

[ 4] 7.00-8.00 sec 7.20 MBytes 60.4 Mbits/sec

[ 4] 8.00-9.00 sec 7.24 MBytes 60.7 Mbits/sec

[ 4] 9.00-10.00 sec 7.24 MBytes 60.8 Mbits/sec

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval Transfer Bandwidth Retr

[ 4] 0.00-10.00 sec 68.7 MBytes 57.7 Mbits/sec 20 sender

[ 4] 0.00-10.00 sec 65.7 MBytes 55.1 Mbits/sec receiver

iperf Done.

Another interesting measurement is the throughput between the eNodeB and the Common GW. With this value we can estimate how many users we can serve with this eNodeB:

- Measuring link speed from the mini PC ENB to the Common Gateway

- [root@gig aitia]# iperf3 -c 10.10.10.1

Connecting to host 10.10.10.1, port 5201

[ 4] local 172.17.110.2 port 37692 connected to 10.10.10.1 port 5201

[ ID] Interval Transfer Bandwidth Retr Cwnd

[ 4] 0.00-1.00 sec 92.0 MBytes 772 Mbits/sec 1 573 KBytes

[ 4] 1.00-2.00 sec 90.3 MBytes 757 Mbits/sec 8 107 KBytes

[ 4] 2.00-3.00 sec 68.6 MBytes 576 Mbits/sec 10 140 KBytes

[ 4] 3.00-4.00 sec 88.1 MBytes 739 Mbits/sec 4 211 KBytes

[ 4] 4.00-5.00 sec 75.2 MBytes 631 Mbits/sec 10 52.3 KBytes

[ 4] 5.00-6.00 sec 63.0 MBytes 529 Mbits/sec 8 199 KBytes

[ 4] 6.00-7.00 sec 84.5 MBytes 709 Mbits/sec 4 124 KBytes

[ 4] 7.00-8.00 sec 78.1 MBytes 655 Mbits/sec 4 116 KBytes

[ 4] 8.00-9.00 sec 86.0 MBytes 721 Mbits/sec 4 147 KBytes

[ 4] 9.00-10.00 sec 81.5 MBytes 683 Mbits/sec 4 212 KBytes

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval Transfer Bandwidth Retr

[ 4] 0.00-10.00 sec 807 MBytes 677 Mbits/sec 57 sender

[ 4] 0.00-10.00 sec 806 MBytes 676 Mbits/sec receiver

iperf Done.

- [root@gig aitia]# iperf3 -c 10.10.10.1 -R

Connecting to host 10.10.10.1, port 5201

Reverse mode, remote host 10.10.10.1 is sending

[ 4] local 172.17.110.2 port 37694 connected to 10.10.10.1 port 5201

[ ID] Interval Transfer Bandwidth

[ 4] 0.00-1.00 sec 95.0 MBytes 796 Mbits/sec

[ 4] 1.00-2.00 sec 99.9 MBytes 838 Mbits/sec

[ 4] 2.00-3.00 sec 97.4 MBytes 817 Mbits/sec

[ 4] 3.00-4.00 sec 82.7 MBytes 694 Mbits/sec

[ 4] 4.00-5.00 sec 94.8 MBytes 795 Mbits/sec

[ 4] 5.00-6.00 sec 95.2 MBytes 799 Mbits/sec

[ 4] 6.00-7.00 sec 100 MBytes 842 Mbits/sec

[ 4] 7.00-8.00 sec 92.8 MBytes 779 Mbits/sec

[ 4] 8.00-9.00 sec 85.3 MBytes 715 Mbits/sec

[ 4] 9.00-10.00 sec 87.0 MBytes 729 Mbits/sec

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval Transfer Bandwidth Retr

[ 4] 0.00-10.00 sec 932 MBytes 781 Mbits/sec 212 sender

[ 4] 0.00-10.00 sec 931 MBytes 781 Mbits/sec receiver

iperf Done.
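A back-of-the-envelope answer to the dimensioning question above can be obtained by dividing the measured downlink backhaul throughput by the per-user video rate (the 1-2 Mb/s observed earlier in this section); taking 2 Mb/s as a conservative upper bound:

```python
# Rough dimensioning from the measurements above: the eNodeB <-> Common GW
# downlink sustained ~781 Mbit/s (reverse-mode iperf3 run, receiver
# average), and each video user needs at most the ~2 Mbit/s observed for
# the streamed video. Figures taken from this section; the division is
# only an order-of-magnitude estimate that ignores protocol overhead and
# radio-interface limits.
backhaul_mbps = 781       # measured eNodeB-to-Common-GW downlink
per_user_mbps = 2.0       # upper bound of the measured video data rate
max_users = int(backhaul_mbps / per_user_mbps)
print(max_users)  # 390
```

So the backhaul alone would support on the order of a few hundred simultaneous video users; in practice the LTE air interface (measured at 55-60 Mbit/s per UE above) becomes the limiting factor first.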

The same measurement was done in the Wi-Fi network between the UE and the gateway:

------------------------------------------------------------

Client connecting to 10.100.120.250, TCP port 5001

TCP window size: 512 KByte (default)

------------------------------------------------------------

[ 4] local 10.100.120.103 port 43764 connected with 10.100.120.250 port 5001

[ ID] Interval Transfer Bandwidth

[ 4] 0.0-10.0 sec 86.2 MBytes 72.1 Mbits/sec

------------------------------------------------------------

Client connecting to 10.100.120.250, TCP port 5001

TCP window size: 512 KByte (default)

------------------------------------------------------------

[ 4] local 10.100.120.103 port 43765 connected with 10.100.120.250 port 5001

[ ID] Interval Transfer Bandwidth

[ 4]
------------------------------------------------------------

Client connecting to 10.100.120.250, TCP port 5001

TCP window size: 512 KByte (default)

------------------------------------------------------------

[ 4] local 10.100.120.103 port 43766 connected with 10.100.120.250 port 5001

[ ID] Interval Transfer Bandwidth

[ 4] 0.0-10.0 sec 81.8 MBytes 68.5 Mbits/sec

------------------------------------------------------------

Server listening on TCP port 5001

TCP window size: 2.00 MByte (default)

------------------------------------------------------------

------------------------------------------------------------

Client connecting to 10.100.120.250, TCP port 5001

TCP window size: 512 KByte (default)

------------------------------------------------------------

[ 13] local 10.100.120.103 port 43767 connected with 10.100.120.250 port 5001

Waiting for server threads to complete. Interrupt again to force quit.

[ ID] Interval Transfer Bandwidth

[ 13] 0.0-10.1 sec 76.1 MBytes 63.5 Mbits/sec

9.3.2 Test case uAUT_TC_2

Description of the item under test

Validation of the EAP authentication in the Wi-Fi network with the credentials of the SIM card.

Test report:

Pass:

Fail:

Test Setup: A Wi-Fi access point is connected to the Internet with access to the Radius server. The Radius server has been configured in order to support EAP-SIM and EAP-AKA authentication. The UE has a SIM card installed.

This test has been made in a scenario where the Wi-Fi access point is directly connected through a LAN to the Radius server. The hardware setup used is described in the next figure:

Purpose: Authenticate the user equipment in the Wi-Fi network only with the credentials stored in the SIM card. The main purpose of this test is to validate the configuration of the Radius server.

Use case: UC01, UC06

Preconditions: The AP is correctly configured in order to send the authentication requests to the Radius server. The Radius server has access to the mobile network's Authentication Center (AuC) to facilitate the EAP procedure.

In this case, the SIM card installed supports EAP-AKA.

Test equipment:

Wireshark protocol analyser

Wi-Fi monitoring interface

Test procedure:

After turning on all devices, the user tries to authenticate in the Wi-Fi network with an EAP procedure using the SIM credentials.

Pass-fail criteria:

This test passes when the UE is authenticated in the Wi-Fi network with the EAP-SIM or EAP-AKA procedure, whichever the SIM card supports. In order to ensure that the authentication has worked, the logs of the Radius server must be consulted in addition to the Wireshark traces.

Observation:

The next figure shows the Wireshark traces of the EAP-AKA messages exchanged between the UE and the AP.

Moreover, the log of the Radius server can be seen below. It has been anonymised for security reasons, but the Access-Request and Access-Accept messages are visible.

*** Received from X.X.X.X port 58405 ....

Code: Access-Request

Identifier: 3

Authentic: |<175><20><153>|<226>,<149><23>2FKUe<178><203>

Attributes:

User-Name = "[email protected]"

NAS-Identifier = ""

Called-Station-Id = "C4-71-30-42-95-36:HS20_R1_Combo_Test"

NAS-Port-Type = Wireless-802.11

NAS-Port = 1

Calling-Station-Id = "EC-88-92-6E-D4-3E"

Connect-Info = "CONNECT 54Mbps 802.11g"

Acct-Session-Id = "56F17994-00000001"

Unknown-0-186 = <0><15><172><4>

Unknown-0-187 = <0><15><172><4>

Unknown-0-188 = <0><15><172><1>

Framed-MTU = 1400

EAP-Message = <2><255><0>8<1>[email protected]

Message-Authenticator = <30>]<246><166><206>Ic<206>F<200>L<29><145><132><166><230>

Event-Timestamp = 1459267467

*** Sending to X.X.X.X port 58405 ....

Code: Access-Accept

Identifier: 5

Authentic: <7>5<230><153><251><189><157><142><1>K8<23><155><18>E<232>

Attributes:

MS-MPPE-Send-Key =

<<191><203><141><155>S<203><135><5>q~<140><219>4<212><232>J5Y<220><<185><20><225><200>K<135><2

54><218>7^>

MS-MPPE-Recv-Key =

XBF<128>4<182><221><245>j<212>gZQJ<197><164>h<203><162><200><194><178><3><170>s<138><230><163>

h<234><226>;

EAP-Message = <3><1><0><4>

Message-Authenticator = <0><0><0><0><0><0><0><0><0><0><0><0><0><0><0><0>

9.3.3 Test case uAUT_TC_3

Description of the item under test

Evaluation of the correct operation of the Hotspot 2.0 technology.

Test report:

Pass:

Fail:

Test Setup: A Wi-Fi access point is connected to the Radius server. This AP supports Hotspot 2.0, and the UE also supports Hotspot 2.0 technology. As in the previous test case, this one has been performed in a friendly (lab) scenario. The hardware setup is shown in the next figure:

Purpose: Authenticate the user equipment in the Wi-Fi network using Hotspot 2.0 technology, i.e. following the information contained in the policy that has been provisioned in the UE.

Use case: Seamless authentication with the same credentials in the LTE and Wi-Fi networks.

Preconditions: The policy provisioned on the UE corresponds to the profile of the AP.

Test equipment:

Wireshark protocol analyser, Wi-Fi monitoring interface, log of the UE.

Test procedure:

After turning on all the devices, the UE must associate to and authenticate in the Wi-Fi network automatically, i.e. without user intervention, following the information stored in the policy.

Pass-fail criteria:

The UE must authenticate in the Wi-Fi network. Moreover, in order for this test to pass, Wireshark traces need to contain the exchanged ANQP messages.

Observation:

The next figure shows the traces captured on the air interface, which contain the ANQP messages exchanged between the UE and the Wi-Fi AP:

Inside the first red box two messages can be seen: first, the ANQP Query sent by the UE to the Wi-Fi AP, asking for information to compare against the Hotspot 2.0 profile provisioned in the UE; second, the ANQP Response containing the information that the Wi-Fi AP sends back. As an example, inside the second red box one of the elements of the ANQP Response can be seen: the NAI Realm of the AP, which in our case is fon.com.
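In essence, the automatic connection decision reduces to a realm comparison. The sketch below is a much-simplified illustration of that check (the real Passpoint matcher also weighs roaming-consortium OIs and the supported EAP methods):

```python
def is_home_provider(anqp_nai_realms, provisioned_realm):
    """True when a realm advertised by the AP in its ANQP Response matches
    the realm stored in the UE's provisioned Hotspot 2.0 policy.

    Realms are compared case-insensitively. Simplified illustration only;
    function name and data layout are assumptions, not a real API.
    """
    realm = provisioned_realm.lower()
    return any(r.lower() == realm for r in anqp_nai_realms)
```

With the values from this test, `is_home_provider(["fon.com"], "fon.com")` is true, which corresponds to the "match on fon.com: HomeProvider" line in the phone log.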

The UE used is a mobile phone. In order to validate the correct attachment to the Wi-Fi network we also have the log of the mobile phone, which can be read below:

04-21 12:16:06.985 861 1316 D HS20 : HSNwk: 'NetworkInfo{mSSID='HS20_R1_Combo_Test', mHESSID=0, mBSSID=c47130429536,

mStationCount=0, mChannelUtilization=0, mCapacity=0, mAnt=FreePublic, mInternet=true, mVenueGroup=Residential,

mVenueType=PrivateResidence, mHSRelease=R1, mAnqpDomainID=0, mAnqpOICount=0, mRoamingConsortiums=223344}

04-21 12:16:06.991 861 1316 D HS20 : match nwk 'HS20_R1_Combo_Test':c47130429536, anqp present, query true, home sps: 1

04-21 12:16:06.994 861 1316 D HS20 : 'HS20_R1_Combo_Test':c47130429536 match on fon.com: HomeProvider, auth RealmMethod

04-21 12:16:06.994 861 1316 D HS20 : -- fon.com: match HomeProvider, queried false

04-21 12:16:06.994 861 1316 D HS20 : HS20_R1_Combo_Test pass 1 matches: fon.com→HomeProvider

04-21 12:16:06.996 861 1316 D ScanDetailCache: Visiblity by passpoint match returned 2.4 GHz BSSID of c4:71:30:42:95:36

04-21 12:16:06.998 861 1316 D WifiStateMachine: shouldSwitchNetwork txSuccessRate=0.00 rxSuccessRate=0.00 delta 1000 ->

1000

04-21 12:16:07.015 861 1316 D WifiStateMachine: CMD_AUTO_CONNECT sup state ScanState my state DisconnectedState nid=0

roam=3

04-21 12:16:07.015 861 1316 E WifiConfigStore: saveWifiConfigBSSID Setting BSSID for fon.comWPA_EAP to any

04-21 12:16:07.028 861 1316 D WifiStateMachine: CMD_AUTO_CONNECT will save config -> "HS20_R1_Combo_Test" nid=0

04-21 12:16:07.040 861 1316 D WifiConfigStore: created a homeSP object for 0:"HS20_R1_Combo_Test"

04-21 12:16:07.046 861 8951 D HS20 : HS20 profile for fon.com already exists

04-21 12:16:07.052 861 1316 D WifiStateMachine: CMD_AUTO_CONNECT did save config -> nid=0

04-21 12:16:07.054 861 1316 D WifiConfigStore: Setting SSID for 0 toHS20_R1_Combo_Test

04-21 12:16:07.056 8781 8781 I wpa_supplicant: wlan0: Trying to associate with SSID 'HS20_R1_Combo_Test'

04-21 12:16:07.467 8781 8781 I wpa_supplicant: wlan0: Associated with c4:71:30:42:95:36

04-21 12:16:07.475 8781 8781 I wpa_supplicant: wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started

04-21 12:16:07.534 8781 8781 I wpa_supplicant: wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=23

04-21 12:16:07.534 8781 8781 I wpa_supplicant: wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 23 (AKA) selected

04-21 12:16:07.601 8781 8781 I wpa_supplicant: wlan0: CTRL-REQ-SIM-0:UMTS-

AUTH:3fa6695debd546627f90eadf94ee40f9:41c9f5b9d1da61df756a3911e24b199b needed for SSID HS20_R1_Combo_Test

04-21 12:16:07.603 861 1316 D WifiStateMachine: Received SUP_REQUEST_SIM_AUTH

04-21 12:16:07.603 861 1316 D WifiStateMachine: id matches targetWifiConfiguration

04-21 12:16:07.964 861 1316 V WifiStateMachine: Raw Response - 3A4Tvav7jNbMuimB++J/MQ==

04-21 12:16:07.964 861 1316 E WifiStateMachine: Hex Response - dc0e13bdabfb8cd6ccba2981fbe27f31

04-21 12:16:07.964 861 1316 E WifiStateMachine: synchronisation failure

04-21 12:16:07.964 861 1316 V WifiStateMachine: auts:13bdabfb8cd6ccba2981fbe27f31

04-21 12:16:07.965 861 1316 V WifiStateMachine: Supplicant Response -:13bdabfb8cd6ccba2981fbe27f31

04-21 12:16:07.965 8781 8781 W wpa_supplicant: EAP-AKA: UMTS authentication failed (AUTN seq# -> AUTS)

04-21 12:16:08.033 8781 8781 I wpa_supplicant: wlan0: CTRL-REQ-SIM-0:UMTS-

AUTH:d04841eb96efd32d22fb3615118d5a09:fe841ad0e54061dfb7e03be074dd00a5 needed for SSID HS20_R1_Combo_Test

04-21 12:16:08.034 861 1316 D WifiStateMachine: Received SUP_REQUEST_SIM_AUTH

04-21 12:16:08.034 861 1316 D WifiStateMachine: id matches targetWifiConfiguration

04-21 12:16:08.393 861 1316 V WifiStateMachine: Raw Response -

2wjFKMFQO4uklxAHyyi/HOtQy7V0txdLYh5pEO9B6BjkGt95e8vlRTGgfjoIJjWS9YIz7+E=

04-21 12:16:08.394 861 1316 E WifiStateMachine: Hex Response -

db08c528c1503b8ba4971007cb28bf1ceb50cbb574b7174b621e6910ef41e818e41adf797bcbe54531a07e3a08263592f58233efe1

04-21 12:16:08.395 861 1316 V WifiStateMachine: successful 3G authentication

04-21 12:16:08.395 861 1316 V WifiStateMachine: ik:ef41e818e41adf797bcbe54531a07e3ack:07cb28bf1ceb50cbb574b7174b621e69

res:c528c1503b8ba497

04-21 12:16:08.395 861 1316 V WifiStateMachine: Supplicant Response -

:ef41e818e41adf797bcbe54531a07e3a:07cb28bf1ceb50cbb574b7174b621e69:c528c1503b8ba497

04-21 12:16:08.449 8781 8781 I wpa_supplicant: wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully

04-21 12:16:08.465 8781 8781 I wpa_supplicant: wlan0: WPA: Key negotiation completed with c4:71:30:42:95:36 [PTK=CCMP

GTK=CCMP]

04-21 12:16:08.465 8781 8781 I wpa_supplicant: wlan0: CTRL-EVENT-CONNECTED - Connection to c4:71:30:42:95:36 completed [id=0

id_str=fon.com]

04-21 12:16:08.472 861 1316 D ConnectivityService: registerNetworkAgent NetworkAgentInfo{ ni{[type: WIFI[], state:

CONNECTING/CONNECTING, reason: (unspecified), extra: "HS20_R1_Combo_Test", roaming: false, failover: false, isAvailable:

true]} network{109} lp{{LinkAddresses: [] Routes: [] DnsAddresses: [] Domains: null MTU: 0 TcpBufferSizes:

524288,2097152,4194304,262144,524288,1048576}} nc{[ Transports: WIFI Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED&NOT_VPN

LinkUpBandwidth>=1048576Kbps LinkDnBandwidth>=1048576Kbps]} Score{20} everValidated{false} lastValidated{false}

created{false} lingering{false} explicitlySelected{false} acceptUnvalidated{false} everCaptivePortalDetected{false}

lastCaptivePortalDetected{false} }

04-21 12:16:08.472 861 1316 E WifiConfigStore: saveWifiConfigBSSID Setting BSSID for fon.comWPA_EAP to any

04-21 12:16:08.472 861 1318 D ConnectivityService: NetworkAgentInfo [WIFI () - 109] EVENT_NETWORK_INFO_CHANGED, going from

null to CONNECTING

04-21 12:16:08.486 861 1316 E WifiConfigStore: saveWifiConfigBSSID Setting BSSID for fon.comWPA_EAP to any

04-21 12:16:08.495 369 859 D CommandListener: Setting iface cfg

04-21 12:16:08.498 861 1316 D WifiStateMachine: Start Dhcp Watchdog 5

04-21 12:16:08.509 861 1316 D ScanDetailCache: Visiblity by passpoint match returned 2.4 GHz BSSID of c4:71:30:42:95:36

04-21 12:16:08.511 861 1318 D ConnectivityService: notifyType CAP_CHANGED for NetworkAgentInfo [WIFI () - 109]

04-21 12:16:08.511 861 1318 D ConnectivityService: updateNetworkScore for NetworkAgentInfo [WIFI () - 109] to 60

04-21 12:16:08.513 861 1316 D IpReachabilityMonitor: watch: iface{wlan0/5}, v{1}, ntable=[]

04-21 12:16:08.573 861 8960 D DhcpClient: Receive thread started

04-21 12:16:08.654 861 1316 E native : do suspend false

04-21 12:16:08.663 861 8959 D DhcpClient: Broadcasting DHCPDISCOVER

04-21 12:16:10.984 861 8959 D DhcpClient: Broadcasting DHCPDISCOVER

04-21 12:16:10.991 861 8960 D DhcpClient: Received packet: ec:88:92:6e:d4:3e OFFER, ip /10.100.120.103, mask

/255.255.255.0, DNS servers: /8.8.8.8 , gateways [/10.100.120.1] lease time 600, domain null

04-21 12:16:10.992 861 8959 D DhcpClient: Got pending lease: IP address 10.100.120.103/24 Gateway 10.100.120.1 DNS

servers: [ 8.8.8.8 ] Domains DHCP server /10.100.120.1 Vendor info null lease 600 seconds

04-21 12:16:10.993 861 8959 D DhcpClient: Broadcasting DHCPREQUEST ciaddr=0.0.0.0 request=10.100.120.103

serverid=10.100.120.1

04-21 12:16:11.025 861 8960 D DhcpClient: Received packet: ec:88:92:6e:d4:3e ACK: your new IP /10.100.120.103, netmask

/255.255.255.0, gateways [/10.100.120.1] DNS servers: /8.8.8.8 , lease time 600

04-21 12:16:11.026 861 8959 D DhcpClient: Confirmed lease: IP address 10.100.120.103/24 Gateway 10.100.120.1 DNS servers:

[ 8.8.8.8 ] Domains DHCP server /10.100.120.1 Vendor info null lease 600 seconds

04-21 12:16:11.037 369 859 D CommandListener: Setting iface cfg

04-21 12:16:11.046 861 1316 D IpReachabilityMonitor: watch: iface{wlan0/5}, v{2}, ntable=[]

04-21 12:16:11.053 861 8959 D DhcpClient: Scheduling renewal in 299s

First of all, inside the red box, the discovery of the Hotspot 2.0 access point can be seen. Following the log, we can see that the mobile phone decides to connect to the network automatically, as the information sent by the access point matches the one it has in its profile. After that, a normal attachment to a Wi-Fi network can be seen, finishing with the DHCP process.

Finally, the next figure shows a screenshot of the mobile phone:

Inside the red box the attached network can be seen. The SSID is HS20_Combo_uAUT, the one that identifies the Wi-Fi network deployed for these tests. Below the SSID another line can be read, indicating the name of the provider. This name is defined in an element of the Hotspot 2.0 profile configured in the access point, called the friendly name, which in this case is “Fon Technology S.L.”.

9.4 uDPM functionality

The test cases for uDPM comprise two parts: the first consists of three tests for handover between Wi-Fi and LTE, and the second tests the MPE functionality.

9.4.1 Test case uDPM_TC_1

Description of the item under test

Test Wi-Fi to LTE handover.

Test report:

Pass:

Fail:

Test Setup: Two UEs, two Wi-Fi networks, an LTE network, the Decision Engine and MPTCP Content Server (see details in Section 3.4).

Purpose: Validate the Wi-Fi to LTE handover in reaction to degradation of the current connection.

Use case: Network controlled offload/smooth handover.

Preconditions: The two UEs are connected to the same Wi-Fi network. For both UEs, the Wi-Fi interface is set up as the primary path and the LTE interface as the backup path. Moreover, the 2nd Wi-Fi network is disabled by disconnecting the AP’s uplink.

Test equipment: Status changes can be monitored in the logs of the Decision Engine and on the GUI running on the clients.

Test procedure: Start video streaming from the Content Server with UE 1. After some time, start a file download with UE 2 to generate background traffic which saturates the Wi-Fi link.

Pass-fail criteria: UE 1 must change to the LTE network after UE 2 starts generating the background traffic. The video streaming on UE1 must not be interrupted by the change.

Observation:

When the DE detects the bottlenecked Wi-Fi connection it moves one of the UEs to a different access network. Since the 2nd Wi-Fi network is disabled its only option is the LTE one. In the figures below the DE selected UE 1 to be moved to the LTE network. The decision engine instructs the UE to change and the UE performs the change by setting the Wi-Fi interface to MPTCP backup and the LTE one to MPTCP primary. From this point on the traffic uses the LTE interface. The plot below also shows the point when the background traffic stops and the DE moves UE 1 back to the Wi-Fi network.
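The trigger condition ("the Wi-Fi link is saturated by the background download") can be approximated as below. The window length and threshold are illustrative assumptions, not the Decision Engine's actual parameters:

```python
def wifi_is_bottlenecked(samples_mbps, capacity_mbps, window=5, ratio=0.9):
    """Toy version of the Decision Engine's congestion test: report the AP
    as saturated when the aggregate throughput of the last `window`
    samples stays within `ratio` of the nominal link capacity.

    Illustrative sketch; the DE's real detection logic is not specified
    here and the thresholds are assumptions.
    """
    recent = samples_mbps[-window:]
    if len(recent) < window:          # not enough history yet
        return False
    return sum(recent) / window >= ratio * capacity_mbps
```

For example, a sample history that jumps from ~10 Mbit/s to values near a 60 Mbit/s capacity once the background download starts would flip this check from false to true, at which point the DE would issue the move order described above.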

9.4.2 Test case uDPM_TC_2

Description of the item under test

Test Wi-Fi to Wi-Fi handover while LTE connectivity is not available.

Test report:

Pass:

Fail:

Test Setup: Two UEs, two Wi-Fi networks, an LTE network, the Decision Engine and MPTCP Content Server (see details in Section 3.4).

Purpose: Validate the Wi-Fi to Wi-Fi handover (while LTE is not available) in reaction to degradation of the current connection.

Use case: Network controlled offload/smooth handover.

Preconditions: The two UEs are connected to the same Wi-Fi network. For both UEs, the Wi-Fi interface is set up as the primary path and the LTE interface is disconnected; the LTE network itself is disabled.

Test equipment: Status changes can be monitored in the logs of the Decision Engine and on the GUI running on the clients.

Test procedure: Start video streaming from the Content Server with UE 1. After some time start a file download with UE 2 to generate background traffic which saturates the Wi-Fi link.

Pass-fail criteria: UE 1 must change to the 2nd Wi-Fi network after UE 2 starts generating the background traffic. The video streaming on UE 1 should pause for a few seconds, since the buffer underruns.

Observation:

When the DE detects the bottlenecked Wi-Fi connection it moves one of the UEs to a different access network. Since the LTE network is not available in the current setup, its only option is the 2nd Wi-Fi network. In the figures below the DE selected UE 1 to be moved to the 2nd Wi-Fi network. The decision engine instructs the UE to change and the UE performs the change by disconnecting from the 1st Wi-Fi network and then connecting to the 2nd. For a brief duration UE 1 is not connected to any network; MPTCP cannot help here, since the LTE interface is not connected.

9.4.3 Test case uDPM_TC_3

Description of the item under test

Test Wi-Fi to Wi-Fi handover while LTE connectivity is available as well.

Test report:

Pass:

Fail:

Test Setup: Two UEs, two Wi-Fi networks, an LTE network, the Decision Engine and MPTCP Content Server (see details in Section 3.4).

Purpose: Validate the Wi-Fi to Wi-Fi handover (while LTE is available) in reaction to degradation of the current connection.

Use case: Network controlled offload/smooth handover.

Preconditions: The two UEs are connected to the same Wi-Fi network. For both the Wi-Fi interface is set up as primary path, and the LTE interface as backup path. All three access networks are available.

Test equipment: Status changes can be monitored in the logs of the Decision Engine and on the GUI running on the clients.

Test procedure: Start video streaming from the Content Server with UE 1. After some time start a file download with UE 2 to generate background traffic which saturates the Wi-Fi link.

Pass-fail criteria: UE 1 must change to the 2nd Wi-Fi network after UE 2 starts the background traffic. While UE 1 is changing from one Wi-Fi network to the other, the traffic should flow on the LTE link. The video streaming on UE 1 must not be interrupted by the change.

Observation:

When the DE detects the bottlenecked Wi-Fi connection it moves one of the UEs to a different access network. In this scenario both the 2nd Wi-Fi network and the LTE network are suitable; however, the DE is configured to prefer Wi-Fi over LTE, so it chooses the 2nd Wi-Fi network for UE 1. The Decision Engine instructs the UE to change, and the UE performs the change by disconnecting from the 1st Wi-Fi network and then connecting to the 2nd. While the Wi-Fi interface is disconnected, the LTE interface remains connected (and set as MPTCP backup path), which allows data to be transmitted over it. This way the LTE connection is used while the Wi-Fi interface is temporarily not associated to any network. The plot below also shows the point when the background traffic stops and the DE moves UE 1 back to the Wi-Fi network.
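The decision logic and the backup-path behaviour described above can be illustrated with a minimal Python sketch. This is not COMBO code: the preference list, function names and network labels are all hypothetical, chosen only to mirror the policy "prefer Wi-Fi over LTE, keep the LTE backup subflow carrying traffic while the Wi-Fi interface re-associates".

```python
# Illustrative sketch of the Decision Engine policy described in the text.
# All names are hypothetical; this is not the COMBO implementation.

WIFI_PREFERENCE = ("wifi", "lte")  # assumed operator policy: Wi-Fi first


def select_target(current, available):
    """Pick the network a congested UE should be moved to."""
    candidates = [n for n in available if n != current]
    # Prefer Wi-Fi networks over LTE, per the configured policy.
    candidates.sort(key=lambda n: WIFI_PREFERENCE.index(n.split(":")[0]))
    return candidates[0] if candidates else None


def active_path(primary_connected, backup_connected):
    """MPTCP keeps traffic on the backup subflow while the primary
    (Wi-Fi) interface is between networks."""
    if primary_connected:
        return "wifi"
    return "lte" if backup_connected else None


# UE 1 is on wifi:1; wifi:2 and lte:1 are both reachable.
target = select_target("wifi:1", ["wifi:1", "wifi:2", "lte:1"])
# While the Wi-Fi interface re-associates, the LTE backup carries traffic.
path_during_change = active_path(primary_connected=False, backup_connected=True)
```

In the uDPM_TC_2 scenario the backup path is absent, which is exactly the case `active_path(False, False)`: no path at all, hence the brief interruption observed there.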


9.4.4 Test case uDPM_TC_4

Description of the item under test

Test MPE with dual connectivity of UE to UAG via both Wi-Fi and LTE.

Test report:

Pass:

Fail:

Test Setup: A host computer (UE) is connected to the UAG via both Wi-Fi and LTE. The two connections can be used simultaneously for traffic connecting the UE to the Internet, and the settings are controlled from the MPE at the UAG.

Purpose: The MPE is the part of the uDPM responsible for selecting the traffic paths between the uDPM and the UE. This setup shows an implementation of such an MPE.

Use case: Multi-Path connectivity to UE

Preconditions: The UE has preinstalled client software for communication with the MPE in the uDPM.

Test equipment: The traffic of the connections is controlled from the MPE at the uDPM.

Test procedure: 1. Repeated calls are made from the UE to a content server (CS). The CS answers with short text messages. Each call corresponds to a TCP session, individually routed according to the settings in the MPE.

2. Similarly, calls are made from the CS to the UE, simulating e.g. a web server at the UE. The traffic is routed according to the settings in the MPE.

Pass-fail criteria: The traffic should be routed according to the settings in the MPE. The settings should be manageable at runtime at the MPE.

Observation:

The setup shows that the traffic flows between the UAG and the UE can be balanced between the LTE and the Wi-Fi connections. The load balancing is implemented as an application of the routing and firewall features in Linux. The setup also shows that the UE point of presence seen by the CS is the IP address in the MPE.
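The per-session routing described above can be sketched as follows. The real MPE uses Linux routing and firewall (marking) rules; this Python sketch only mimics the observable behaviour, i.e. each new TCP session is assigned to LTE or Wi-Fi according to settings that can be changed at runtime. The class name, weights and scheduling discipline are assumptions for illustration.

```python
# Hypothetical sketch of the MPE behaviour: per-TCP-session path selection
# with runtime-manageable settings. Weighted round-robin is an assumption;
# the actual MPE applies Linux routing/firewall rules.

class MPE:
    def __init__(self, weights):
        self.weights = dict(weights)   # e.g. {"wifi": 3, "lte": 1}
        self._counter = 0

    def set_weights(self, weights):
        """Runtime-manageable settings, as required by the pass-fail criteria."""
        self.weights = dict(weights)

    def route_session(self):
        """Assign the next TCP session to a path (weighted round-robin)."""
        schedule = [p for p, w in sorted(self.weights.items()) for _ in range(w)]
        path = schedule[self._counter % len(schedule)]
        self._counter += 1
        return path


mpe = MPE({"wifi": 3, "lte": 1})
paths = [mpe.route_session() for _ in range(8)]   # two full scheduling cycles
```

With a 3:1 weighting, six of the eight sessions go over Wi-Fi and two over LTE; changing the weights via `set_weights` immediately affects subsequent sessions, which corresponds to the runtime manageability checked in the test.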


9.5 Caching functionality

9.5.1 Test case Caching_TC_1

Description of the item under test

Test the converged caching functionality enabled by CacheAP in the access network and NGPOP in the aggregation network

Test report:

Pass:

Fail:

Test Setup: CacheAPs with Wi-Fi and LTE interfaces, eNodeB, the machine hosting the cache node in the NGPOP connecting to the UAG, DE, and the VM hosting the cache controller. In addition, two laptops serve as UEs, and a content server provides an HLS video streaming service.

Purpose: After Alice requests a video, a second user, Bob, retrieves the same video directly from the CacheAP that has cached the video.

Use case: Converged caching solution.

Preconditions: After the requests by UE1, the content requested by UE1 should be cached in the connected CacheAP and the Cache Server in the NGPOP.

Test equipment: GUI running on the cache controller. TC (Linux traffic control) runs on the content server for traffic shaping to create the network bottleneck. Debug information and video quality are also shown on screen.

Test procedure: 1. The network bottleneck is configured by TC as 2.5 Mb/s.

2. Alice connects to the CacheAP and watches a video encoded at 3.4 Mb/s.


3. Bob arrives and connects to CacheAP, and requests the same video that Alice watched.

Pass-fail criteria: Bob should retrieve the content from the CacheAP, and the QoS of the video service should be good.

Observation:

Alice gets a low QoS, with a long startup delay and a frozen screen, because of the network bottleneck between Alice and the content server. However, since the video chunks have already been cached in the CacheAP, Bob gets a good QoS even though the 2.5 Mb/s network bandwidth cannot support a good service for playing a 3.4 Mb/s video.
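The observed hit/miss behaviour can be captured in a small Python sketch: the first viewer's chunks traverse the 2.5 Mb/s bottleneck (and are cached on the way), while a later viewer is served the 3.4 Mb/s stream locally. The class and rates mirror the test; everything else is an illustrative assumption.

```python
# Illustrative model of the CacheAP hit/miss behaviour from Caching_TC_1.
# Rates are taken from the test (2.5 Mb/s bottleneck, 3.4 Mb/s video);
# the class and method names are hypothetical.

BOTTLENECK_MBPS = 2.5
VIDEO_RATE_MBPS = 3.4


class CacheAP:
    def __init__(self):
        self.store = set()

    def fetch_chunk(self, chunk):
        """Return (source, delivery_rate_mbps) for one HLS chunk."""
        if chunk in self.store:
            return "cache", VIDEO_RATE_MBPS       # local hit, no bottleneck
        self.store.add(chunk)                     # cache while forwarding
        return "origin", min(VIDEO_RATE_MBPS, BOTTLENECK_MBPS)


ap = CacheAP()
alice = [ap.fetch_chunk(c) for c in range(5)]     # all misses: limited to 2.5 Mb/s
bob = [ap.fetch_chunk(c) for c in range(5)]       # all hits: full 3.4 Mb/s
```

Alice's misses are rate-limited by the bottleneck (hence her poor QoS), while Bob's hits are served at the video's full rate, matching the observation above.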


9.5.2 Test case Caching_TC_2

Description of the item under test

Test the prefetch functionality that is controlled by the CC and initiated by the DE

Test report:

Pass:

Fail:

Test Setup: CacheAPs with Wi-Fi and LTE interfaces, eNodeB, the machine hosting the cache node in the NGPOP connecting to the UAG, DE, and the VM hosting the cache controller. In addition, two laptops serve as UEs, and a content server provides an HLS video streaming service.

Purpose: Prefetch the content requested by the UE to a future connected CacheAP predicted by the CC and DE.

Use case: Converged prefetching solution.

Preconditions: The prefetch is triggered by the DE when it detects a bad network connection on the UE.

Test equipment: GUI running on the cache controller. Debug information and video quality are also shown on screen.

Test procedure: 1. Alice connects, via CacheAP_MN, to the mobile network and watches a video.

2. Alice arrives home; the DE detects Alice disconnecting from CacheAP_MN and decides to switch Alice to CacheAP_FN, the home AP connected to the fixed network;


3. DE informs CC about this.

4. CC asks CacheAP_FN to prefetch Alice’s requested video chunks.

5. When Alice disconnects from CacheAP_MN and connects to CacheAP_FN, video chunks have been cached.

Pass-fail criteria: The requested video is prefetched on CacheAP_FN as controlled by the CC and DE. Alice can watch the video continuously during the switch.

Observation:

The screen print shows a successful handover from CacheAP_MN to CacheAP_FN when the DE detects network deterioration for Alice. The DE asks Alice to make the network handover and informs the CC.


Then we can see that the prefetching is successfully started by the CC when the DE informs it about the user switching.

Since the requested video has been prefetched in CacheAP_FN, Alice can continuously watch the video without any impact from the network handover. The evidence for this result is shown in the recorded video of the caching demo from our integrated demonstration in Lannion, France, in April 2016.
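The five-step prefetch workflow above can be summarized in a short Python sketch: the DE predicts the next attachment point and notifies the CC, which instructs the target CacheAP to prefetch the missing chunks before the UE arrives. Class and method names are illustrative assumptions, not the COMBO implementation.

```python
# Illustrative sketch of the Caching_TC_2 prefetch workflow.
# Names (CacheController, on_handover_predicted) are hypothetical.

class CacheController:
    def __init__(self, cache_aps):
        self.cache_aps = cache_aps                    # AP name -> set of cached chunks

    def on_handover_predicted(self, user_chunks, target_ap):
        # Step 4: CC asks the future AP to prefetch the outstanding chunks.
        missing = user_chunks - self.cache_aps[target_ap]
        self.cache_aps[target_ap] |= missing
        return missing


aps = {"CacheAP_MN": {1, 2, 3}, "CacheAP_FN": set()}
cc = CacheController(aps)
# Steps 2-3: the DE detects Alice leaving CacheAP_MN and informs the CC.
prefetched = cc.on_handover_predicted({1, 2, 3}, "CacheAP_FN")
# Step 5: when Alice attaches to CacheAP_FN, her chunks are already cached.
ready = {1, 2, 3} <= aps["CacheAP_FN"]
```

Because the target AP already holds the chunks at attachment time, playback continues across the handover, which is the behaviour validated in this test.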


9.6 Centralized NG-PoP architecture unitary tests

This set of tests details the different tests and validations that need to be conducted in the context of the centralized NG-POP approach explored in COMBO. The targeted tests are sorted according to the involved plane (i.e., control plane and data/user plane). The basic objective of all the planned tests is to demonstrate a candidate implementation of the centralized NG-POP solution, where fixed and mobile (EPC) network functions (at both layers) are located at the UAG close to the core CO reference point (see section 3.2). The architecture relies on an SDN orchestrator to configure automatically and dynamically a multi-layer (packet and optical) aggregation network to accommodate mobile and fixed services from the access networks towards the UAG at the core CO. Requests to establish, modify and release the (fixed and mobile) services are made via the A-CPI to the SDN orchestrator. In other words, we have defined an API that requests transport resources for backhauling services.

The logical representation of the experimental platform used to conduct the selected COMBO centralized NG-POP approach demonstration is depicted in Figure 45:

Figure 45 Logical view of the setup: MLN aggregation backhaul network between LTE eNodeBs and the UAG’s vEPC @ centralized NG-POP

This setup is deployed on top of the CTTC ADRENALINE testbed located in Barcelona (Spain). It is formed of different network domains and technologies:

LTE UE and eNodeBs are emulated and running in a compute node (server) using the LENA emulator

Packet (MPLS) domains connected to the RAN and UAG physical element

Optical (WDM or WSON) domain connected to the packet domains.

(Figure 45 components: LTE UEs and eNodeBs with the MME and a video server; MPLS OFP packet access and core domains, each with a Ryu (SDN) controller exposing OpenFlow southbound and an NBI; a WSON aggregation domain with an AS PCE and TED; the ABNO orchestrator comprising the ABNO Controller, PCE, Topology Server, Provisioning Manager and Flow Server; and the Fixed and Mobile Service Apps and the IP Edge (vEPC) reaching the ABNO via a REST API at the cNG-POP hosting the VNFs.)


The UAG for this specific setup is deployed in a compute node. The vEPC is deployed in this server using the implementation provided by the LENA emulator.

Last but not least, both the video client and server used for the final demonstration also run in different servers (see section 0).

Figure 46 shows the physical view of the above setup rolled out in the ADRENALINE testbed.

Figure 46 Physical view of the setup deployed in the CTTC ADRENALINE testbed (Barcelona, Spain)

The service requests are made via the LTE/EPC developed in the CTTC LENA emulator. That is, whenever a new EPS Bearer is created, the multi-layer aggregation network is automatically configured by the SDN ABNO orchestrator to satisfy the service requirements. The service call is sent from the vEPC MME to the ABNO via a defined REST API. On top of the ABNO orchestrator, fixed and mobile service applications run, demanding service establishment.

The design of the service call / request contains the following attributes:

Endpoint IP addressing: aEnd (IP of eNodeB / RGW) and zEnd (IP of SGW/PGW / BNG)

Bandwidth (bytes/s); for mobile services derived from the Maximum and/or Guaranteed Bit Rate (MBR/GBR)


Latency (ms); for mobile services derived from the EPS Bearer QCI

eNodeB TEID; GTP TEIDs (for mobile services)

Thus, the general objective is to validate at both control and data/user planes the orchestration of end-to-end services within an aggregation network. In particular, we concentrated on the dynamic establishment of packet and optical tunnels accommodating EPS bearers from the RAN to the corresponding GW at the centralized NG-POP.

In the centralized NG-POP we have defined two groups of tests aligned with the two planes constituting the whole setup:

Data/User plane: Those tests are labelled by the test code cNGPOP_DP.

Control plane: Those tests are labelled by the test code cNGPOP_CP.

The tests under cNGPOP_DP aim at validating that the user-plane traffic of the mobile services is conveniently transported between the two endpoints (i.e., eNB and GW at the UAG). This takes into account the configuration of the involved network elements providing the backhauling of the mobile services, along with the compliant construction of the mobile GTP flows on top of the MPLS label switched paths (LSPs) over the optical tunnels. In this regard, two data plane tests are defined:

cNGPOP_DP_1: Transport of LTE Bearer over MPLS/WSON aggregation network.

cNGPOP_DP_2: Validation of the MPLS-TP and WSON configuration.

The tests under cNGPOP_CP address basically all the control processes, functions, protocols, interfaces, etc. defined to enable the automatic provisioning of end-to-end services over the aggregation network. In this regard, three control plane tests are defined:

cNGPOP_CP_1: Coordination via ABNO orchestrator of the EPC and aggregation network.

cNGPOP_CP_2: Measurement of the time required to configure the aggregation network to backhaul the EPS Bearer.

cNGPOP_CP_3: Use of CTTC ABNO GUI


9.6.1 Test case cNGPOP_DP_1

Description of the item under test

Validate transport of EPS Bearer over MPLS/WSON

Test report:

Pass:

Fail:

Test Setup: LTE/EPC implementation on the CTTC LENA emulator interconnected (at both control and data plane level) to the CTTC ADRENALINE multi-layer aggregation network.

Purpose: The purpose is to transport the data traffic of mobile services, i.e. the S1-U interface (GTP), over MPLS packet tunnels, and those in turn over optical connections.

Use case: UC1 and UC6

Preconditions: The network is completely available, i.e. no other connections are established. Additionally, no network element configuration has been pre-configured. The configuration (push and pop MPLS tags) is automatically performed at the MPLS border nodes connected to both RAN (eNodeB) and UAG.

Test equipment: Wireshark protocol analyser

Test procedure: For a UE, an EPS bearer is created between the eNodeB and the UAG, where the EPC gateways (SGw and PGw) are located. The creation of the EPS bearer triggers the configuration of the aggregation network (MPLS packet nodes and optical switches) to enable the backhauling.

Pass-fail criteria: The test is considered passed when user traffic (GTP-U packets) is received at both endpoints of the LTE/EPC infrastructure (i.e., eNodeB and SGW/PGW) and it is verified that the MPLS labels are added and stripped adequately at the MPLS packet nodes, i.e., that the GTP-U traffic is not altered. In particular, a video application is used as real data traffic backhauled over the described setup.

Observation:

The validation of this multi-layer transport is performed by confirming that the GTP-U data traffic exchanged between the eNodeB and the vEPC SGw is encapsulated within the MPLS connections.

(Protocol analyser capture: GTP-U over MPLS)

9.6.2 Test case cNGPOP_DP_2

Description of the item under test

Validation of the MPLS-TP and WSON configuration

Test report:

Pass:

Fail:

Test Setup: LTE/EPC implementation on the CTTC LENA emulator interconnected (at both control and data plane level) to the CTTC multi-layer aggregation network.

Purpose: A unified SDN orchestrator is responsible for configuring the multi-layer aggregation network. This infrastructure is composed of multiple domains combining two switching technologies, namely packet and optical. The aim is to validate the configuration, via the SouthBound Interfaces (i.e., the OpenFlow and PCEP protocols), of the heterogeneous network elements that are used to finally accommodate the EPS Bearer. The following scheme depicts the summarized configuration, representing only the border MPLS packet nodes (connected to the eNodeB and SGw) that need to be configured at the time of setting up the bidirectional MPLS traffic flows carrying the EPS Bearer (GTP-U):

Use case: UC1 and UC6

Preconditions: The EPS bearer triggering is made by the vEPC MME using the NorthBound/A-CPI Interface of the SDN orchestrator. To this end, a REST API is used which provides sufficient information (unambiguous EPS Bearer identifiers, service requirements, endpoints, etc.) to trigger the backhaul configuration.

Test equipment: Wireshark protocol analyser

(Scheme: the two border MPLS switches, DatapathID 258 at 10.1.6.106 next to eNB1 and DatapathID 257 at 10.1.6.107, match the mobile flows on the srcIP/dstIP pair (eNB1 10.0.0.101, SGw 10.0.0.1) and TEID 2, pushing/popping MPLS label 1020 for one direction of the bearer and label 1021 for the reverse direction, forwarding to the corresponding output port.)


Test procedure: The number of eNodeBs and of UEs per eNB is defined. For each UE, an EPS Bearer is created between the eNodeB and the S-GW/P-GW. The creation of the EPS bearer triggers the configuration of the aggregation network.

Pass-fail criteria: The test is validated by checking that the computed MPLS and optical nodes are configured to establish the end-to-end connection supporting the EPS Bearer.

Observation:

The following protocol analyser capture shows the configuration of the OpenFlow-controlled MPLS switches, which includes:

The (extended) OpenFlow rules tailored for the mobile (GTP-U) data traffic, which determine the matching per flow by the tuple formed by: input port, eNodeB and SGw IPv4 addresses, and the TEIDs.

For the (mobile) flows matching the above tuple, the actions are to forward to a given output port and to push/pop the MPLS tags, depending on the MPLS packet node's location.

The following Wireshark capture shows an example of how the MPLS nodes are configured using OpenFlow Flow_Mod messages (version 1.3). The main body of the message contains two fields: match and instructions. The match specifies the criteria used to filter a flow, and the instructions determine what to do with the matching packets.
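The structure of such a Flow_Mod can be sketched as a plain Python data structure: a match on input port, the eNodeB/SGw IPv4 addresses and the (extended, non-standard) TEID field, plus an instruction that pushes or pops an MPLS label and forwards. The field and action names below follow OpenFlow conventions loosely; the TEID match is the COMBO extension, and this representation is illustrative only, not the wire format.

```python
# Hedged sketch of the extended OpenFlow 1.3 Flow_Mod described in the text.
# "gtp_teid" is the non-standard match extension; action encoding is simplified.

def gtp_flow_mod(in_port, ipv4_src, ipv4_dst, teid, mpls_label, out_port, ingress):
    """Build an illustrative flow entry for one direction of an EPS Bearer."""
    match = {
        "in_port": in_port,
        "eth_type": 0x0800,          # IPv4
        "ipv4_src": ipv4_src,
        "ipv4_dst": ipv4_dst,
        "gtp_teid": teid,            # extended (experimental) match field
    }
    if ingress:                       # border node entering the MPLS domain: push
        actions = [
            {"type": "PUSH_MPLS"},
            {"type": "SET_MPLS_LABEL", "label": mpls_label},
            {"type": "OUTPUT", "port": out_port},
        ]
    else:                             # border node leaving the MPLS domain: pop
        actions = [{"type": "POP_MPLS"}, {"type": "OUTPUT", "port": out_port}]
    return {"match": match, "instructions": [{"type": "APPLY_ACTIONS", "actions": actions}]}


# Example: one direction of the bearer (eNB1 -> SGw, TEID 2), pushing a label.
fm = gtp_flow_mod(1, "10.0.0.101", "10.0.0.1", 2, 1021, 1, ingress=True)
```

The reverse-direction entry at the opposite border node would use `ingress=False`, popping the label before delivery, mirroring the push/pop pairing shown in the scheme.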

(Wireshark capture: OpenFlow Extension)

9.6.3 Test case cNGPOP_CP_1

Description of the item under test

ABNO – EPC coordination

Test report:

Pass:

Fail:

Test Setup: LTE/EPC implementation on the CTTC LENA emulator interconnected (at both control and data plane level) to the CTTC ADRENALINE network.

Purpose: An SDN orchestrator (ABNO) coordinates the control and configuration commands triggered by the EPC MME element. To this end, the path is first computed (by the PCE entity), which uses the topology manager to retrieve the connectivity of the whole network. Next, the ABNO provisioning manager coordinates the establishment of the different segments of the end-to-end connections with the controllers of the involved domains.
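The two-phase coordination just described (path computation, then per-domain provisioning) can be sketched in Python. This is a toy model under stated assumptions: a breadth-first search stands in for the PCE computation, and per-node callables stand in for the domain controllers; none of the names reflect the actual CTTC ABNO implementation.

```python
# Toy sketch of the ABNO workflow: PCE-style path computation over the
# topology, then provisioning of each element along the path.
# All names are illustrative assumptions.

class ABNOOrchestrator:
    def __init__(self, topology, domain_controllers):
        self.topology = topology                  # node -> list of neighbours
        self.controllers = domain_controllers     # node -> configuration callable

    def compute_path(self, a_end, z_end):
        """Breadth-first search standing in for the PCE computation."""
        frontier, seen = [[a_end]], {a_end}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == z_end:
                return path
            for nxt in self.topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def provision(self, a_end, z_end):
        """Provisioning manager: configure every element along the path."""
        path = self.compute_path(a_end, z_end)
        configured = [self.controllers[node](path) for node in path]
        return path, configured


topo = {"eNB": ["mpls-access"], "mpls-access": ["wson"],
        "wson": ["mpls-core"], "mpls-core": ["SGw"], "SGw": []}
ctrl = {n: (lambda p, n=n: f"{n}: configured") for n in topo}
path, confs = ABNOOrchestrator(topo, ctrl).provision("eNB", "SGw")
```

The resulting path crosses the packet access domain, the optical (WSON) domain and the packet core domain, matching the multi-layer segments coordinated by the real provisioning manager.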

Use case: UC1 and UC6

Preconditions: The EPS Bearer is negotiated between the eNodeB and the MME using the regular LTE/EPC process defined in 3GPP. Once it is established, the MME communicates with the SDN orchestrator via the defined NorthBound Interface based on a REST API.


Test equipment: Wireshark protocol analyser

Test procedure: For the considered setup, a new EPS Bearer is established. A Wireshark protocol analyser is configured in the MME to capture the messages sent to the SDN orchestrator. Furthermore, other Wireshark capturing processes are also triggered in the modules forming the SDN orchestrator, to check the sequence of the messages used to configure the aggregation network. The following picture shows the workflow used for the test:

Pass-fail criteria: The test is considered valid when, for the triggered EPS bearers, the GTP-U data packets are received at the SGW/PGW and forwarded to the destination IP address of the remote video server.

Observation:

The Wireshark captures of the messages exchanged among the different elements forming the ABNO and the underlying controllers (for the packet and optical network elements) are shown in the following picture.

(Workflow figure: the MME / Mobile Service App and the Fixed Service App interact with the ABNO Controller via REST APIs.)


The contents of the service call (step 1 in the previous figure) are shown in the following figure. Specifically, it specifies:

Egress and ingress packet MPLS nodes (aEnd and zEnd) connected to the eNodeB and the EPC SGw,

Requested QoS requirements (trafficParams such as throughput and latency),

Match rules:

o EPS Bearer details (for one direction, either upstream or downstream) in terms of eNodeB and SGW addressing (ipv4Src and ipv4Dst) and the TEID (experimentalTeid).
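Using the field names listed above (aEnd/zEnd, trafficParams, ipv4Src, ipv4Dst, experimentalTeid), a service-call body might be reconstructed as follows. The exact schema of the CTTC REST API is not given in the text, so this is an illustrative approximation only; the concrete values are taken from the scheme of test cNGPOP_DP_2.

```python
# Hedged reconstruction of the ABNO service-call body. The schema is an
# assumption built from the field names in the text, not the real API.

import json

service_call = {
    "aEnd": "10.1.6.106",                      # MPLS border node facing eNB1
    "zEnd": "10.1.6.107",                      # MPLS border node facing the SGw
    "trafficParams": {
        "throughput": 312500,                  # bytes/s, derived from the MBR/GBR
        "latency": 100,                        # ms, derived from the EPS Bearer QCI
    },
    "match": {
        "ipv4Src": "10.0.0.101",               # eNB1
        "ipv4Dst": "10.0.0.1",                 # SGw
        "experimentalTeid": 2,                 # GTP TEID of the bearer
    },
}

payload = json.dumps(service_call)             # body of the REST call to the ABNO
```

One such call would describe a single direction of the bearer; the reverse direction swaps ipv4Src/ipv4Dst, in line with the per-direction match rules described above.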


9.6.4 Test case cNGPOP_CP_2

Description of the item under test

Measurement of the Configuration time for provisioning mobile EPS services

Test report:

Pass:

Fail:

Test Setup: LTE/EPC implementation on the CTTC LENA emulator interconnected (at both control and data plane level) to the CTTC ADRENALINE network and cloud testbed.

Purpose: The objective is to measure the time required by the complete system (involving control and data plane processes) to provision an end-to-end mobile service from scratch. In other words, the idea is to roughly obtain the amount of time needed to request and apply the aggregation network configuration to transport an EPS Bearer being negotiated.

Use case: UC1 and UC6

Preconditions: The triggering of the EPS bearer is made by the EPC MME using REST-based A-CPI to the unified SDN orchestration system.

Test equipment: Wireshark protocol analyser

Test procedure: Request a new EPS bearer. Trigger Wireshark at the endpoints and check when the GTP-U flows are received. This roughly provides an idea of the required service provisioning time, which encompasses a number of interactions between the EPC and the ABNO controller, the configuration of the network elements, etc.

Pass-fail criteria: It is checked that the configuration of the data plane is actually performed, and the time is measured from the reception of the request to transport a new EPS Bearer until mobile data is actually delivered between the mobile connection endpoints: eNodeB and EPC.

Observation:

According to the performed validation tests, the whole process for configuring the aggregation network, involving packet nodes, optical switches and optical transceivers at the borders between the packet and optical domains, takes around 2-3 seconds. Most of this time is due to the time required by the available DWDM transceivers. For the sake of clarification, this time should not be considered a benchmark. Indeed, at the control / orchestration level, the computation and configuration of the whole network equipment is achieved in the order of hundreds of ms. However, as mentioned, the observed delay is caused by the available hardware at the CTTC labs.
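The measurement method itself is simple to express: timestamp the bearer request and the first GTP-U packet seen at the endpoint, and report the difference as the provisioning time. In the real test both timestamps come from the Wireshark captures; in the sketch below they are simulated values chosen to be consistent with the 2-3 s observed, and the function name is a hypothetical convenience.

```python
# Sketch of the cNGPOP_CP_2 measurement: provisioning time is the interval
# between the EPS Bearer request and the first GTP-U packet at the endpoint.
# Timestamps here are simulated; in the test they come from Wireshark.

def provisioning_time(t_request, t_first_gtpu):
    """Elapsed seconds between the bearer request and first user data."""
    if t_first_gtpu < t_request:
        raise ValueError("first GTP-U packet cannot precede the request")
    return t_first_gtpu - t_request


# Simulated capture timestamps (seconds): request at t=10.0, first GTP-U
# packet at t=12.4, consistent with the 2-3 s measured in the testbed.
elapsed = provisioning_time(10.0, 12.4)
```

Subtracting the control-plane portion (hundreds of ms, per the observation above) from such a measurement isolates the contribution of the DWDM transceiver hardware.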


9.6.5 Test case cNGPOP_CP_3

Description of the item under test

Use of the CTTC ABNO GUI

Test report:

Pass:

Fail:

Test Setup: LTE/EPC implementation on the CTTC LENA emulator interconnected (at both control and data plane level) to the CTTC ADRENALINE network and cloud testbed.

Purpose: The purpose is to use the CTTC ABNO GUI to show all the configuration intricacies required at the time of establishing a new EPS Bearer over the multi-layer aggregation network connecting the RAN and the EPC elements at the UAG.

Use case: UC1 and UC6

Preconditions: The triggering of the EPS bearer is made by the EPC MME via REST-based A-CPI to the SDN orchestrator.

Test equipment: The implemented CTTC ABNO GUI is shown in the next figure. The figure represents the view of the topology retrieved by the SDN orchestrator and stored in the topology manager. The GUI is thus a visual representation of the vision that the ABNO controller has of the underlying transport infrastructure.


Test procedure: An EPS bearer is requested, and the different commands are observed, as well as how the network topology looks before the establishment of the EPS bearer and afterwards (i.e., once the EPS Bearer is successfully established).

Pass-fail criteria: It is validated in the topology view shown by the GUI that, when a new EPS Bearer is established, the packet domains are interconnected by a new virtual packet link created on top of the optical domain. In other words, if establishing EPS Bearers causes modifications in the aggregation network topology, such changes are reflected in the vision represented by the ABNO GUI.

Observation:

This GUI was used during the Lannion demonstration day to show some features of the centralized NG-POP demonstration, such as the initial topology discovery of the aggregation network by the SDN orchestrator and the visual configuration of the involved network elements after requesting and setting up a new EPS Bearer. Specifically, the GUI was running on a server located at the CTTC premises. Recall that the whole demonstration of the centralized NG-POP approach was conducted remotely. Thus the GUI provided a very useful visual tool to detail and showcase the intricacies (e.g., the number of configurations made by the SDN orchestrator) behind the centralized NG-POP approach.
