
Scalable Mobile Backhauling with Information-Centric Networking

Luca Muscariello, Orange Labs Networks, Network Modeling and Planning, and IRT SystemX

Joint work with G. Carofiglio, M. Gallo, D. Perino, Bell Labs, Alcatel-Lucent

motivation and trends

Content-centric nature of Internet usage highlights inefficiencies of the host-centric transport model

Higher costs in mobile infrastructure to sustain traffic growth, with no innovation at the network layer

Reduced margins for MNOs (…ok in Europe!)

ISP countermeasures: quest for novel business opportunities in the service-delivery value chain

Increased network control to lower costs: network cost optimization is constrained by the 'Traffic Engineering Triangle'

outline

mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today's mobile backhaul


objective: innovative network solutions to cope with huge mobile traffic growth without significant capacity upgrades
tool: real traffic observations from our network, and a joint BL/OL experimental campaign over ~100 nodes with real workload/topology
achievements: our ICN design provides a content-aware network substrate in the mobile backhaul, compatible with the 3GPP standard

scalable mobile backhaul with ICN


We focus on HTTP transactions of the predominant applications (web browsing, audio/video, YouTube), in one peak hour, for a set of macro cells covering a metro area.

‒ cacheability: the percentage of requests for objects requested at least twice in a given time period
‒ on average, 52% of total requests are cacheable
‒ audio/video applications, and YouTube in particular, can attain values up to 86%

traffic observations in the backhaul
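Cacheability as defined above can be computed directly from a request trace; a minimal sketch, assuming the trace is a flat list of object identifiers (the trace format and names are hypothetical):

```python
from collections import Counter

def cacheability(requests):
    """Fraction of requests for objects requested at least twice
    in the observation window (the definition used above)."""
    counts = Counter(requests)
    cacheable = sum(c for c in counts.values() if c >= 2)
    return cacheable / len(requests) if requests else 0.0

# Toy trace: object identifiers observed during one peak hour.
trace = ["a", "b", "a", "c", "b", "a", "d"]
print(cacheability(trace))  # 5 of 7 requests target objects seen >= 2 times
```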

outline

mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today's mobile backhaul

Methodology

We need to experiment with the full stack of protocols
– CS/PIT/FIB
– caching, queuing
– flow control, congestion control

Realistic experiments
– realistic workload

Repeatable experiments
– control 100% of your experiment
– run and monitor it continuously

Lurch

A newly designed protocol needs to be tested. Event-driven simulation:
– limited in the number of events (hence topology size)
– computation is hard to parallelize

Large-scale experiments: complex to manage

We needed a test orchestrator

From protocol design to large scale experimentation

Lurch

Lurch is a test orchestrator for CCNx (soon CCN-lite and NFD). It simplifies and automates ICN protocol testing over a list of interconnected servers (e.g. Grid'5000). Lurch runs on a separate machine and controls the test.

Architecture

The Lurch controller drives, on each node, a virtualized data plane, a control plane, and the application layer over a management channel. The per-node protocol stack runs CCNx over TCP/UDP on a virtualized IP layer, on top of the physical IP and PHY layers.

Lurch

Creates virtual interfaces between nodes (e.g. on Grid'5000):
– a Bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes
– network IP tunnels (iptunnel) build the virtualized interfaces
– one physical interface (eth0), multiple virtual interfaces (tap0, tap1, …)

Topology management

#!/bin/bash
sysctl -w net.ipv4.ip_forward=1
modprobe ipip

iptunnel add tap0 mode ipip local 172.16.49.50 remote 172.16.49.5
ifconfig tap0 10.0.0.2 netmask 255.255.255.255 up
route add 10.0.0.1 tap0

iptunnel add tap1 mode ipip local 172.16.49.50 remote 172.16.49.51
ifconfig tap1 10.0.0.3 netmask 255.255.255.255 up
route add 10.0.0.4 tap1
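The per-node script above is computed by the orchestrator from a topology description; a minimal sketch of how such generation might look (the `Tunnel` record, field names, and addressing plan are hypothetical, not Lurch's actual internals):

```python
from dataclasses import dataclass

@dataclass
class Tunnel:
    tap: str        # virtual interface name on the local node
    local: str      # physical IP of the local node
    remote: str     # physical IP of the peer node
    addr: str       # overlay address assigned to the tap
    peer_addr: str  # overlay address routed through the tap

def render_config(tunnels):
    """Render the Bash configuration pushed to one experiment node."""
    lines = ["#!/bin/bash",
             "sysctl -w net.ipv4.ip_forward=1",
             "modprobe ipip"]
    for t in tunnels:
        lines += [
            f"iptunnel add {t.tap} mode ipip local {t.local} remote {t.remote}",
            f"ifconfig {t.tap} {t.addr} netmask 255.255.255.255 up",
            f"route add {t.peer_addr} {t.tap}",
        ]
    return "\n".join(lines)

cfg = render_config([Tunnel("tap0", "172.16.49.50", "172.16.49.5",
                            "10.0.0.2", "10.0.0.1")])
print(cfg)
```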

[figure: three experiment nodes with physical interfaces eth0 (172.16.49.50, 172.16.49.5, 172.16.49.51) and virtual tap interfaces (10.0.0.1–10.0.0.4); the controller manages the virtual overlay on top of the physical network]

Lurch

Remotely assigns network resources to nodes, preserving physical bandwidth constraints:
– a Bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes
– Traffic Control (tc), the Linux tool to limit bandwidth, add delay, packet loss, etc.

Resource management

#!/bin/bash
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 1

tc class add dev eth0 parent 1: classid 1:1 htb rate 10.0mbit ceil 10.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.5 flowid 1:1

tc class add dev eth0 parent 1: classid 1:2 htb rate 50.0mbit ceil 50.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.51 flowid 1:2

[figure: a 1 Gbps physical link carries virtual links shaped to 10 Mbps and 50 Mbps by the controller]

Lurch

Remotely controls name-based forwarding tables:
– a Bash configuration file is computed remotely by the orchestrator and transferred to the experiment nodes
– CCNx's FIB control command: ccndc

Name-based control plane

#!/bin/bash

ccndc add ccnx:/music UDP 10.0.0.1

ccndc add ccnx:/video UDP 10.0.0.4

The resulting FIB:

Name prefix   face
ccnx:/music   0
ccnx:/video   1

[figure: the controller installs name routes so that interests for ccnx:/music and ccnx:/video leave on faces 0 and 1, respectively]
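Name-based forwarding resolves each interest name against such a FIB by longest-prefix match; a minimal sketch, assuming /-separated name components and a dict-based FIB (both are illustrative simplifications, not CCNx's actual data structures):

```python
def lpm(fib, name):
    """Longest-prefix match of a /-separated ICN name against the FIB.
    fib maps name prefixes (e.g. 'ccnx:/music') to output faces."""
    components = name.split("/")
    # Try progressively shorter prefixes until one is in the FIB.
    for i in range(len(components), 0, -1):
        prefix = "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return None  # no route: the interest cannot be forwarded

fib = {"ccnx:/music": 0, "ccnx:/video": 1}
print(lpm(fib, "ccnx:/music/artist/track.mp3"))  # -> 0
print(lpm(fib, "ccnx:/news"))                    # -> None
```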

Lurch

Remotely controls the experiment workload: a file download application is started according to the experiment's needs.

Arrival process: Poisson, CBR, …
File popularity: Zipf, Weibull, etc., or trace-driven

Application Workload

Two ways:
– centralized workload generation at the controller
– workload generation delegated to clients, for performance improvement
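A synthetic workload of the kind listed above (Poisson request arrivals over a Zipf-popular catalogue) can be sketched as follows; catalogue size, request rate, and the seed are arbitrary illustration values:

```python
import random

def zipf_workload(n_objects, n_requests, alpha=1.0, rate=10.0, seed=42):
    """Generate (arrival_time, object_id) pairs: Poisson arrivals
    (exponential inter-arrival times at `rate` req/s) with Zipf(alpha)
    object popularity over a catalogue of n_objects items."""
    rng = random.Random(seed)
    # Zipf probabilities: p(k) proportional to 1 / k^alpha.
    weights = [1.0 / (k ** alpha) for k in range(1, n_objects + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    t, workload = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(rate)  # Poisson process: exponential gaps
        obj = rng.choices(range(1, n_objects + 1), weights=probs)[0]
        workload.append((t, obj))
    return workload

w = zipf_workload(n_objects=1000, n_requests=5)
print(w)
```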


Lurch

Remotely controls experiment statistics: Bash start/stop commands sent remotely.

– CCNx statistics (e.g. caching, forwarding) through logs
– top / vmstat to monitor active processes' CPU usage (e.g. ccnd)
– ifstat to monitor link rates

Measurements

At the end of the experiment, statistics are collected and transferred to the user
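Collected per-link logs can then be reduced to average throughput per interface; a minimal post-processing sketch, assuming a two-column ifstat-style sample format (KB/s in, KB/s out) with header lines to skip:

```python
def average_rates(log_lines):
    """Average in/out rates (KB/s) from two-column ifstat-style samples."""
    samples = []
    for line in log_lines:
        fields = line.split()
        if len(fields) == 2:
            try:
                samples.append((float(fields[0]), float(fields[1])))
            except ValueError:
                continue  # skip non-numeric header lines
    if not samples:
        return (0.0, 0.0)
    n = len(samples)
    return (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)

log = ["eth0", "KB/s in  KB/s out", "120.5 300.0", "130.5 500.0"]
print(average_rates(log))  # -> (125.5, 400.0)
```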


EXPERIMENTS

Running large scale experimentation on Content-Centric Networking via the Grid’5000 platform


Large topologies: up to 100 physical nodes, more than 200 links
Realistic scenarios: mobile backhaul


A down-scaled model of a backhaul network:
– 4 "regional" PDN GWs connected by a full mesh
– SGWs are assumed to be co-located with the PDN GWs
– 2 CDN servers external to the backhaul, reached via two PDN GWs
– each PDN GW is the root of a fat-tree topology composed of 20 nodes
– eNodeBs aggregate traffic generated by three adjacent cells
– every eNodeB serves the same average traffic demand

network topology
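The topology described above can be sketched as an adjacency list; tree arity and node naming below are assumptions (the slide only fixes 4 full-meshed PDN GW roots and 20 nodes per tree):

```python
def build_backhaul(n_gws=4, tree_nodes=20, arity=4):
    """Adjacency list: n_gws full-meshed PDN GW roots, each the root
    of a tree of tree_nodes nodes with `arity` children per node."""
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    gws = [f"gw{g}" for g in range(n_gws)]
    # Full mesh between the PDN GWs.
    for i in range(n_gws):
        for j in range(i + 1, n_gws):
            link(gws[i], gws[j])
    # Each GW roots a tree of tree_nodes additional nodes.
    for g, gw in enumerate(gws):
        nodes = [gw] + [f"gw{g}-n{k}" for k in range(tree_nodes)]
        for k in range(1, len(nodes)):
            link(nodes[k], nodes[(k - 1) // arity])  # parent chosen by arity
    return adj

adj = build_backhaul()
print(len(adj))        # 4 GWs + 4 * 20 tree nodes = 84 nodes in total
print(len(adj["gw0"])) # 3 mesh peers + 4 tree children
```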


Software:
– we used an ICN prototype (http://www.ccnx.org)
– with optimized distributed congestion control and multipath forwarding mechanisms (Carofiglio et al., IEEE ICNP 2013), based on decomposition via Lagrangian multipliers with physical meaning:
  - network latency (measured in CCN/NDN by request/reply)
  - network node flow-rate unbalance (registered in the pending interest table)
– LRU data replacement, caching along the path (dumb caching)

Experimental testbed:
– on Grid'5000
– bootable customized kernels with our network prototype
– Lurch: our network experiment orchestrator (statistics collection, etc.)

Workload:
– down-scaling of the traffic characterization obtained from Orange traces
– requests are aggregated at macro-cell level

methodology


the platform


we compare, at equal cache budget:
– Baseline: traffic is routed through a single shortest path
– ICN: ICN transport, multipath forwarding, and LRU caching
– PDNCache: caches are deployed at PDN GWs only; traffic is routed through a single shortest path
– eNodeBCache: caches are deployed at eNodeBs only; traffic is routed through a single shortest path
– ICN + PDNCache

evaluated solutions
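The ICN configurations above rely on LRU replacement at each cache along the path; a minimal sketch of the policy (OrderedDict-based, capacity counted in objects; the names and name-like keys are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Cache with least-recently-used replacement, as used at each ICN node."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> object, least recent first

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: the interest is forwarded upstream
        self.store.move_to_end(key)  # refresh recency on a hit
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put("/video/a", b"chunk")
cache.put("/video/b", b"chunk")
cache.get("/video/a")            # touch a, so b becomes least recent
cache.put("/video/c", b"chunk")  # evicts /video/b
print(cache.get("/video/b"))     # -> None
```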

results – latency reduction

ICN shows the best QoE in terms of delivery time. Improved user QoE is due to:
– in-network caching
– dynamic multipath transfer

― a factor-3 reduction in average delivery time


ICN sensibly decreases bandwidth utilisation inside the mobile backhaul w.r.t. alternative solutions, allowing potential cost reduction, both for traffic served in the backhaul and from outside the backhaul

– up to 40% bandwidth savings in the backhaul

results – bandwidth savings

results – enhancing network flexibility

We emulate a flash crowd phenomenon on a link and compare the link load over time for ICN and for the baseline scenario without caching:

ICN link load and average delivery time are almost unaffected by the flash crowd (by virtue of the transport/caching interplay and multipath).

outline

mobile backhaul opportunities
evaluation scenario and results
introducing ICN in today's mobile backhaul

integrating ICN in today's backhaul

ICN HEADER INTRODUCTION – two alternatives:
1. in the GTP-U encapsulation
2. after the IP (IPsec) header, with a specific protocol value

ICN DATA DELIVERY PROCESS – two alternatives:
a) ICN proxy co-located with the eNodeB (with DPI)
b) HTTP plugin at the end user

POLICY-CHARGING: every node sends periodic reports about traffic statistics to control-plane elements via ad-hoc GTP-C functions

conclusion and current work

ICN allows removing anchoring to manage mobility:
– mobility is not a technical problem
– communication is connection-less
– multipath, multi-homing, multicast are native
– in-network caching is native and outperforms PoP caching

Currently: a high-speed prototype at Alcatel-Lucent (40 Gbps)

Ongoing discussion on the ALU 7750 edge router…

Demonstrations:
– common demonstration at Bell Labs Future X Days in September 2014
– demonstration at ACM SIGCOMM ICN 2014, to be held in Paris, September 24–26

Questions

1. G. Carofiglio, M. Gallo, L. Muscariello, Bandwidth and storage sharing performance in information-centric networking, in Proc. of the ACM SIGCOMM ICN workshop, Toronto, Canada, 2011.

2. G. Carofiglio, M. Gallo, L. Muscariello, D. Perino, Modeling data transfer in content-centric networking, in Proc. of the 23rd International Teletraffic Congress (ITC23), San Francisco, CA, USA, 2011.

3. G. Carofiglio, M. Gallo, L. Muscariello, ICP: design and evaluation of an Interest Control Protocol for Content-Centric Networks, in Proc. of the IEEE INFOCOM NOMEN workshop, Orlando, FL, USA, March 2012.

4. G. Carofiglio, M. Gallo, L. Muscariello, Joint hop-by-hop and receiver-driven Interest Control Protocol for Content-Centric Networks, in Proc. of the ACM SIGCOMM workshop on information-centric networking, Helsinki, Finland, 2012 (best paper award).

5. G. Carofiglio, M. Gallo, L. Muscariello, On the performance of bandwidth and storage sharing in information-centric networks, Elsevier Computer Networks, 2013.

6. G. Carofiglio, M. Gallo, L. Muscariello, D. Perino, Evaluating per-application storage management in content-centric networks, Elsevier Computer Communications: Special Issue on Information-Centric Networking, 2013.

7. M. Gallo, B. Kaumann, L. Muscariello, A. Simonian, C. Tanguy, Performance evaluation of the random replacement policy for networks of caches, Elsevier Performance Evaluation, 2013.

8. G. Carofiglio, M. Gallo, L. Muscariello, M. Papalini, Multipath congestion control in content-centric networks, in Proc. of the IEEE INFOCOM NOMEN workshop, Turin, Italy, April 2013.

9. G. Carofiglio, M. Gallo, L. Muscariello, M. Papalini, S. Wang, Optimal multipath congestion control and request forwarding in information-centric networks, to appear in Proc. of IEEE ICNP, Goettingen, Germany, October 2013.

10. White paper in collaboration with Bell Labs, Scalable Mobile Backhauling via Information Centric Networking: a glimpse into the benefits of an Information Centric Networking approach to data delivery, 2013.

publications