
Optimal Multipath Congestion Control and Request Forwarding in Information-Centric Networks

Giovanna Carofiglio, Massimo Gallo, Luca Muscariello, Michele Papalini, Sen Wang — Bell Labs, Alcatel-Lucent; Orange Labs, France Telecom; University of Lugano; Tsinghua University.
first.last@{alcatel-lucent.com, orange.com, usi.ch}, [email protected]

Abstract—The evolution of the Internet into a distributed Information access system calls for a paradigm shift to enable an evolvable future network architecture. Information-Centric Networking (ICN) proposals rethink the communication model around named data, in contrast with the host-centric transport view of TCP/IP. Information retrieval is natively pull-based, driven by user requests, point-to-multipoint and intrinsically coupled with in-network caching.

In this paper, we tackle the problem of joint multipath congestion control and request forwarding in ICN for the first time. We formulate it as a global optimization problem with the twofold objective of maximizing user throughput and minimizing overall network cost. We solve it via decomposition and derive a family of optimal congestion control strategies at the receiver and of distributed algorithms for dynamic request forwarding at network nodes. An experimental evaluation of our proposal is carried out in different network scenarios to assess the performance of our design and to highlight the benefits of an ICN approach.

    I. INTRODUCTION

The TCP/IP core of the Internet architecture has remained stable over the last decades, while accommodating unexpected innovation at lower and upper layers. Recent advances in wireless and optical networking technologies have empowered heterogeneous mobile connectivity and boosted access network capacity. The diffusion of web services, cloud and social networks has in turn driven Internet usage towards information delivery and imposed a new communication model based on Information exchange, caching and real-time processing.

Current information delivery solutions are deployed as overlays on top of the existing IP network infrastructure, leveraging CDNs, transparent caching, etc. However, the inefficiency of the overlay approach in terms of performance guarantees and resource utilization has been pointed out in the research community, together with questions about the sustainability of such an evolution model ([12], [17], [19], [10]). Some evidence has been provided that "architectural anchors" like IP addressing and host-centric point-to-point communication inherently curb the Information-driven network evolution.

A mismatch exists between the location addressing of IP and the location-agnostic content addressing by name realized by Information services (e.g. via HTTP). It calls for a paradigm shift of the Internet communication model, as proposed by Information-Centric Networking (ICN) architectures ([12], [27], [16], [19], [10]). The common objective is to embed content awareness into the network layer to enable efficient, mobile and connectionless information-oriented communication. The twist associated with the ICN communication model lies in a small set of principles. ICN advocates a name-based communication, controlled by the receiver (pull-based), realized via name-based routing and forwarding of user requests in a point-to-multipoint fashion, and supported by the presence of a pervasive network-embedded caching infrastructure.

The novel transport paradigm of ICN holds considerable promise for enhanced flexibility in terms of mobility management, improved end-user performance and network resource utilization. This is due to the coupling between (i) a connectionless communication with unknown sender(s), (ii) a unique endpoint at the receiver and (iii) dynamic request routing, decided by network nodes aiming at localizing temporary copies of the content, in a hop-by-hop fashion.

The definition of multipath transport control mechanisms adapted to ICN is at an early stage in the literature [8], [13], as is that of ICN dynamic forwarding mechanisms [25], [9]. A comprehensive analysis of the joint problem of optimal congestion control and request forwarding is still missing.

In this paper, we tackle the problem of joint multipath congestion control and request forwarding for ICN. The key contributions of the paper are the following:

• We formulate a global optimization problem with the twofold objective of maximizing user throughput and minimizing overall network cost. We decompose it into two subproblems for congestion control and request forwarding and solve them separately.

• We derive a family of optimal multipath congestion control strategies to be performed at the receiver and prove the optimality of the proposal in [8].

• We derive a set of optimal dynamic request forwarding strategies and design a distributed algorithm to be implemented by ICN nodes in conjunction with the proposed receiver-driven congestion control.

• Finally, we implement the proposed joint congestion control and request forwarding in CCNx [1] and set up a testbed for large scale experimentation. Performance is assessed through experiments in different network scenarios to appreciate the convergence to the optimal equilibrium, the role of in-path caching and the benefits of ICN multipath.

The remainder of the paper is organized as follows. Problem statement, design goals and choices are summarized in Sec. II. The formulation of the global optimization problem is presented in Sec. III, while in Sec. IV we derive the multipath congestion control solution. Sec. V focuses on the optimal forwarding strategies and the design of the distributed algorithm. The performance assessment by means of experiments is shown in Sec. VI. Finally, Sec. VII surveys related work, while conclusions are drawn in Sec. VIII.

    II. PROBLEM STATEMENT

The ICN transport model fundamentally differs from the current TCP/IP model. To comment on the main differences, let us summarize the basic ICN communication principles.

    A. System description

Our work is primarily based on the CCN/NDN proposal [12], [27], though the defined mechanisms have broader applicability in the context of ICN (http://tools.ietf.org/group/irtf/trac/wiki/icnrg) and, more generally, of content delivery networks employing a similar transport model (e.g. some HTTP-based CDNs). In ICN, Information objects are split into Data packets, uniquely identified by a name, and permanently stored in one (or more) repository(ies). Users express packet requests (Interests) to trigger Data packet delivery on the reverse path followed by the routed requests.

Interests are routed by name towards one or multiple repositories, following one or multiple paths. Rate and congestion control is performed at the end user. Intermediate nodes keep track of outstanding Interests in data structures called PITs (Pending Interest Tables), to deliver the requested Data back to the receiver on the reverse path. Each PIT entry has an associated timer, so that all requests for the same Data during such a time interval are not forwarded upstream as long as the first query is outstanding. In addition, nodes temporarily store Data packets in a local cache, called Content Store.

Upon reception of an Interest packet from an input interface, intermediate nodes perform the following operations: (i) a Content Store lookup, to check whether the requested Data is locally stored. In case of cache hit, the Data is sent through the interface the Interest came from. Otherwise, (ii) a PIT lookup, to verify the existence of an entry for the same content name. In this case, the Interest is discarded since a pending request is already outstanding. If not, a new PIT entry is created and (iii) a FIB lookup via Longest Prefix Matching returns the interface where to forward the Interest (selected among the possible ones). FIB entries are associated to name prefixes (see [12]). As a consequence, Data may come from the repository, or from any intermediate cache along the path holding a temporary copy of the Data packet.
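To make the sequence of operations concrete, the following is a minimal C sketch of this per-Interest lookup pipeline. All data structures and helper names (cs_lookup, pit_entry_exists, fib_longest_prefix_match, etc.) are hypothetical placeholders used only for illustration; they are not CCNx APIs.

/* Sketch of the CS -> PIT -> FIB pipeline described above.
 * All types and helpers are illustrative stubs, not CCNx code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { const char *name; int in_face; } interest_t;

/* Hypothetical lookups: return true on hit. */
static bool cs_lookup(const char *name)                { (void)name; return false; }
static bool pit_entry_exists(const char *name)         { (void)name; return false; }
static void pit_add_entry(const char *name, int face)  { printf("PIT add %s <- face %d\n", name, face); }
static int  fib_longest_prefix_match(const char *name) { (void)name; return 1; /* selected face */ }

static void send_data(const char *name, int face)        { printf("Data %s -> face %d\n", name, face); }
static void forward_interest(const char *name, int face) { printf("Interest %s -> face %d\n", name, face); }

/* (i) Content Store, (ii) PIT, (iii) FIB, in that order. */
void on_interest(const interest_t *it)
{
    if (cs_lookup(it->name)) {                    /* (i) cache hit: reply locally          */
        send_data(it->name, it->in_face);
        return;
    }
    if (pit_entry_exists(it->name))               /* (ii) same request already outstanding */
        return;                                   /*      discard (request aggregation)    */
    pit_add_entry(it->name, it->in_face);         /*      remember where to send Data back */
    int out = fib_longest_prefix_match(it->name); /* (iii) select the forwarding face      */
    forward_interest(it->name, out);
}

int main(void)
{
    interest_t it = { "ccnx:/example/obj/seg0", 0 };
    on_interest(&it);
    return 0;
}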

    B. Potential and challenges of ICN transport model

As a result of the addressing-by-name principle, the ICN transport model overcomes the static binding between an object and a location identifier: the receiver issues name-based packet requests over possibly multiple network interfaces with no a priori knowledge of the content source (hitting cache or repository). The content awareness provided by names to network nodes enables a different use of buffers, not only to absorb input/output rate unbalance but also for temporary caching of in-transit Data packets. Even without additional storage capabilities in routers, the information access by name of ICN allows two new uses of in-network wire-speed storage: (i) Reuse: subsequent requests for the same Data can be served locally with no need to fetch data from the original server/repository; (ii) Repair: packet losses can be recovered in the network, with no need for the sender to identify and retransmit the lost packet.

Under such assumptions, the notion of connection between two endpoints, as in the TCP/IP model, is no longer required; worse, connections would prevent (i) the early reply to the request at the first (unknown) hitting cache on the path or (ii) the dynamic path switching due to end-user mobility. If a first step toward a multiparty content-oriented communication has been made by Swift (http://libswift.org/), ICN directly introduces a connectionless, yet stateful transport model where only the receiver keeps a flow state associated to an ongoing content retrieval. As a consequence, ICN simplifies mobility/connectivity disruption management, not requiring any connection state migration in case of end-user mobility or path failure.

The ICN transport model brings new challenges that TCP, even in its multipath versions, cannot address and that, instead, motivate the definition of novel transport control and forwarding mechanisms. Indeed, TCP is connection-based and multipath solutions (e.g. [23]) perform load balancing over static paths, precomputed by a routing protocol. Instead, in ICN, not only the receiver, but also in-path nodes towards the repository may perform packet-by-packet request scheduling in a purely dynamic fashion, by taking on-the-fly decisions about forwarding interfaces (in-network request scheduling). The second challenge is the accrued delay variability due to in-network caching in ICN: a temporary copy of the content may be retrieved from caches along the path, thus impacting the variability of path length and associated delivery time. As a third challenge, in ICN there is a lack of knowledge of the available source(s), which prevents establishing paths at the beginning of the content retrieval based on routing information, as in multipath TCP.

    C. Design Goals and Choices

The flexibility brought by the absence of predefined connections calls for a continuous probing of paths/interfaces at the receiver and at nodes along the paths, to drive rate/congestion control at the receiver as well as forwarding decisions all over the network.

Our main design goal is to effectively address ICN point-to-multipoint communication with no established paths, so as to realize (i) efficient content delivery and network resource sharing (bandwidth/storage) via (ii) fully distributed network protocols, and (iii) minimum state to be kept at nodes with no additional signaling w.r.t. what is already available in the current legacy CCN data plane. In the following section, we formulate the corresponding optimization problem for joint multipath congestion control and request forwarding under the following assumptions: (i) finite link bandwidth: bandwidth sharing modeled by utility-based fairness criteria between users; (ii) in-network caching: a flow can be terminated at one or multiple repositories or at intermediate network caches; (iii) name-based RIBs with name prefix entries already computed per name prefix and stored in the FIB.

    III. PROBLEM FORMALIZATION

1) Network model and notation. The network is modeled as a directed graph $G = (V, A)$ with bi-directional arcs: if Interests of flow $n$ traverse link $(i, j)$, corresponding Data are pulled down over link $(j, i)$ according to the ICN symmetric routing principle. Links have finite capacities $C_{ij} > 0$, $(i, j) \in A$. A content retrieval (also denoted as flow) $n \in N$ is univocally associated to a user node in $U \subseteq V$ and to one or multiple repositories. The variable $x^n_{ij}$ ($x^n_{ji}$) denotes the Data (Interest) rate of flow $n$ over link $(i, j)$ ($(j, i)$), in packet/s. Given the ICN link flow balance between Interest and Data packets on the reverse link, $x^n_{ij}$ and $x^n_{ji}$ coincide in the fluid representation of flow rates, which can be interpreted as a long-term average of the instantaneous rates. Thus, we will omit the indication of the Interest rate in the formulation of the optimization problem. For each node $i \in V$, we define $\Gamma^+(i)$ and $\Gamma^-(i)$, respectively, as the sets of egress and ingress nodes. Each network node $i \in V$ has a finite-size cache to store in-transit Data. In the current deterministic fluid representation of system dynamics, $h_n(i) \in \{0, 1\}$ defines the binary function equal to 1 when node $i$ may serve Data of flow $n$ and 0 otherwise. As a result, we can denote by $S(n) = \{i : h_n(i) = 1\}$ the set of available sources for flow $n$. Notation is summarized in Tab. I.

Observation 3.1: Remark that, from a macroscopic viewpoint, the system evolves driven by a stochastic user demand affecting link bandwidth and node storage (as analyzed in [7]), while here we focus on a microscopic flow-level description of network resource sharing, more suitable to protocol design and optimization. From a macroscopic point of view, the $h_n(i)$ become the cache hit probabilities.

2) Optimization problem. The global objective of the optimization problem is a joint user performance maximization and network cost minimization. We consider a concave user utility function $U_n$ and a convex link cost function $C_{ij}(\cdot)$, and we compute the global objective as the difference between the total user utility and the total network cost (1). The problem is formulated as a multi-commodity flow problem with linear constraints and concave objective:

$$\max_{\mathbf{y}} \;\; \sum_{n \in N} U_n(y^n_i) - \sum_{(i,j) \in A} C_{ij}(\rho_{ij}) \qquad (1)$$

$$\sum_{n \in N} x^n_{ij} = \rho_{ij} \quad \forall (i,j) \in A \qquad (2)$$

$$\sum_{\ell \in \Gamma^-(i)} x^n_{\ell i} = y^n_i \quad \forall i, n \qquad (3)$$

$$\sum_{j \in \Gamma^+(i)} x^n_{ij}\,(1 - h_n(i)) = y^n_i \quad \forall i \notin U, \; \forall n \qquad (4)$$

$$\rho_{ij} \le C_{ij} \quad \forall (i,j) \in A \qquad (5)$$

$$x^n_{ij} \ge 0 \quad \forall i, j, n \qquad (6)$$

With $\rho_{ij}$ we denote the link load, i.e. the total flow rate flowing through link $(i, j)$ (2), while $y^n_i$, denoting flow $n$'s rate arriving at node $i$, is defined as the sum of the ingress rates $x^n_{\ell i}$, $\ell \in \Gamma^-(i)$ (3); in (1), $i$ denotes the user node of flow $n$. $y^n_i$ is also equal to the sum of the egress rates $x^n_{ij}$, $j \in \Gamma^+(i)$, unless $h_n(i) = 1$ (hence Interests are locally satisfied and not forwarded upstream). This holds for all nodes $i$ except the user node of flow $n$, for which $x^n_{ij} = 0, \; \forall j$. Data rates are subject to link capacity constraints (5) and must be non-negative (6).

3) Problem decomposition. The problem can be decomposed according to the two objectives: utility maximization of end-user throughput (at the receiver) and network cost minimization (at in-path nodes). Indeed, a primal decomposition of the global optimization problem is possible when fixing the auxiliary variables $y^n_i$. The sub-problem assuming fixed $y^n_i$ is the utility maximization of end-user throughput:

$$\max_{\mathbf{y}} \;\; \sum_{n} U_n(y_n) \qquad (7)$$

$$\sum_{n : (i,j) \in L(n)} y_n \le C_{ij} \quad \forall (i,j) \in A \qquad (8)$$

$$y_n \ge 0 \quad \forall n \in N \qquad (9)$$

where we denoted by $L(n) = \{(i,j) \in A : x^n_{ij} > 0\}$ the set of all links used by flow $n$. The utility maximization in (7) is Kelly's formulation of distributed congestion control ([15], [20]). We will show in Sec. IV how to derive an optimal congestion controller for ICN, based on Interest transmission control at the receiver. The master problem is the overall network cost minimization:

$$\min_{\mathbf{x}} \;\; \sum_{(i,j) \in A} C_{ij}\Big(\sum_{n \in N} x^n_{ij}\Big) \qquad (10)$$

$$\sum_{\ell \in \Gamma^-(i)} x^n_{\ell i} = y^n_i \quad \forall i, n \qquad (11)$$

$$\sum_{j \in \Gamma^+(i)} x^n_{ij}\,(1 - h_n(i)) = y^n_i \quad \forall i \notin U, \; \forall n \qquad (12)$$

$$x^n_{ij} \ge 0 \quad \forall (i,j) \in A, \; \forall n \in N \qquad (13)$$

Network cost minimization is a multi-commodity flow problem, whose flows may have multiple sources (caches or Data repositories) and one unique receiver $i \in U$.
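As an illustration only (not a choice made in this paper), one admissible instantiation of the framework combines an $a$-fair utility with a quadratic link cost,

$$U_n(y) = \frac{y^{1-a}}{1-a}, \;\; a > 0,\, a \neq 1, \qquad C_{ij}(\rho_{ij}) = \frac{\rho_{ij}^2}{2\,C_{ij}},$$

for which the objective in (1) is concave while the constraints remain linear, so the program above is convex.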

TABLE I. Notation.

Parameter — Definition
$V, A, U$ — Vertices, Arcs, Users sets
$N$ — Flows (content retrievals) set
$L(n) = \{(i,j) \in A : x^n_{ij} > 0\}$ — Set of links used by flow $n$
$C_{ij}$ — Capacity of link $(i,j) \in A$
$h_n(i) \in \{0, 1\}$ — Binary cache function (flow $n$, node $i$)
$S(n) = \{i : h_n(i) = 1\}$ — Set of sources for flow $n$
$x^n_{ij}$ ($x^n_{ji}$) — Link Data (Interest) rate
$y_n$, $y^n_i$ — Total Data (Interest) rate of flow $n$ (at node $i$)
$\rho_{ij}$ — Link load over $(i,j) \in A$
$\Gamma^+(i) = \{j \in V : (i,j) \in A\}$ — Set of egress nodes for $i \in V$
$\Gamma^-(i) = \{\ell \in V : (\ell,i) \in A\}$ — Set of ingress nodes for $i \in V$

4) Solution. To solve both problems via distributed algorithms: (i) we compute the respective Lagrangians $L_U$ and $L_C$; (ii) we decompose $L_U$ and $L_C$ with respect to users and in-path nodes, respectively; (iii) we identify local Lagrangians at users and in-path nodes to be respectively maximized/minimized. Let us consider $L_U$ and $L_C$ separately.

$$L_U(\mathbf{y}, \boldsymbol{\lambda}) = \sum_n U_n(y_n) - \sum_{(i,j)} \lambda_{ij}\Big(\sum_{n:(i,j)\in L(n)} y_n - C_{ij}\Big)$$
$$= \sum_n U_n(y_n) - \sum_n \sum_{(i,j)\in L(n)} \lambda_{ij}\, y_n + \sum_{(i,j)} \lambda_{ij} C_{ij}$$
$$= \sum_n \big(U_n(y_n) - \lambda_n y_n\big) + \sum_{(i,j)} \lambda_{ij} C_{ij}$$

where $\lambda_n \triangleq \sum_{(i,j)\in L(n)} \lambda_{ij}$. Hence, every user maximizes the local Lagrangian $L^n_U$:

$$L^n_U = U_n(y_n) - \lambda_n y_n \qquad (14)$$

which requires computing the Lagrange multipliers $\lambda_{ij}$ associated to the capacity constraints, whereas the multipliers associated to constraints (9) are always null, as we assume fixed positive $y_n > 0$, $\forall n$. The utility maximization is responsible for adjusting the rates $y^n_i$, which are, instead, assumed to be constant in the network cost minimization problem. Let us then compute $L_C$ and decompose it as a function of the in-path nodes, which are coupled by the flow balance constraint (4). Hence,

$$L_C(\mathbf{x}, \boldsymbol{\mu}) = \sum_{(i,j)} C_{ij}\Big(\sum_{n\in N} x^n_{ij}\Big) - \sum_n \sum_i \mu^n_i \Big(\sum_{l\in\Gamma^-(i)} x^n_{li} - \sum_{j\in\Gamma^+(i)} x^n_{ij}\Big)$$
$$= \sum_{(i,j)} C_{ij}(\rho_{ij}) - \sum_n \sum_{i,l} \mu^n_i x^n_{li} + \sum_n \sum_{i,j} \mu^n_i x^n_{ij}$$
$$= \sum_i \sum_{l\in\Gamma^-(i)} \Big[C_{li}(\rho_{li}) - \sum_n (\mu^n_i - \mu^n_l)\, x^n_{li}\Big]$$

Every node $i$ minimizes the network cost by reducing a local Lagrangian $L^i_C$ given by

$$L^i_C = \sum_{l\in\Gamma^-(i)} \Big[C_{li}(\rho_{li}) - \sum_n (\mu^n_i - \mu^n_l)\, x^n_{li}\Big] \qquad (15)$$

which requires the computation of the multipliers $\mu^n_i$. The multipliers associated to constraints (13) are always null as we assume $x^n_{li} > 0$. This is due to the assumption that some probing traffic must be present even along non-optimal paths, in order to adapt to network congestion variations.
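For completeness (this step is left implicit in the text), the stationarity condition of the per-user problem (14), assuming $U_n$ differentiable and strictly concave, gives the familiar Kelly equilibrium:

$$\frac{\partial L^n_U}{\partial y_n} = U'_n(y_n) - \lambda_n = 0 \;\;\Longrightarrow\;\; y_n^* = (U'_n)^{-1}\Big(\sum_{(i,j)\in L(n)} \lambda_{ij}\Big),$$

i.e., each receiver drives its rate to the point where the marginal utility equals the aggregate congestion price accumulated along the links it uses.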

In the following sections, we derive (i) an optimal Interest transmission control protocol to be performed at the receiver from (14) (Sec. IV), and (ii) an Interest interface selection protocol to be performed at network nodes from (15) (Sec. V).

    IV. RECEIVER-DRIVEN CONGESTION CONTROL

In ICN, Data packets are pulled down by Interest packets along the Interests' reverse path. Hence, one can regulate the Interest transmission rate at the receiver to realize the optimal Data delivery rate $y^n(t)$. To derive a family of optimal controllers, let us apply a gradient algorithm to solve the utility maximization sub-problem in (14) (see [5]). It results that:

$$\frac{d}{dt}\lambda_{ij}(t) = \gamma_{ij}(t)\Big(\sum_{n:(i,j)\in L(n)} y^n(t) - C_{ij}\Big) \qquad (16)$$

$$\frac{d}{dt}y^n(t) = \kappa_n(t)\big(U'_n(y^n(t)) - \lambda_n(t)\big) \qquad (17)$$

where $\gamma_{ij}(t)$ is a gain function of link $(i,j)$. Let us take for instance $\gamma_{ij}(t) = \frac{1}{C_{ij}}$; then $\lambda_{ij}(t)$ can be interpreted as $(i,j)$'s link delay, as its evolution (16) is determined by a fluid queue evolution equation with input rate $\sum_{n:(i,j)\in L(n)} y^n(t)$ and service rate $C_{ij}$. Thus, $\lambda_n(t)$ accounts for the total network delay experienced by flow $n$.
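To make the delay interpretation explicit, let $Q_{ij}(t)$ denote the backlog of the fluid queue at link $(i,j)$ (a quantity not introduced above, used here only for illustration). Then

$$\frac{d}{dt}Q_{ij}(t) = \sum_{n:(i,j)\in L(n)} y^n(t) - C_{ij} \;\;(Q_{ij} > 0), \qquad \lambda_{ij}(t) = \frac{Q_{ij}(t)}{C_{ij}},$$

so that $\lambda_{ij}$ evolves exactly as in (16) with $\gamma_{ij} = 1/C_{ij}$ and coincides with the queuing delay of link $(i,j)$; $\lambda_n(t) = \sum_{(i,j)\in L(n)} \lambda_{ij}(t)$ is then the end-to-end queuing delay experienced by flow $n$.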

Observation 4.1: Link, flow and route delays. The request/reply nature of the ICN communication model suggests a natural way to measure $\lambda_n(t)$ at the receiver via the Interest/Data response time, while the $\lambda_{ij}(t)$ are not known and hard to measure at the receiver.

However, as mentioned in Sec. II-B, the accrued delay variability of ICN multipath communication, due to the lack of a priori knowledge of the sources and to the coupling with in-network caching, makes it inefficient to control the Interest rate by simply monitoring the total flow delay $\lambda_n(t)$ (as also noticed in [8]).

Therefore, we decompose $\lambda_n(t)$ into the sum of route delays $\lambda_n(r,t)$, where a route $r \in R_n$ identifies a unique sequence of nodes from a source $s \in S(n)$ to the receiver node, for flow $n$:

$$\lambda_n(t) = \sum_{r \in R_n} \lambda_n(r,t)\,\alpha(r,t),$$

where $\alpha(r,t)$ is the fraction of flow routed over route $r$ at time $t$ (as decided in a distributed fashion by the request forwarding algorithm). Also,

$$\lambda_n(r,t) = \sum_{(i,j) \in r,\; r \in R_n} \lambda_{ij}(t).$$

The receiver clearly has no knowledge, over time, of the cache serving a given Data packet. Still, it is desirable to determine the available fast and slow routes, because such information can be used locally to better adapt the Interest transmission strategies. Different routes can be distinguished by explicit labeling, as proposed in [8]. Eq. (17) becomes:

$$\frac{d}{dt}y^n(t) = \kappa_n(t)\Big(U'_n(y^n(t)) - \sum_{r \in R_n} \lambda_n(r,t)\,\alpha(r,t)\Big) \qquad (18)$$

Note that, while the rate decrease is proportional to a linear combination of route delays, the rate increase is unique per flow, as it depends on the overall flow rate maximization. The choice of the utility function $U(y)$ specifies the form of the controller. In light of the TCP/IP literature (cfr. [20]), we can for instance choose an AIMD (Additive Increase Multiplicative Decrease) controller, by selecting $\kappa_n(t) = K_1\,(y^n(t))^2$ and $U'_n(y) = K_2/(\lambda_n(t)\, y)^2$. Hence, for all flows, we derive the following Interest transmission rate control:

$$\frac{d}{dt}y^n(t) = \frac{K_1 K_2}{\lambda_n(t)^2} - K_1\,(y^n(t))^2 \sum_{r \in R_n} \lambda_n(r,t)\,\alpha(r,t) \qquad (19)$$
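As a sanity check of the AIMD choice as reconstructed above, substituting $\kappa_n(t) = K_1 (y^n(t))^2$ and $U'_n(y) = K_2/(\lambda_n(t)\,y)^2$ into (18) indeed yields (19):

$$\frac{d}{dt}y^n = K_1 (y^n)^2\Big(\frac{K_2}{(\lambda_n y^n)^2} - \sum_{r\in R_n}\lambda_n(r,t)\,\alpha(r,t)\Big) = \frac{K_1 K_2}{\lambda_n(t)^2} - K_1\,(y^n(t))^2\sum_{r\in R_n}\lambda_n(r,t)\,\alpha(r,t).$$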

    A. Protocol description

Let us define in more detail a receiver-driven congestion control mechanism derived from (19). Data are requested via Interests (one Interest per Data packet) in the order decided by the application, according to a window-based AIMD mechanism. The Interest rate $y^n(t)$ of flow $n \in N$ is controlled via a congestion window of Interests, $W_n(t)$, kept at the receiver, which defines the maximum number of outstanding Interests the receiver is allowed to send. The congestion window dynamics can be obtained from Eq. (19) by assuming the relation $W_n(t) = y^n(t)\,R_n(t)$, with $R_n(t) \approx \lambda_n(t)$ denoting the overall round trip delay experienced by flow $n$, averaged over all its routes $r \in R_n$. This holds in virtue of Little's law and is commonly found in the TCP literature (e.g. [4]).

Window Increase. $W_n(t)$ is increased by $\alpha/W_n(t)$ upon each Data packet reception. This corresponds to an increment of $\alpha$ ($\alpha = 1$ by default) each time a complete window of Interests is acknowledged by Data reception.

Window Decrease. According to (19), the Interest rate/window decrease is proportional to the route delays $\lambda_n(r,t)$. The rationale behind this is that, by tracking the variations of such delay, mainly due to bottleneck(s) queuing delay, the receiver can effectively adapt the Interest window/rate to anticipate the network congestion status. In this way, the receiver realizes a Remote Active Queue Management, similarly to [3], based on the estimate of per-route round trip delays. The bottleneck(s) queuing delay of route $r$ is inferred from the round trip delay measured over time, $R_r(t)$, and determines a per-packet decrease probability function, $p_r(t)$. In case of window decrease, $W_n(t)$ is multiplied by a decrease factor $\beta < 1$.

Route Delay Monitoring. More precisely, when an Interest is sent out, the receiver measures the instantaneous round trip delay as the time elapsed until the reception of the corresponding Data packet. To ease the notation, let us omit the index $n$ in the following. Estimates of the minimum and maximum round trip delay associated to a given route $r$ ($R_{min,r}$ and $R_{max,r}$) are maintained via a history of samples (a sliding measurement window of 30 samples is kept by default in our implementation), excluding retransmitted packets. Note that the samples can be distinguished per route via the route label introduced in ICN Data packets, as proposed in [8]. Also, consider that the number of monitored routes per flow turns out to be limited to a few by a minimum utilization threshold set to 20 packets (cfr. [8]). This prevents a large flow state from being kept at the receiver. A complete window of samples is required before triggering an Interest window decrease.
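A minimal C sketch of such a per-route delay monitor is given below; the fixed-size sample window (30 samples, as stated above) is taken from the text, while the structure and function names are illustrative and do not correspond to the actual implementation.

/* Per-route RTT monitor: sliding window of samples, min/max estimates.
 * Illustrative sketch only; names are hypothetical, not CCNx code. */
#include <stdio.h>

#define RTT_WINDOW 30              /* sliding measurement window (samples) */

typedef struct {
    double samples[RTT_WINDOW];
    int    count;                  /* samples collected so far (<= RTT_WINDOW) */
    int    next;                   /* circular index */
} route_monitor_t;

static void route_add_sample(route_monitor_t *m, double rtt)
{
    m->samples[m->next] = rtt;
    m->next = (m->next + 1) % RTT_WINDOW;
    if (m->count < RTT_WINDOW) m->count++;
}

/* Recompute Rmin,r and Rmax,r over the current window. */
static void route_min_max(const route_monitor_t *m, double *rmin, double *rmax)
{
    *rmin = *rmax = m->samples[0];
    for (int k = 1; k < m->count; k++) {
        if (m->samples[k] < *rmin) *rmin = m->samples[k];
        if (m->samples[k] > *rmax) *rmax = m->samples[k];
    }
}

int main(void)
{
    route_monitor_t r = { {0}, 0, 0 };
    double rtts[] = { 0.050, 0.062, 0.055, 0.080, 0.058 };   /* seconds */
    for (int k = 0; k < 5; k++) route_add_sample(&r, rtts[k]);
    double rmin, rmax;
    route_min_max(&r, &rmin, &rmax);
    printf("Rmin,r = %.3f s, Rmax,r = %.3f s\n", rmin, rmax);
    return 0;
}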

The measured instantaneous round trip delay $R_r(t)$ and the estimates of $R_{min,r}$ and $R_{max,r}$ concur to determine, over time, the probability $p_r(t)$ of a window decrease. $p_r(t)$ is assumed to be a monotonically increasing function of $R_r(t)$ (e.g. linear as in [8]), which we consider in our experiments: $p_r(t)$ goes from a minimum value $p_{min}$ (by default set to $10^{-5}$) up to a maximum value $p_{max} \le 1$ when $R_r(t)$ is above $R_{max,r}(t)$,

$$p_r(t) = p_{min} + \Delta p_{max}\,\frac{R_r(t) - R_{min,r}(t)}{R_{max,r}(t) - R_{min,r}(t)} \qquad (20)$$

with $\Delta p_{max} = p_{max} - p_{min}$. $R_{min,r}(t)$, $R_{max,r}(t)$ indicate the estimates at time $t$. Whenever $R_{min,r}(t) = R_{max,r}(t)$ in a window of samples, we take a decrease probability equal to $p_{min}$. The resulting evolution of $W(t)$ is:

$$\frac{d}{dt}W(t) = \frac{\alpha}{R(t)} - \beta\, W(t) \sum_{r \in R} p_r(t)\,\alpha(r,t)\,\frac{W(t)}{R_r(t)} \qquad (21)$$

By taking $y(t) = W(t)/R(t)$, $K_1 = \beta$, $K_2 = \alpha/\beta$, one finds (19). The addition of a slow start phase and the details of the PIT timer setting are taken from [8], while here we comment on a specific feature of ICN.
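The following C fragment sketches how (20) and the window adaptation could be wired together at the receiver. It is a sketch under stated assumptions: $\alpha = 1$ and $p_{min} = 10^{-5}$ follow the text, while the values of $\beta$ and $p_{max}$ and all names are illustrative choices, not the icp_multipath_ccn implementation.

/* Receiver-side AIMD window adaptation driven by per-route delay (Eqs. 20-21).
 * Illustrative sketch; parameter values below are assumptions, not defaults
 * of the actual implementation. */
#include <stdio.h>
#include <stdlib.h>

#define ALPHA 1.0        /* additive increase per acknowledged window (text)     */
#define BETA  0.9        /* multiplicative decrease factor, beta < 1 (assumed)   */
#define P_MIN 1e-5       /* minimum per-packet decrease probability (text)       */
#define P_MAX 0.5        /* maximum per-packet decrease probability (assumed)    */

/* Eq. (20): decrease probability from instantaneous and min/max route RTTs. */
static double decrease_prob(double rtt, double rmin, double rmax)
{
    if (rmax <= rmin) return P_MIN;               /* degenerate window of samples */
    double p = P_MIN + (P_MAX - P_MIN) * (rtt - rmin) / (rmax - rmin);
    if (p < P_MIN) p = P_MIN;
    if (p > P_MAX) p = P_MAX;
    return p;
}

/* Called once per Data packet received on route r; returns the updated window. */
static double on_data(double win, double rtt, double rmin, double rmax)
{
    win += ALPHA / win;                            /* additive increase            */
    double p = decrease_prob(rtt, rmin, rmax);
    if ((double)rand() / RAND_MAX < p)             /* probabilistic decrease       */
        win *= BETA;
    return (win < 1.0) ? 1.0 : win;                /* keep at least one Interest   */
}

int main(void)
{
    double win = 2.0;
    for (int k = 0; k < 1000; k++)
        win = on_data(win, 0.060 + 0.020 * rand() / RAND_MAX, 0.050, 0.090);
    printf("final window: %.2f Interests\n", win);
    return 0;
}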

Interest scheduling. The Interest controller at the receiver may perform request scheduling to decide what to request and when, according to application constraints and user preferences. As in BitTorrent or Swift, but directly within the transport layer, the client exploits local information to prioritize Data requests in the communication swarm. Smart Interest scheduling in the transmission window is a powerful differentiator w.r.t. TCP-like content-agnostic transmission. All its possibilities are beyond the scope of this paper. However, in our work we exploit it to perform Interest retransmission for applications with loss recovery time constraints. This is done by introducing the rule that Interests to be retransmitted are always put at the front of the transmission window after PIT timer expiration.

    V. DYNAMIC REQUEST FORWARDING

As observed in Sec. II-B, in ICN not only the receiver, but also in-network nodes may perform dynamic packet-by-packet request scheduling, by taking on-the-fly decisions on the selection of forwarding interfaces (in-network request scheduling). Under our problem formulation in Sec. III, the distributed nature of request forwarding is enabled by the decomposition of $L_C$ w.r.t. in-path nodes, and at each node the local objective to minimize is given by (15). To derive a family of optimal distributed forwarding algorithms, let us first prove that

Proposition 5.1: The local minimization of (15) at every network node $i \in V$ is equivalent to the minimization of $\mu^n_i$, for all $n \in N$.

Proof: Let us start by imposing the Karush-Kuhn-Tucker conditions on Eq. (15), that is:

$$\frac{\partial C_{ji}}{\partial x^n_{ji}} = \frac{\partial C_{ji}}{\partial \rho_{ji}}\,\frac{\partial \rho_{ji}}{\partial x^n_{ji}} = \frac{\partial C_{ji}}{\partial \rho_{ji}} = \mu^n_i - \mu^n_j \qquad (22)$$

for all ingress interfaces $j \in \Gamma^-(i)$ such that $x^n_{ji} > 0$, and

$$\frac{\partial C_{ji}}{\partial x^n_{ji}} \le \frac{\partial C_{li}}{\partial x^n_{li}}, \quad \forall l \in \Gamma^-(i) \qquad (23)$$

Eq. (23) states that the interfaces selected for Interest forwarding at node $i$ are those minimizing the cost derivatives. This implies that there can be one or multiple interfaces with equal and minimum cost derivative. For every such interface, let us iterate the reasoning and apply (22) at all subsequent hops $j, j{+}1, j{+}2, \ldots, r$ up to one of the sources $s \in S(n)$. At the last hop, the minimization of the cost derivative corresponds directly to the minimization of $\mu^n_r$, while at the previous hops the variable to be minimized is the difference $\mu^n_k - \mu^n_{k+1}$ for $k = j{+}1, \ldots, r$, and $\mu^n_i - \mu^n_j$ at the first hop. Clearly, starting from the source, once $\mu^n_r$ has been minimized and its value denoted by $\mu^{n,*}_r$, the minimization of $(\mu^n_{r-1} - \mu^{n,*}_r)$ is equivalent to the minimization of $\mu^n_{r-1}$, and so on until $\mu^n_i$.

Thus, we can conclude that a family of optimal distributed algorithms for request forwarding can be derived by simply minimizing $\mu^n_i$ at each node $i$ and for all $n \in N$.

To compute $\mu^n_i$ we apply a gradient algorithm on the Lagrangian $L_C(\mathbf{x}, \boldsymbol{\mu})$ and obtain (see [5]):

$$\frac{d\mu^n_i}{dt} = \eta^n_i\Big(\sum_{j\in\Gamma^+(i)} x^n_{ij} - \sum_{l\in\Gamma^-(i)} x^n_{li}\Big) \qquad (24)$$

where $\eta^n_i$ is a positive step size. Observe that, by invoking the flow balance between Interest and Data over a given interface, $x^n_{ij} = x^n_{ji}$. Hence, $\mu^n_i(t)$ turns out to be a measure of the total flow rate unbalance at node $i$, which is available at an ICN node and equal to the number of pending Interests in the PIT. Observe that the PIT increase rate in (24) corresponds, in practice, to the Interest rates $x^n_{ji}$, while the decrease term corresponds to the Data rates (consuming PIT entries). This implies that the optimal strategy consists in minimizing the total number of pending Interests at node $i$. The intuition behind this is that the number of pending Interests reflects (i) content proximity: the path length and response time associated to a given content; and (ii) the congestion status, i.e. the quality of the residual path towards the repository/closest copy.

To derive an optimal interface selection algorithm, let us now distinguish the Interests per output interface and apply an equivalent of Little's law to couple the number of Interests and the associated response time. Let us omit the indices $n$ and $i$, as we focus on flow $n$ and node $i$:

$$\mu = \sum_{j} \mu_j = \sum_{j} x_j\,\mathrm{VRTT}_j(x_j),$$

where the sums run over the output interfaces of node $i$ and $\mathrm{VRTT}_j(x_j)$ is the average response time at output interface $j$. Hence, this requires solving the following problem: minimize $\sum_j x_j\,\mathrm{VRTT}_j(x_j)$, subject to $\sum_j x_j = y$ and $x_j \ge 0$. Assuming that: 1) $\mathrm{VRTT}_j(x_j)$ is monotonically increasing with $x_j$, 2) $\mathrm{VRTT}_j(0) = \mathrm{VRTT}_{min}$, and 3) $\mathrm{VRTT}_j(x_j)$ is differentiable, the optimal allocation can easily be obtained by imposing the KKT conditions, which give $x_j = \Phi_j^{-1}(\nu)$, with $\nu$ the unique solution of $\sum_j \Phi_j^{-1}(\nu) = y$, where $\Phi_j(x_j) = \partial\big(x_j\,\mathrm{VRTT}_j(x_j)\big)/\partial x_j$.

Observation 5.2: Observe that this algorithm has, in practice, two major drawbacks: (i) it requires explicit knowledge of the VRTTs; (ii) it exploits just a subset of the paths, depending on the value of $y$. The latter results from the fact that $\Phi_j(x_j) = \mathrm{VRTT}_{min} + x_j\,\mathrm{VRTT}'_j(x_j)$, as $\mathrm{VRTT}_j(0) = \mathrm{VRTT}_{min}$. Thus, $\exists\,\bar\nu > 0 : \Phi_j^{-1}(\nu) = 0,\; \forall \nu \le \bar\nu$. This means that slower interfaces could never be probed for some $y$.

As we observed in Sec. II-B, two important challenges ICN needs to cope with are the unknown sources and the accrued delay variability due to in-network caching.

To guarantee optimal interface selection over time through continuous monitoring of the interfaces, one needs a slightly different objective at node $i$, which minimizes the number of pending Interests associated to the most loaded output interface, i.e. $\min \max_j \mu_j(x_j)$, subject to $\sum_j x_j = y$, $x_j \ge 0$. The objective can be attained very easily by observing that the optimal allocations are such that $\mu_j = \bar\mu$ for all interfaces $j$. Consider now an interval of time $T$ such that every interface $j$ has been probed and selected, i.e. $x_j(t) = y(t)$, for a fraction of time $T_j$ over $T = \sum_j T_j$, and zero elsewhere. Thus,

$$\bar\mu = \lim_{T\to\infty} \frac{1}{T}\int_0^T \mu_j(t)\,dt = \lim_{T\to\infty} \frac{\int_0^T x_j(t)\,\mathrm{VRTT}_j(x_j(t))\,dt}{\sum_l T_l} = \frac{T_j}{\sum_l T_l}\; y\,\mathrm{VRTT}_j(y) = \alpha_j\,\mu_j \qquad (25)$$

where $\mu_j$ is the average measured number of pending Interests at interface $j$, $\alpha_j = \bar\mu/\mu_j$ represents the probability to select interface $j$, and $1/\bar\mu = \sum_j 1/\mu_j$ is a normalization constant. This algorithm does not require the explicit knowledge of the $\mathrm{VRTT}_j$ and probes slow interfaces as well, with average rate $\bar x_j = \bar\mu/\mathrm{VRTT}_j$. The algorithm only requires to locally measure the average number of pending Interests per used interface, which is information available in CCN by design.
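It is easy to verify that the selection rule is a proper probability distribution and that it equalizes the average number of pending Interests across interfaces:

$$\sum_j \alpha_j = \bar\mu \sum_j \frac{1}{\mu_j} = 1, \qquad \alpha_j\,\mu_j = \bar\mu \;\; \forall j,$$

so interfaces exhibiting a larger average number of pending Interests (longer or more congested residual paths) are selected proportionally less often, while still being probed.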

    A. Request forwarding algorithm

Request forwarding decisions are based on the selection of longest-prefix-matching interfaces in the FIB. Indeed, FIB entries specify name prefixes rather than full object names, and it clearly appears unfeasible to maintain per-object name information. We apply the optimal interface selection algorithm per output interface and per prefix rather than per object. Notice that applying the same transmission strategy to different objects with the same name prefix preserves a reasonably limited FIB size, at the cost of introducing a potential loss of performance w.r.t. the optimal strategy. This may occur when flows sharing the same name prefix have a significantly different set of sources (see Sec. VI-E-b for more details).

Let us now describe the steps of the algorithm summarized in Algorithm 1. At each Data/Interest packet reception, the number of Pending Interests (PI) associated to a given FIB entry (thus to a name prefix) and to a particular output interface is updated (FIB_PI_Incr/FIB_PI_Decr). Based on the instantaneous value of PI, a moving average (with parameter 0.9) is regularly updated and used to recompute the interface weights (FIB_Weight_Update). A weighted random algorithm is used to select the output interface for each incoming Interest (RND_Weight). For a given prefix/output interface, weights are normalized w.r.t. the sum of all weights associated to the available interfaces for that particular prefix. At the beginning, each interface has the same weight, equal to one, and the randomized forwarding process is uniform over the available output interfaces.

Algorithm 1 Request forwarding algorithm
 1: At Data Packet Reception(name, face_in)
 2: if (!PIT miss) then
 3:     Forward_Data(face_out);
 4:     FIB_PI_Decr(face_in, prefix);
 5:     FIB_Weight_Update(face_in, prefix);
 6: else Drop Data;
 7: end if
 8: At Interest Packet Reception(name, face_in)
 9: if (CACHE miss && PIT miss) then
10:     (prefix, weight_list) = FIB_Lookup(name);
11:     f* = RND_Weight(weight_list);
12:     FWD_I(f*);
13:     PIT_I_Update(name, f*);
14:     FIB_PI_Incr(f*, prefix);
15:     FIB_Weight_Update(f*, prefix);
16: else CCN standard processing;
17: end if
18: At PIT_Timeout(name, prefix, face)
19:     FIB_PI_Decr(face, prefix);
20:     FIB_Weight_Update(face, prefix);
21: procedure FIB_Weight_Update(face, prefix)
22:     avg_PI(face, prefix) ← γ · avg_PI(face, prefix) + (1 − γ) · PI(face, prefix);
23:     w(face, prefix) ← 1 / avg_PI(face, prefix);
24: end procedure
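A compact C sketch of the two core primitives of Algorithm 1, the pending-Interest moving average (parameter 0.9, as in the text) and the weighted random face selection, is shown below. The data layout, the exact form of the averaging and all names are illustrative assumptions, not the strategy_callout() code.

/* Core of Algorithm 1: per-(prefix,face) pending-Interest average and
 * weighted random face selection. Illustrative sketch, not ccnd code. */
#include <stdio.h>
#include <stdlib.h>

#define GAMMA 0.9                  /* moving-average parameter (see text)   */

typedef struct {
    int    pi;                     /* instantaneous pending Interests       */
    double avg_pi;                 /* moving average of pi                  */
    double weight;                 /* forwarding weight = 1 / avg_pi        */
} face_state_t;

static void fib_weight_update(face_state_t *f)
{
    f->avg_pi = GAMMA * f->avg_pi + (1.0 - GAMMA) * (double)f->pi;
    f->weight = (f->avg_pi > 0.0) ? 1.0 / f->avg_pi : 1.0;  /* unprobed faces keep weight 1 */
}

static void fib_pi_incr(face_state_t *f) { f->pi++; fib_weight_update(f); }
static void fib_pi_decr(face_state_t *f) { if (f->pi > 0) f->pi--; fib_weight_update(f); }

/* RND_Weight: pick a face with probability proportional to its weight. */
static int rnd_weight(const face_state_t *faces, int nfaces)
{
    double total = 0.0;
    for (int j = 0; j < nfaces; j++) total += faces[j].weight;
    double u = total * rand() / RAND_MAX;
    for (int j = 0; j < nfaces; j++) {
        if (u < faces[j].weight) return j;
        u -= faces[j].weight;
    }
    return nfaces - 1;
}

int main(void)
{
    face_state_t faces[2] = { {0, 1.0, 1.0}, {0, 1.0, 1.0} };
    /* Simulate: face 0 accumulates more pending Interests than face 1,
     * and is therefore selected less often. */
    for (int k = 0; k < 50; k++) { fib_pi_incr(&faces[0]); if (k % 2) fib_pi_incr(&faces[1]); }
    int picks[2] = {0, 0};
    for (int k = 0; k < 10000; k++) picks[rnd_weight(faces, 2)]++;
    printf("face0: %d  face1: %d selections\n", picks[0], picks[1]);
    return 0;
}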

    VI. EXPERIMENTAL EVALUATION

    A. Implementation in the CCNx prototype

We implemented the above-described protocols in the CCNx prototype [1] by developing additional modules for the release ccnx-0.6.1 (August 8, 2012), which can also be run up to ccnx-0.7.1 (February 14, 2013). The additional modules include: a standalone application, icp_multipath_ccn.{c,h}, running at the user with the congestion control functionalities; a path labeling mechanism in the ccnd daemon; the implementation of the request forwarding strategy in strategy_callout(), which is the function exposing the transmission strategy layer in ccnd; and a novel cache manager in ccnd for the content store. The implementations are briefly summarized below, while a detailed description is out of the scope of this paper.
(i) Congestion control: the protocol presented in Sec. IV-A is implemented at the user, together with an Interest scheduler, which decides the Interests to insert in the transmission window. Path labeling has been implemented via a sequence of unique node identifiers collected by the Data packet on the downstream path and hashed in an additional XML packet field path_label.
(ii) Request forwarding: the CCNx implementation requires a new data structure, ccn_pin_record, to track the evolution of the number of pending Interests on a given face, for a name prefix. In CCNx, prefix entries are stored in nameprefix_entry (which is a name-component trie) and ccn_pin_record is instantiated by default per FIB entry.
(iii) Caching: the ccnd daemon implements FIFO caching with replacement based on a periodic clean-up of the content store. We have implemented a framework to manage the content store with multiple replacement policies, ccnd_cache.{c,h}, including LRU, which is used in the experiments presented below.
(iv) Virtual content repository: we have optimized the ccndelphi application, developed at Washington University [26], which significantly simplifies tests using very large repositories. Code profiling highlights that the main bottleneck of the CCNx prototype is represented by the frequent coding/decoding operations between wire-encoded and XML-decoded packet formats (see [26] for a description of throughput limitations up to 150Mbps in ccnx-0.4.0). Such limitations clearly affect the parameter setting in our tests.

    B. Experimental setting

The experimental environment used to perform our evaluation is a large-scale grid of general-purpose servers, Grid5000 (see [2] for more details on the available network equipment, server hardware and software). The infrastructure allows booting custom Linux kernel images (Debian wheezy 3.2.35-2 x86_64 GNU/Linux in our tests), including the CCNx software described in Sec. VI-A. Server reservation and remote kernel boot services allow us to manage large-scale experiments (a few hundred servers in this paper). Our tests rely on five building blocks:
(i) Network topology: we use virtual interfaces based on tap IP tunnels as a link abstraction layer, and kernel token bucket shapers to create our substrate overlay network topology with customized link rates in the grid.
(ii) CCNx network: a CCNx overlay network is built on top of the IP overlay topology, and name-based routing is configured according to pre-computed FIBs to make content reachable by clients.
(iii) Catalogs and users: objects are made available to users through the virtual repositories described in Sec. VI-A. The application icp_multipath_ccn retrieves a fixed amount of named content from the network before closing.
(iv) Experiment launching: the experiment is configured and monitored using a centralized application that orchestrates the test, issuing passwordless ssh exec commands on remote servers. Tests are run by launching parallel client workloads.
(v) Workload: the dynamic workload used in the tests is derived from data traces collected at the GGSN of an operational mobile network. From such a measurement point we extract a peak hour of HTTP traffic for an area serving about 50 000 different observed users, 200 active at the same time. From this sample we obtain the amount of downloaded content, its popularity and the amount of shareable data. In one hour we measured 60GB of data, 35% of which is sharable (percentage of bytes requested more than once). We tested popularity against Zipf, Geometric and discrete Weibull distributions and found the latter to be the best fit, with shape around 0.3 (long-tailed). Such a workload model is adopted in the tests presented hereafter that employ a dynamic workload, down-scaling the amount of available content and the number of active users due to CCNx throughput limitations.

    C. Scenario 1: Efficiency in multipath flow control

We first focus on a simple multipath network scenario (see Fig. 1), where users are connected to four repositories via four non-disjoint paths. Users at the unique access node #6 retrieve twenty different named objects with a common prefix (e.g. ccnx://amazon.com/{object1,...,object20}). Two sets of link capacities are separately considered, with small and large ranges of values (Case I and II), to assess the efficiency of the per-route remote delay control in presence of highly different delays (up to a 1:25 ratio). In this scenario, we neglect the impact of caches, studied later on. Fig. 2 reports the average link utilization in the two cases, in comparison with the expected optimal usage. Notice that, in both cases, links (0,4), (1,4), (5,6) are bottlenecked and achieve full utilization. Instead, links (2,5), (3,5) are not bottlenecked and the request forwarding algorithm load-balances the requests arriving at node 5 in a nearly 1:1 ratio. Fig. 3 shows the split ratios, $\alpha$, over time at nodes #4, #5 and #6 towards nodes #1, #2 and #4 respectively (the remaining traffic is routed towards #0, #3, #5). As expected, the split ratios converge to the optimal values. However, a slower convergence time can be observed at the client node #6, due to a globally higher response time compared to in-path nodes, which slows down the pending Interest number updates.

    D. Scenario 2: Impact of in-network caching

We consider a typical CDN scenario to evaluate the impact of in-network caching on the proposed congestion control and forwarding mechanisms. The topology is composed of fifteen nodes and eleven routes, available to each user to retrieve content from intermediate caches (if hitting) or repositories. Content catalog, cache sizes and popularity are synthesized from the traffic measurements described above: each repository stores the entire content catalog, composed of 1 000 objects divided into 1 000 pkts of 4kB. Object requests arrive from leaf nodes (users) according to a Poisson process of rate 0.1 objects/s and a Weibull popularity distribution with shape = 0.35 and scale = 1 000. Leaf nodes, i.e. nodes from #7 to #10, have a fixed cache size of 100MB, while nodes #2-#6 have from 200MB to 2.8GB in the different presented tests. Each experiment lasts two hours (almost 3 000 object requests per experiment) and the output is averaged over five runs. Realized split ratios for different cache sizes and the expected optimal values with no caches are reported in the table in Fig. 5. Fig. 6 reports the server load reduction on links (2,5), (3,5) and the total bandwidth savings obtained by increasing the cache sizes.

Fig. 1. Scenario 1: Topology, Case I (Case II).

Link    Case I $x_{i,j}$ [Mbps] Experim/Optim    Case II $x_{i,j}$ [Mbps] Experim/Optim
(0,4)   4.74 / 5         1.79 / 2
(1,4)   9.18 / 10        18.35 / 20
(2,5)   2.44 / 2.5       2.53 / 2.5
(3,5)   2.40 / 2.5       2.46 / 2.5
(4,6)   13.91 / 15       20.12 / 22
(5,6)   4.82 / 5         4.85 / 5

Fig. 2. Scenario 1: Average link utilization.

Fig. 3. Scenario 1: Split ratio convergence (split ratio [%] vs. time [s] at nodes 4, 5 and 6, experimental vs. optimal).

    Fig. 4. Scenario 2 topology.

Link    Optimal (0 cache)    Experimental: 0 cache / 6.4GB / 14.4GB
(4,2)   33.33    36.13 / 37.76 / 38.11
(5,2)   66.66    63.21 / 59.58 / 57.65
(6,2)   50       49.25 / 49.63 / 49.53
(7,4)   50       51.93 / 55.25 / 56.99
(8,4)   50       47.61 / 47.79 / 47.74
(9,5)   50       50.51 / 56.30 / 57.56
(10,5)  66.6     65.61 / 71.32 / 73.87

Fig. 5. Scenario 2: Average split ratios.

Fig. 6. Scenario 2: Bandwidth/Storage tradeoff (total bandwidth saving, repository bandwidth saving, and load on links (2,5) and (3,5) vs. total cache size [GB]).

Bandwidth saving is defined as $\big(\sum_{i,j} [x_{i,j}(S{=}0) - x_{i,j}(S)]\big) / \sum_{i,j} C_{i,j}$, where $S$ is the total cache size in the system, expressed in pkts, and $x_{i,j}(S{=}0)$, $x_{i,j}(S)$ are the link rates in absence and in presence of caching, respectively. Of course, bandwidth savings increase with cache sizes and the load at repositories decreases accordingly, but most of the advantage is achieved with less than 10GB of cache size (50% of the requested data). In Fig. 5, we also report the average split ratios at the different network nodes when varying the cache sizes. Face selection adapts (e.g. at nodes #5, #9, #10) to the availability of network caching by reducing the load on links (2,5) and (3,5), reported to decrease from 0.8 to 0.4 (see Fig. 6). The bandwidth bottlenecks move downstream for the most popular objects (cached downstream), affecting the optimal bandwidth sharing and the split ratios as a result (similar considerations are made in [7]).

E. Scenario 3: Request forwarding scalability

We consider the performance of the presented protocols in a backbone network topology, the Abilene topology in Fig. 7. There are three repositories, at nodes #12, #16, #19, each storing content under a given name prefix (respectively, ccnx:/amazon.com, ccnx:/google.com, ccnx:/warner.com). Each catalog has the same characteristics as that of the previous scenario. User requests follow a Poisson process with different rates: for nodes #11, #13, #14, #15, #17, #18, #20, #21 the rates are respectively 45, 48, 36, 42, 30, 24, 30, 33 objects per 100 seconds. In order to have a stationary number of data transfers in the network, the rates are chosen to keep the total offered load below the network service capacity.

a) Single vs Multiple Path comparison: While aggregate performance always improves when using multiple paths, we report the user performance for the different name prefixes. Fig. 8 shows, as slices stacked one on top of the other, the average delivery times per name prefix and per user access node in the two cases: single and multiple paths. The multipath solution significantly reduces delivery times, except at nodes #14, #15, which experience slightly higher latency due to the increased number of flows exploiting links (3,4), (4,5), which fairly share the available bandwidth. Still, the overall gain using multipath is significant.

b) Object name vs Prefix request forwarding: Bandwidth sharing is clearly affected by in-network caching. Indeed, content can be retrieved from different locations according to its popularity and, in principle, nodes should route requests on a per-object-name basis to capture such a dependency. However, this approach suffers from scalability issues in terms of FIB size and is unfeasible for typical catalog sizes. This consideration motivates our design choice to route requests based on prefixes rather than names. In this scenario, we quantify the impact of the forwarding granularity and observe that the number of object names requiring separate FIB state is much smaller than expected. We give here an intuition. When content popularity under a given prefix is uniformly distributed, caching becomes irrelevant: all flows are routed up to the repositories with a single prefix FIB entry. When content popularity is long-tailed, the small portion of popular cacheable objects would benefit from per-object-name forwarding. Thus, the amount of useful additional FIB state is of the same order as the number of most popular objects stored in a local cache.

Here, we consider a single name prefix (ccnx:/google), a single repository (repo #16, storing 1 000 objects of size 1 000 pkts of 4KB) and a single access node (#3), with different bidirectional link capacities: links (3,4), (4,5) = 25Mbps, (5,7) = 5Mbps and (7,8), (8,3) = 50Mbps. We also insert two caches at nodes #7-#8, of size 100MB (25 objects). Requests arrive at node #7 according to a Poisson process of rate 1 object/s and a Zipf popularity with parameter $\alpha = 1.3$.

Fig. 7. Scenario 3 topology.

Fig. 8. Scenario 3: Average delivery time [sec] at each access node, per name prefix (ccnx:/amazon.com, ccnx:/google.com, ccnx:/warner.com), single path vs. multipath.

Fig. 9. Scenario 3: Object-name vs prefix forwarding granularity (delivery time [s] vs. object rank).

Fig. 10. Scenario 4: Topology.

Fig. 11. Scenario 4: Time evolution of the Interest rate [pkts/sec] from node 16 towards nodes 12, 15 and 17.

Fig. 12. Scenario 4: Delivery time [sec] for requests from nodes 14, 15, 16 and 17, before, during and after the failure.

Under this setting, the most popular objects (small rank) are retrieved from the cache of node #8, twice as fast as over the alternative path to the repository (via #3, #4, #5). We run two sets of experiments: one with per-prefix forwarding and the other with additional FIB state of thirty entries on a per-object basis. As shown in Fig. 9, the performance of the most popular objects is slightly better but, more importantly, they free bandwidth in the upstream for the majority of less popular objects, which obtain considerable performance gains. We believe that a good tradeoff can be found via a hybrid solution where the most popular items (recognized via the number of aggregated Interests in the PIT) are separately monitored and addressed in the FIB, keeping the others under their prefix.

    F. Scenario 4: Reaction to link failure in a mobile back-haul

Traffic back-hauling for LTE mobile networks is usually based on over-provisioned links and highly redundant topologies. The objective is to support traffic growth in a reliable way, over multiple access technologies (e.g. HSPA, LTE). We select this network scenario to test the protocols' responsiveness to link failure. In Fig. 10, we report a representation of a small part of such a network segment with down-sized link rates (from Gbps to Mbps). The repositories are located at the place of regional/national PoPs. A failure event occurs at time t = 500s as in Fig. 10: two links, (16,12) and (7,4), are dropped by removing the associated CCNx FIB entries at nodes #16 and #7. The workload is the same as in scenario #2, with the same average load. Let us focus on node #16. The rate evolution over time (in Fig. 11) shows how the congestion control and forwarding protocols rapidly adapt to the different network conditions. Before the failure, requests from node #16 are equally split among nodes #12, #15, #17, i.e. the split ratios are equal to 1/3. During the failure, they grow up to 1/2 for nodes #15, #17, to compensate for the failure of link (16,12). Once (16,12) becomes available again, the request forwarding protocol sends most of the Interests across such link to probe the response time, until the number of pending Interests on (16,12) reaches the same level attained on the other two faces and the split ratios stabilize to the optimal values.

In Fig. 12, we report the average content delivery time estimated at the receiver at the different access nodes in the three test phases (requests for different nodes are plotted one on top of the other). User performance is determined by the maximum throughput of the topology (an aggregate of 150Mbps at the repositories), which is fairly shared. However, users at node #16 are forced to use longer paths during the failure and pay with more network congestion.

    VII. RELATED WORK

A large body of work addresses joint multipath routing and congestion control in TCP/IP networks. In [11], [14], [22] the authors study the stability of multipath controllers and derive a family of optimal controllers to be coupled with TCP. Various transport protocol implementations based on TCP have been proposed (e.g. mTCP [28], MPTCP [23]), either loss-based or delay-based ([6]). Multipath TCP [23], as proposed by the IETF working group mptcp, improves previous designs in terms of efficient network utilization and of RTT unfairness compensation. All these proposals require establishing separate per-path connections (subflows) with a given number of known sources that do not vary over time. Subflows may be uncoupled, i.e. independently managed, as in [28], or coupled, as in MPTCP. Such approaches are not suitable for ICN, where the sources are a priori unknown and may vary over time, subject to in-network cache dynamics and to the on-the-fly request forwarding performed at network nodes. Also, unlike our work, previous multipath transport efforts assume a sender-based windowed congestion control. However, there exists a fair amount of work on receiver-driven congestion control, which the interested reader can find summarized in [18].

    In the context of ICN, an adaptive forwarding mechanism is sketched out in [25]. Interface ranking is performed at NDN routers according to a three-color scheme. Congestion control in [25] is, according to the authors, still under definition and envisaged via Interest NACKs (Negative Acknowledgments) sent back to the receiver. Such explicit congestion notification approaches may require message prioritization (opening security weaknesses) and bring additional overhead. In a similar spirit, TECC [24] looks at the joint problem of routing and caching in CCN within an optimization framework, mostly focusing on bandwidth and cache provisioning at longer timescales.

    VIII. CONCLUSIONS

    Previous work on ICN mostly magnifies one specific aspect of the paradigm at a time, such as in-network caching or adaptive forwarding, while devoting less attention to the novel communication model as a whole. Overall end-user experience and network-wide resource utilization are often left out of the picture. Also, one may argue that the limited benefits of a single building block taken in isolation, like in-network caching, can be approached by off-the-shelf solutions, thus missing the full potential of the ICN idea.

    In this paper, we tackle the problem of joint multipath congestion control and request forwarding in ICN and design scalable, fully distributed and inter-working mechanisms for efficient point-to-multipoint content delivery. More precisely, we formulate and solve a global optimization problem with the twofold objective of maximizing end-user throughput while minimizing network cost. Our framework explicitly accounts for the interplay between multipath flow control, in-path caching and in-network request scheduling. As a result, we derive: (i) a family of receiver-driven multipath controllers, leveraging a unique congestion window and per-route delay monitoring, which also proves the global optimality of [8]; (ii) a set of optimal dynamic request forwarding strategies based on the number of pending requests, which forms the basis for the design of a fully distributed algorithm run at ICN nodes. Note that, beyond ICN, the problem of multipath congestion control with a priori unknown and variable sources (due to in-path caching) had never been considered before.
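To fix ideas on point (i), the following sketch shows a receiver maintaining a single Interest window adjusted on per-route delay observations; the delay-threshold trigger and constants are illustrative assumptions and do not reproduce the exact controller family derived in Sec.IV.

```python
# Hedged sketch of a receiver-driven controller with one congestion window shared
# by all routes and per-route delay monitoring (illustrative, not our exact design).
class ReceiverWindow:
    def __init__(self, delay_threshold_s, wmin=1.0):
        self.window = wmin
        self.wmin = wmin
        self.threshold = delay_threshold_s     # per-route delay budget (assumption)
        self.route_delay = {}                  # last delay sample observed per route

    def on_data(self, route_id, measured_delay_s):
        self.route_delay[route_id] = measured_delay_s
        if measured_delay_s > self.threshold:
            # Congestion signal on this route: multiplicative decrease of the
            # single window shared by all routes.
            self.window = max(self.wmin, self.window / 2.0)
        else:
            # Additive increase: roughly one extra Interest per window of Data.
            self.window += 1.0 / self.window

    def can_send_interest(self, in_flight):
        return in_flight < self.window

cc = ReceiverWindow(delay_threshold_s=0.2)
cc.on_data(route_id='via-#15', measured_delay_s=0.05)   # below budget: increase
cc.on_data(route_id='via-#12', measured_delay_s=0.35)   # above budget: decrease
print(round(cc.window, 2), cc.can_send_interest(in_flight=0))
```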

    To assess the performance of the design, we have implemented the proposed protocols in CCNx [1] and set up a testbed for large-scale experimentation. The experimental evaluation in different network scenarios confirms the efficiency and global fairness expected from the optimal solution, as well as robustness w.r.t. the inherent delay variability introduced by in-path caching and network congestion.

    However, we have observed important limitations of the CCNx forwarding engine that prevent scaling beyond 150 Mbps of throughput. As future work, we plan to investigate the design of name-based forwarding engines for a high-speed ICN data plane [21]. Moreover, we foresee a theoretical analysis of PIT/FIB dynamics and plan to develop realistic models to predict their behavior.

    REFERENCES

[1] The CCNx project, Palo Alto Research Center. http://www.ccnx.org.
[2] The Grid5000 Experimentation Test-Bed. http://www.grid5000.fr.
[3] D. Ardelean, E. Blanton, and M. Martynov. Remote active queue management. In Proc. of ACM NOSSDAV, 2008.
[4] F. Baccelli, G. Carofiglio, and M. Piancino. Stochastic Analysis of Scalable TCP. In Proc. of IEEE INFOCOM, 2009.
[5] D. Bertsekas and J. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
[6] Y. Cao, M. Xu, and X. Fu. Delay-based congestion control for multipath TCP. In Proc. of IEEE ICNP, 2012.
[7] G. Carofiglio, M. Gallo, and L. Muscariello. Bandwidth and storage sharing performance in information centric networking. In Proc. of ACM SIGCOMM ICN, 2011.
[8] G. Carofiglio, M. Gallo, L. Muscariello, and M. Papalini. Multipath congestion control in content-centric networks. In Proc. of IEEE INFOCOM NOMEN, 2013.
[9] R. Chiocchetti, D. Rossi, G. Rossini, G. Carofiglio, and D. Perino. Exploit the known or explore the unknown: Hamlet-like doubts in information-centric networking. In Proc. of ACM SIGCOMM ICN, 2012.
[10] D. Han, A. Anand, F. Dogar, et al. XIA: Efficient support for evolvable internetworking. In Proc. of USENIX NSDI, 2012.
[11] H. Han, S. Shakkottai, C. V. Hollot, et al. Multi-path TCP: a joint congestion control and routing scheme to exploit path diversity in the Internet. IEEE/ACM Transactions on Networking, 14(6):1260-1271, Dec 2006.
[12] V. Jacobson, D. Smetters, J. Thornton, et al. Networking named content. In Proc. of ACM CoNEXT, 2009.
[13] T. Janaszka, D. Bursztynowski, and M. Dzida. On popularity-based load balancing in content networks. In Proc. of ITC24, 2012.
[14] F. Kelly and T. Voice. Stability of end-to-end algorithms for joint routing and rate control. ACM SIGCOMM CCR, 35(2):5-12, 2005.
[15] F. P. Kelly, A. K. Maulloo, and D. K. Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society, 49(3):237-252, 1998.
[16] T. Koponen, M. Chawla, B. Chun, A. Ermolinskiy, K. Kim, S. Shenker, and I. Stoica. A data-oriented (and beyond) network architecture. In Proc. of ACM SIGCOMM, 2007.
[17] T. Koponen, S. Shenker, H. Balakrishnan, et al. Architecting for innovation. ACM SIGCOMM CCR, 2011.
[18] A. Kuzmanovic and E. W. Knightly. Receiver-centric congestion control with a misbehaving receiver: Vulnerabilities and end-point solutions. Elsevier Computer Networks, 2007.
[19] E. Nordstrom, D. Shue, P. Gopalan, et al. Serval: An End-Host Stack for Service-Centric Networking. In Proc. of USENIX NSDI, 2012.
[20] R. Srikant. The Mathematics of Internet Congestion Control (Systems and Control: Foundations and Applications). Springer-Verlag, 2004.
[21] M. Varvello, D. Perino, and L. Linguaglossa. On the design and implementation of a wire-speed pending interest table. In Proc. of IEEE INFOCOM NOMEN, 2013.
[22] W.-H. Wang, M. Palaniswami, and S. H. Low. Optimal flow control and routing in multi-path networks. Performance Evaluation, 52(2-3):119-132, 2003.
[23] D. Wischik, C. Raiciu, A. Greenhalgh, and M. Handley. Design, implementation and evaluation of congestion control for multipath TCP. In Proc. of USENIX NSDI, 2011.
[24] H. Xie, G. Shi, and P. Wang. TECC: Towards collaborative in-network caching guided by traffic engineering. In Proc. of IEEE INFOCOM Mini conference, 2012.
[25] C. Yi, A. Afanasyev, L. Wang, B. Zhang, and L. Zhang. Adaptive forwarding in named data networking. ACM SIGCOMM CCR, 42(3), 2012.
[26] H. Yuan, T. Song, and P. Crowley. Scalable NDN Forwarding: Concepts, Issues and Principles. In Proc. of IEEE ICCCN, 2012.
[27] L. Zhang et al. Named Data Networking (NDN) Project, 2010. http://named-data.net/ndn-proj.pdf.
[28] M. Zhang, J. Lai, A. Krishnamurthy, L. Peterson, and R. Wang. A transport layer approach for improving end-to-end performance and robustness using redundant paths. In Proc. of USENIX ATEC, 2004.