An intelligent rule management scheme for Software Defined Networking

Lei Wang a, Qing Li b,∗, Richard Sinnott c, Yong Jiang a, Jianping Wu d

a Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
b Southern University of Science and Technology, China
c Computing and Information Systems Department, University of Melbourne, Australia
d Department of Computer Science and Technology, Tsinghua University, Beijing, China

Computer Networks 144 (2018) 77–88. Journal homepage: www.elsevier.com/locate/comnet. https://doi.org/10.1016/j.comnet.2018.07.027

Article history: Received 9 December 2017; Revised 11 June 2018; Accepted 29 July 2018; Available online 31 July 2018

Keywords: Software-Defined Networking; Rule management; Rule update; Cache

Abstract

Software Defined Networking (SDN) enables network innovation and brings flexibility through the separation of the control and data planes and logically centralized control. However, this network paradigm complicates flow rule management. Current approaches generally install rules reactively after table misses or pre-install them by flow prediction. Such approaches consume nontrivial network resources during interactions between the controller and switches (especially for maintaining consistency). In this paper, we explore an intelligent rule management scheme (IRMS), which extends the one-big-switch model and employs a hybrid rule management approach. To achieve this, we first transform all rules into path-based and node-based rules. Path-based rules are pre-installed, whilst the paths for flows are selected at the edge switches of the network. To maintain consistency of forwarding paths, we update path-based rules as a whole and employ a lazy update policy. Node-based rules are optimally partitioned into disjoint chunks by an intelligent partition algorithm and organized hierarchically in the flow table. In this way, we significantly reduce the interaction cost between the control and data planes. The scheme enforces an efficient sliding window policy to enhance the hit rate of the installed chunks. We evaluate our scheme by comprehensive experiments. The results show that IRMS reduces the total number of flow entries by more than 59.9% on average and the update time by over 56%. IRMS also reduces flow setup requests by more than one order of magnitude.

© 2018 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail addresses: [email protected] (L. Wang), [email protected] (Q. Li).

1. Introduction

As an emerging networking paradigm, Software Defined Networking (SDN) [1] is widely influencing the evolution of network architectures. Separating the control plane from the data plane and centralizing the intelligence of the network into the controller(s) provides essential conveniences for network management and allows acceleration of network innovations [2–4], whereas this centralization introduces obstacles to flow rule management. Considering flexibility, the controller typically installs rules reactively when a new flow incurs a table miss. However, this flexibility sacrifices forwarding performance, because frequent interactions between the control and data planes cause nontrivial resource consumption and increase communication latency.

The state-of-the-art rule management schemes focus on caching more rules in the data plane to reduce the performance penalties of table misses. For instance, CAB [5] splits the rule space into many non-overlapping buckets and treats the rules in a bucket as a whole for installation and updates. A big challenge for these approaches is the consistency of rules along a forwarding path, since any inconsistency of the cached rules may require rule reinstallation or even cause wrong packet behavior.

A more radical approach is installing the rules before flows occur. DIFANE [6] and CacheFlow [7] are representative solutions of these proactive schemes. They first divide the rule set into several subsets according to rule dependencies and switch capacity, and then distribute them on selected switches. However, such proactive schemes lose the ability to generate rules dynamically according to the evolving network states. The high cost of updating is also an obstacle in these schemes, since any modification of an individual match field or change of rule placement is likely to break the existing dependencies, causing rule redistribution. Furthermore, installing all possible rules in advance



imposes a heavy pressure on the flow tables of switches, since SDN switches usually store the rules in ternary content addressable memory (TCAM), which is a scarce and expensive resource. Additionally, abundant match fields and fine-grained rules in SDN aggravate the memory pressure.

In this paper, we propose an Intelligent Rule Management Scheme (IRMS) that aims at providing a novel trade-off between flexibility and forwarding performance. We maintain intelligence at the network edge, where interactions with the controller occur. All the core switches concentrate on forwarding tasks to achieve higher performance.

To achieve this, we classify flow rules into two types: path-based rules and node-based rules. Path-based rules are a group of rules that cooperate to enforce a routing policy on a forwarding path. We calculate the possible paths of the network applications in advance and pre-install all related path-based rules. To guarantee consistency, IRMS treats the group of path-based rules as a whole and ensures they have the same life cycle, i.e., they will be updated together proactively by an update manager module and none of them will be withdrawn reactively, e.g., due to timeout.

To keep the flexibility of SDN, we adopt an improved reactive approach for node-based rules. We partition them into disjoint chunks and employ hierarchical matching to eliminate rule dependencies. We also employ an intelligent policy to install the chunks according to the historical traffic and the TCAM occupancy rate of the edge switches.

We evaluate our scheme through a Mininet-based [8] emulation with different topologies and rule sets. We compare our scheme with both proactive and reactive schemes. Our results show that: 1) IRMS is more efficient in flow table management: it reduces the number of flow entries by more than 59.9% over reactive schemes and more than 60.8% over proactive schemes. 2) IRMS reduces the flow setup requests by more than one order of magnitude and achieves an 80% cache hit rate on average. 3) IRMS reduces the average update time by at least 56% compared with both reactive and proactive schemes. 4) IRMS introduces less than 10% resource overhead, measured by CPU and memory consumption.

The contributions of this paper are as follows:

• To the best of our knowledge, we are the first to propose an intelligent flow rule management scheme for SDN that employs both proactive and reactive approaches for different types of rules.
• We construct a flow rule management model for SDN, which keeps interactions between the controller and the switches at the network edge.
• We prove that the chunk partition problem of node-based rules in IRMS is NP-complete and design an intelligent partition algorithm to solve it.
• We implement a prototype of IRMS and achieve significant performance improvements with low overheads.

The remainder of this paper is organized as follows. Section 2 presents the background and motivation of our paper, and Section 3 formally describes the rule management problem and the design goals. We show the design of our scheme in Section 4 and the key algorithms in Section 5. Then we discuss our hybrid update policy in Section 6. Next, we present the simulation results in Section 7. Finally, we draw conclusions in the last section.

A preliminary version of this work appeared in a workshop paper [9], which briefly discussed the ideas behind this hybrid rule management scheme and the chunk partition algorithm. In this paper, we formally describe the rule management problem and illustrate the design decisions and details. We also develop novel algorithms that efficiently increase the cache hit rate for node-based rules. Besides, we evaluate our prototype on network topologies of different sizes to validate the efficiency of our design.

2. Related work

Flow rule management has been a critical problem from the beginning of SDN. Our work is inspired by several previous works. We cover them briefly as follows.

Reactive rule management: Ethane [10], which is widely regarded as the origin of SDN, employs a typical reactive rule management mechanism. Its flow setup process is usually considered the standard for SDN, namely: 1) Switches forward the Packet-In message to the controller after determining that the packet does not match any active entries. 2) On receipt of the packet, the controller decides whether to allow or deny the flow according to the policy. 3) If the flow is allowed, the controller computes the flow's route and adds a new entry to the flow tables of all switches along the path. However, frequent interactions between the controller and the switches impede scalability and communication performance.

CAB [5] aims at caching more rules to reduce the performance impairment. It partitions the rule space into several disjoint buckets. If a flow matches one bucket, all the rules in the bucket will be installed. This scheme focuses on managing the rules on a single switch. However, it is necessary to consider all the switches in the network. For example, it is required to keep the switches along a given forwarding path consistent; otherwise, even if the flow matches a certain bucket on some switches, it is still forwarded to the controller as long as there is one miss-match in the path.

Proactive rule management: Proactive approaches aim to keep all traffic in the data plane instead of consulting the controller. As one example, DIFANE [6] partitions all rules over several selected switches. Similarly, CacheFlow [7] installs popular rules in the TCAM and other rules on software switches to handle miss-matched flows. A common point of these schemes is that they are required to pay considerable attention to the dependencies between the rules [11]. If a rule is installed on one switch, all higher-priority rules whose match fields intersect with this rule are required to be installed as well. However, well-planned rule partitions make rule updates more difficult, and this is especially challenging when rules change dynamically according to evolving network states.

Rule cache policy: Different from the cover-set based caching algorithm used in CacheFlow [7], Sheu and Chuo [12] propose a wildcard rule caching algorithm and a rule cache replacement algorithm considering temporal and spatial traffic localities. Our scheme also considers traffic localities by using chunks instead of individual rules. Moreover, we design a novel online cache optimization mechanism to install more chunks based on historical matching experience. Another recent research work, CNOR [13], employs a leaf-pushing algorithm to generate non-overlapping wildcard rules. Thus, the controller only needs to install the matching wildcard rule without considering rule dependency. In our scheme, through the chunk partition algorithm, we can solve the rule dependency problem with a lower overhead than CNOR.

One big switch model: This model [14] assumes that the entire network is one big switch whose behavior is forwarding packets from one port to another. Many novel research works follow this idea. For instance, DIFANE [6] and VCRIB [15] can place a node-based rule on any switch along a valid path and leverage all switches in the network to enforce a policy. They both direct traffic through intermediate switches that enforce portions of the policy, deviating from the routing policy given by users. Our work extends this model. We achieve this by formalizing the node-based and path-based rules and employing efficient management approaches for them respectively, according to their characteristics.

Fig. 1. Network model for a simple scenario.

3. Problem statement and motivation

3.1. Problem statement

SDN allows various network applications (e.g., access control, forwarding, packet classification, and traffic management) to process packets through deploying fine-grained rules on the switches. The storage and placement of these rules is a challenge for SDN switches. Currently, an SDN hardware switch can only accommodate 2k ∼ 20k flow entries, due to the high cost and energy consumption of TCAM [16]. Moreover, such rules are often required to be updated frequently due to changes in network status and policies. To clearly describe the SDN rule management problem, we first define some terminology.

Network, N: In this paper, our scheme is designed for software-defined networks, and all intelligence of the network is centralized on the controller. Thus, we can simplify the data plane of the network into a quartet N ≜ (V, E, I, O). V and E denote the set of switches and links respectively. In addition, we explicitly use I to represent the set of ingress switches and O for the egress switches. We assume that all of the traffic comes from the ingress switches and flows into the egress switches. This model is generalized and can be adapted to various types of networks, e.g., data center networks, ISP networks and enterprise networks. The difference between these networks is only the selection of the ingress and egress switches. Fig. 1 shows a simple scenario of the network. In this network, V denotes all the switches, including I1∼2, E1∼3 and S1∼8; E denotes all the links between the switches in V. According to the definition of N, I1 and I2 construct the set I, and E1, E2 and E3 are the elements of O.
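The quartet above can be made concrete with a minimal sketch (not the authors' code); the `Network` class and the link list are illustrative assumptions, covering only the switches named in Fig. 1 and the two paths discussed later in Section 4.1.

```python
# Hypothetical encoding of the data-plane quartet N = (V, E, I, O) for the
# Fig. 1 scenario; link pairs are illustrative, not taken from the paper.
class Network:
    def __init__(self, V, E, I, O):
        self.V, self.E, self.I, self.O = V, E, I, O

    def is_valid(self):
        # Ingress/egress sets must be subsets of V, and every link in E
        # must join two switches that exist in V.
        return (self.I <= self.V and self.O <= self.V and
                all(u in self.V and v in self.V for u, v in self.E))

N = Network(
    V={"I1", "I2", "E1", "E2", "E3"} | {f"S{i}" for i in range(1, 9)},
    E={("I1", "S2"), ("S2", "S4"), ("S4", "S7"), ("S7", "S8"), ("S8", "E2"),
       ("I1", "S3"), ("S3", "S6"), ("S6", "S8")},
    I={"I1", "I2"},
    O={"E1", "E2", "E3"},
)
assert N.is_valid()
```

Selecting a data center, ISP or enterprise network under this model then amounts to choosing different I and O sets over the same structure.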

Rule, R: In SDN, multiple applications running on the controller platform lead to numerous policies and fine-grained rules. In our paper, we abstract the rule as a five-tuple, i.e., R ≜ (M, A, P, L, C). The first element, the match space M, is a list of packet header fields, and which fields are contained in M is determined by the upper-layer applications. Similar to schemes like [5,17], we do not simply treat M as a string of 0s and 1s. In our paper, we keep the semantics of each lookup phase. That means that although each packet header field of M is considered a flat sequence of 0s, 1s and wildcards (∗), we do not break up or stitch the existing fields. For instance, in our experiment, we choose the traditional 5-tuple as the matching fields, and M = (DIP, SIP, DPort, SPort, Protocol). Each header field is looked up at a flow table along the pipeline of the switch, and this mechanism is supported by current SDN switches.¹ The second element A denotes the instruction of a rule, which is the encapsulation of the action sets. The actions of this action set are invoked at each stage of the lookup pipeline. The third element of R is the priority of the rule. During the lookup process in the flow table, the rule with the highest priority will match the packet. L, as the fourth element, refers to the location of a rule. Here, we record the dpid of the switch in L and use nil to denote that the rule has not been installed. The last element C denotes the life cycle of the rule. The flow entry of an SDN switch uses different timeout indicators to show the time to live of a rule, such as idle timeout and hard timeout. In our model, we focus on the precise time the rule lives in the data plane. This element is an essential value for the rule update policy. We omit other information of the flow entry, such as buffer id and cookies.

¹ Pipeline processing is supported by the OpenFlow specification after version 1.1.

If we define the whole header space (H) as a hypercube, i.e., H ≜ {0, 1}^n, a packet header h can be considered a point in the space H, and the match space of a rule R can be considered a region in the header space, since the match space of a rule is defined as the union of expressions that may contain wildcards, and each expression containing wildcards is an interval in the corresponding dimension. Here, we define h ≜ F_1 ∪ F_2 ∪ ... ∪ F_K, where F_i indicates the value of a match field. If a point (packet header h) is involved in a region R[M], we consider that the packet matches the rule R. As the rules are installed in the flow table of the related switch, and current SDN switches usually have multiple flow tables, we present a formal definition to describe the rule match operation clearly.

Definition 1 (Rule matching). We set the conditions:

• T_i ≜ {R_1^i, R_2^i, ..., R_n^i}
• MT ≜ {T_1, T_2, ..., T_K}
• h ≜ F_1 ∪ F_2 ∪ ... ∪ F_K
• T̂_i = {R_j^i | R_j^i ∈ T_i, F_i ∧ R_j^i[M] = F_i}
• R_j̃^i ∈ T̂_i
• R_j̃^i[P] > R_j^i[P], ∀ j ≠ j̃ and R_j^i ∈ T̂_i
• A^i = R_j̃^i[A]
• A = ∪_{i=1..K} A^i if ∃ A^i ≠ nil; A = ∅ otherwise

We then define A ← Match(h, MT), i.e., a packet with header h matches the multiple flow tables MT and gets the instruction A for packet processing.

The whole rule matching operation is a pipeline process, and each stage enforces a single-table matching operation. In Definition 1, MT denotes the multiple flow tables of the switch, and T_i is the single table at stage i. Every single table also contains many rules. Here, we extend the basic bit operation ∧ to the wildcard (∗). We assume that 1 ∧ ∗ = 1 and 0 ∧ ∗ = 0. Thus, T̂_i indicates a set that contains all rules in the table T_i that match the packet, i.e., the rules whose matching region contains the related match field of the packet. The basic operation for single-table matching is looking up the flow table to find the rule with the highest priority that matches the packet. According to Definition 1, this operation is finding the rule with the highest priority in the set T̂_i and getting its action. The result of the rule matching operation is the union of the actions of the matching rules at each stage. Hence, the number of actions in the result set is at most equal to the number of flow tables along the pipeline. If there is no matching rule in any table, the result set A is an empty set, and the packet will follow the default table-miss rule.
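The pipeline semantics of Definition 1 can be sketched as follows; this is an illustrative model, not the authors' implementation, with ternary patterns standing in for match spaces and tuples standing in for rules.

```python
# Illustrative sketch of Definition 1: each table stage matches one header
# field against ternary patterns, and the highest-priority hit at each
# stage contributes its action to the result set A.
def field_matches(pattern, field_bits):
    """Ternary match: '*' matches both 0 and 1 (so 1 ∧ * = 1, 0 ∧ * = 0)."""
    return len(pattern) == len(field_bits) and all(
        p == "*" or p == b for p, b in zip(pattern, field_bits))

def match(header_fields, tables):
    """header_fields[i] is the bit string F_i; tables[i] is stage i,
    a list of (pattern, priority, action) entries."""
    actions = []
    for field, table in zip(header_fields, tables):
        hits = [r for r in table if field_matches(r[0], field)]  # the set T̂_i
        if hits:
            actions.append(max(hits, key=lambda r: r[1])[2])  # highest priority
    return actions  # empty list ⇒ fall back to the table-miss rule

# One-stage example: a fine-grained rule beats a coarser low-priority fallback.
table0 = [("1010****", 10, "drop"), ("********", 1, "forward:2")]
print(match(["10100001"], [table0]))  # → ['drop']
```

The result list has at most one action per table, mirroring the observation above that |A| is bounded by the number of flow tables in the pipeline.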

Rule management: In SDN, the fundamental operations of rule management can be done in one of three manners: installation, deletion and update. Installing a new rule R_new into a flow table T_i of switch s_k can be represented as T_i = T_i ∪ {R_new}, R_new[L] = dpid(s_k). Deleting an old rule R_old from a flow table can be represented as T_i = T_i − {R_old}, R_old[L] = nil. Updating an existing rule R_j in a flow table can be represented as R_j[A] = A_new or T_i = T_i − {R_j} ∪ {R_new}. The latter representation of the update operation means that, in addition to the instruction (A), other elements need to be modified, and this operation can be seen as the combination of an old rule deletion and a new rule installation.
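The three set-theoretic operations above can be written out directly; the sketch below is illustrative (the `Rule` class and dpid values are assumptions, not the authors' code).

```python
# Minimal sketch of the three rule-management operations:
# install / delete / update on a flow table modelled as a Python set.
class Rule:
    def __init__(self, match, action, priority):
        self.M, self.A, self.P = match, action, priority
        self.L = None  # location: dpid of the hosting switch, or None (nil)

def install(table, rule, dpid):
    table.add(rule)          # T_i = T_i ∪ {R_new}
    rule.L = dpid            # R_new[L] = dpid(s_k)

def delete(table, rule):
    table.discard(rule)      # T_i = T_i − {R_old}
    rule.L = None            # R_old[L] = nil

def update(table, old, new, dpid):
    # General update: more than the instruction A changes, i.e. a deletion
    # of the old rule followed by an installation of the new one.
    delete(table, old)
    install(table, new, dpid)

T = set()
r1 = Rule("10.10.0.0/24", "drop", 10)
install(T, r1, dpid=4)
r2 = Rule("10.10.0.0/24", "forward:2", 10)
update(T, r1, r2, dpid=4)
assert r1.L is None and r2.L == 4 and T == {r2}
```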

A significant challenge of rule management is that it not only needs to manage the rules themselves but also to consider whether an operation on a rule will affect the correct enforcement of other rules. As only the rule with the highest priority can match the packet, we should consider the dependency relations between the rules. That means that if a rule is to be installed, all rules whose match spaces intersect with this rule and have higher priority should also be installed. Similarly, if a rule is to be deleted, all rules whose match spaces intersect with this rule and have lower priority should be deleted. Moreover, from a global view of the network, the behavior of a rule in one switch may affect the rules of other switches along the same forwarding path. Based on these observations, we summarize the rule management problem as four key subproblems:

1) When is it optimal to install a rule on the switch? The rule can be installed immediately after it is generated or at the time when a table-miss event occurs. In fact, these different installation timings depend on whether the rule management scheme is proactive or reactive.

2) Where to install the rules? The rules can be installed on the right switch or on other equivalent positions. As a simple instance, consider a rule (192.168.1.1/26) → drop. It can be installed on the ingress switch or on any switch along the forwarding path.

3) How many rules should be installed or deleted at one time? Since rule management is first required to ensure the correctness of network behaviors for all packets, our scheme should consider the rule dependencies. Additionally, network communications usually have strong locality, which inspires us to design an efficient cache scheme to reduce the performance loss caused by frequent communication between the control plane and the data plane.

4) As the network policy or the network condition frequently changes, how to design an efficient update mechanism? Besides, consistency and low overhead are also requirements of the update mechanism. In this area, the state-of-the-art rule management schemes, whether proactive or reactive, have apparent defects.
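The installation-dependency constraint stated before the subproblems can be sketched as a transitive closure; this is an illustrative model with ternary-string match spaces, not the paper's chunk partition algorithm.

```python
# Sketch of the dependency rule: installing a rule drags in every
# higher-priority rule whose match space intersects it, transitively.
def intersects(m1, m2):
    """Two ternary patterns overlap iff no bit position is fixed to
    opposite values in the two patterns."""
    return all(a == b or a == "*" or b == "*" for a, b in zip(m1, m2))

def install_closure(rule, ruleset):
    """Return the set of rules that must be installed together with `rule`.
    ruleset: list of (name, pattern, priority) tuples."""
    todo, needed = [rule], {rule}
    while todo:
        name, pat, prio = todo.pop()
        for other in ruleset:
            if (other not in needed and other[2] > prio
                    and intersects(other[1], pat)):
                needed.add(other)
                todo.append(other)
    return needed

rules = [("r1", "10**", 1), ("r2", "101*", 5), ("r3", "1011", 9)]
closure = install_closure(rules[0], rules)
assert {r[0] for r in closure} == {"r1", "r2", "r3"}
```

The symmetric case, deleting a rule together with all intersecting lower-priority rules, follows the same pattern with the priority comparison reversed.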

3.2. Motivation

To manage flow rules efficiently and intelligently, our scheme aims to achieve the following goals:

1) Correctness: Correctness is the fundamental and most important goal of any rule management scheme. A flow must match and enforce the correct rule according to the policy, no matter how the scheme handles overlapping rules, i.e., our scheme must ensure all network behaviors are correct.

2) Flexibility: Flexibility is considered the guarantee of network innovation. Thus, installing all or part of the rules reactively is vital. Without this, SDN cannot respond quickly to changes in the network as claimed, and it may evolve into a solution similar to VLAN or MPLS.

3) High performance: Minimal flow setup time is required. Thus, any scheme should make every effort to reduce the interactions that occur between the controller and the switches.

4) Resource saving: Since TCAM is a scarce resource, any scheme is required not to add pressure to the flow table. The scheme is also expected to be a lightweight program that does not add excessive computational overheads.

5) Update friendliness: As the network continually evolves, rule updates are inevitable. Any scheme should aim at providing an intelligent update approach, i.e., speeding up the rule update process and making a minimal impact on existing rules.

4. IRMS framework design

4.1. Rule preprocessing

In order to reduce the complexity of rule management, we first divide the rules into two types (i.e., path-based rules and node-based rules), and then deal with them respectively according to their characteristics. Here, we also give formalized definitions for the two types of rules.

Definition 2 (Path-based rule R^P). We set the conditions as follows:

• PS = {R_1, R_2, ..., R_m}
• R_1[M] = R_2[M] = ... = R_m[M]
• R_1[A] ↔ R_2[A] ↔ ... ↔ R_m[A]
• R_1[L] ∈ N[I]
• R_m[L] ∈ N[O]
• (R_i[L], R_{i+1}[L]) ∈ E, 1 ≤ i ≤ m − 1
• R_i[L] ≠ R_j[L], i ≠ j, 1 ≤ i ≤ m, 1 ≤ j ≤ m

We then define each element in PS as a path-based rule and record it as R^P.

Path-based rules are not a single behavior but a group behavior. They need to cooperate with each other to implement the behavior of the packet in the network. For instance, the routing of a packet requires a set of path-based rules to cooperate with each other. In Definition 2, we first define a path-based rule set. In this set, each rule has the same match space and the same type of instruction (lines 2 and 3 in Definition 2). Here, the operator ↔ means the two instructions are equivalent, e.g., forward to a port or set the same queue. Besides, the locations of these rules form a loop-free path, which is from an ingress switch to an egress switch (lines 4–7 in Definition 2).

Definition 3 (Node-based rule R^N). We assume that RS is the set of all rules in a feasible forwarding path, and the conditions are as follows:

• RS[M] = ∪_{R_i ∈ RS} {R_i[M]}
• R^N[M] ∈ RS[M]
• R^N[M] ∩ (RS[M] − R^N[M]) = ∅
• ∪ R^N = RS − ∪ R^P

We then define R^N as a node-based rule.

Different from path-based rules, node-based rules can implement the packet behavior alone. In Definition 3, we use the match space element to identify a node-based rule. In any feasible forwarding path, the match space of a specific node-based rule R^N is different from the match spaces of the other node-based rules (line 3 of Definition 3), while all node-based rules constitute the difference set of the complete set of rules (RS) and the path-based rule set in the forwarding path (line 4 of Definition 3). For instance, an access control list (ACL) rule (10.10.0.2/24, Drop, 10, S_1, 0) is a node-based rule, and it defines a complete behavior of the packet in the network, i.e., drop the packets whose destination IP address matches 10.10.0.2/24. Thus, each node-based rule is unique.

In our scheme, a rule is either a path-based rule or a node-based rule. To achieve this, IRMS pre-processes all rules before


L. Wang et al. / Computer Networks 144 (2018) 77–88 81

Fig. 2. IRMS architecture.


they are installed. Rules that contain both path forwarding and other network functions will be separated. For instance, in the network shown in Fig. 1, there is a traffic engineering application between the switches I_1 and E_2. Flows whose destination IP address is 192.168.5.0/24 will follow the path I_1 → S_2 → S_4 → S_7 → S_8 → E_2, and other flows will follow the path I_1 → S_3 → S_6 → S_8 → E_2. In IRMS, the rules are as follows: 1) path-based rules that will be installed on the switches of the two forwarding paths, e.g., on the switch S_4, the rule is (label = 1) → forward port 2; 2) node-based rules that will be installed on the ingress switch I_1, i.e., (192.168.5.0/24) → set label as 1 and others → set label as 0. We can use an unused match field for the label, such as VLAN or MPLS. In this instance, node-based rules only focus on the path decision policy, and the detailed path information is included in the path-based rules. The separation of forwarding behavior from other network functions is the basis of our design.
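The pre-processing step above can be sketched in a few lines of Python. This is a hypothetical illustration only: the switch names, output ports, and dictionary encoding of rules are our assumptions, not the paper's actual data structures.

```python
def split_rule(match, path, label, ports):
    """Split a combined rule into one node-based rule plus path-based rules.

    match: ingress match fields; path: ordered switch list;
    ports: output port per switch (hypothetical values below).
    """
    node_rule = {"switch": path[0], "match": match,
                 "action": f"set_label={label}"}
    path_rules = [{"switch": sw, "match": {"label": label},
                   "action": f"output:{ports[sw]}"} for sw in path]
    return node_rule, path_rules

node, path_rules = split_rule(
    {"ipv4_dst": "192.168.5.0/24"},
    ["I1", "S2", "S4", "S7", "S8", "E2"],
    label=1,
    ports={"I1": 1, "S2": 3, "S4": 2, "S7": 1, "S8": 4, "E2": 2})
# e.g. on S4 the path-based rule becomes: match label=1 -> forward port 2
```

The node-based rule lives only at the ingress; every other switch matches nothing but the label, which is what lets a path be shared by many policies.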

4.2. Framework design

IRMS employs different solutions to handle the two types of rules to achieve the goals mentioned above (Section 3.2). The framework of an SDN network applying IRMS is illustrated in Fig. 2. IRMS installs all path-based rules on the required switches proactively, while installing node-based rules on the ingress switches reactively. That means all candidate paths between ingress switches and egress switches are computed in advance, and the corresponding forwarding rules are also installed in advance. IRMS treats the path-based rules that belong to the same forwarding path as a whole.

To achieve this, we use a framework that comes from SOL [18] to compute all feasible paths offline. This framework collects all the requirements from the upper-layer network applications and the network states, and takes them as the constraints of an optimization problem. These constraints include the network topology, link capacities, routing requirements and way-point enforcement. By solving this optimization problem, we can get a near-optimal solution for the set of feasible paths. Once these paths are computed, the related path-based rules are determined. We assign each group of path-based rules a valid label as the match space.

IRMS also designs several novel management policies for node-based rules to achieve the flexibility and high-performance goals. Our scheme partitions the node-based rules into several non-disjoint chunks to overcome the rule management challenges associated with dependencies between rules, and distributes the node-based rules among the chunks uniformly. Besides, we design an online optimization algorithm to improve the cache hit rate by installing multiple chunks at a time.

To reduce the rule management overhead, we also adopt different update policies for the two types of rules. According to Definition 2, since the path-based rules have the same life cycle, IRMS uses a lazy update policy for them. Moreover, for the node-based rules, our scheme takes an immediate update policy. The detailed discussion of updates is in Section 6.

Recalling the subproblems of rule management discussed in Section 3.1, IRMS is required to design four key modules: the manager module, install module, monitor module and update module. The install module speaks southbound protocols (e.g., OpenFlow and NETCONF) with the data plane devices and installs the related rules.

The monitor module is responsible for network state collection. The update module handles the scheduling of the update policy and notifies the install module of the details. The brain of IRMS is the manager module: it performs all computation tasks and interacts with the other modules and the rule database.

When a Packet-In event happens, our system sends the packet to different modules according to its "eth_type". For example, if the packet is an LLDP packet (i.e., "eth_type": 35020), it will be sent to the monitor module, while an ARP packet (i.e., "eth_type": 2054) will be processed by an ARP proxy function of the manager module. For other types, the packet will be sent to the install module. This module will notify the manager module to search the rule database to find the matching node-based rule and the chunk to which it belongs. The monitor module maintains and updates the global topology according to a topology discovery protocol, such as LLDP or BGP-LS, and detects changes in the network. The update module collects all the change information from the topology module, generates the update information and sends it to the install module and the manager module.
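The Packet-In dispatch logic above can be summarized in a small function. This is a sketch of the decision only, not the paper's Ryu handler; the module names are those described in the text, and the ethertypes are the standard decimal values (0x88CC = 35020 for LLDP, 0x0806 = 2054 for ARP).

```python
# Dispatch a Packet-In event to the responsible IRMS module by ethertype.
ETH_TYPE_LLDP = 35020   # 0x88CC, topology discovery
ETH_TYPE_ARP = 2054     # 0x0806, handled by the ARP proxy

def dispatch(eth_type: int) -> str:
    if eth_type == ETH_TYPE_LLDP:
        return "monitor"    # feeds topology maintenance
    if eth_type == ETH_TYPE_ARP:
        return "manager"    # ARP proxy function of the manager module
    return "install"        # table-miss traffic: look up and install a chunk

dispatch(2048)   # an IPv4 packet (0x0800) goes to the install module
```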

To understand this architecture, we consider a simple example shown in Fig. 2. Path-based rules for two forwarding paths (path 1: IS_1, CS_1, CS_3, ES_1 and path 2: IS_1, CS_2, ES_1) are already installed on the switches. A load balancing policy based on the source IP network address is (10.10.2.128/25 → path 1, 10.10.2.0/25 → path 2). When a packet with source IP address 10.10.2.233 comes in and misses in IS_1, a Packet-In message will be forwarded to the controller. The manager module then fetches the rule chunk from the database and notifies the install module to install the related rule chunk on IS_1. To guarantee the correct flow behavior, the node-based rule chunk must contain the rule (10.10.2.128/25, set label = 1, IS_1, 5). According to the cache policy, it may also contain the rule (10.10.2.0/25, set label = 2, IS_1, 5) and other neighboring rules. The chunk partition algorithm and cache policy will be discussed in detail in Section 5.

To ensure resource saving, our scheme is shown not to increase the number of rules as long as there are enough common paths. It is straightforward that IRMS does not increase the total number of rules as long as there are at least 1/m of common paths, where the parameter m is the average length of the forwarding paths. The recent research work SOL [18] also shows that in many practical scenarios the number of valid paths is likely to be very small. Thus, we can infer that IRMS can decrease the total number of rules significantly.
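A quick numeric check of this accounting, using hypothetical values of k (rule groups), t (distinct shared paths) and m (average path length):

```python
def rule_counts(k, t, m):
    """Rules without sharing vs. rules under IRMS path sharing."""
    num_original = k * m      # every group installs rules along a whole path
    num_irms = k + t * m      # k path-selection rules + t shared paths of length m
    return num_original, num_irms

# Boundary case: k - t = 20 equals k/m = 20, so the two counts break even.
orig, new = rule_counts(100, 80, 5)
orig >= new   # True: 500 >= 500; with more sharing (smaller t) IRMS wins outright
```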

Theorem 1. IRMS does not increase the total number of rules as long as there are at least 1/m of common paths, where the parameter m is the average length of the forwarding paths.

Proof. For the basic node-based rules, the number remains the same, so we may omit this kind of rule. For the path-based rules, we assume that there are k groups of rules and t valid paths. Thus the number of original rules is num_o (≥ km). The number of newly generated node-based rules (i.e., the rules for path selection) is equal to the number of groups (k), while the number of path-based rules is tm. Thus the number of all rules is num_n = k + tm. If there are at least 1/m of common paths, i.e., k − t ≥ 1/m × k, then tm ≤ k(m − 1). From this, we can get that num_o ≥ km ≥ k + tm = num_n. □


Fig. 3. The framework of an SDN data path applying IRMS.


4.3. Data plane design

In order to support IRMS, we design a three-level logical flow table, as illustrated in Fig. 3. The rules in the first level of the logical table are the chunk-matching rules. The second level of the logical table contains the node-based rules whose chunks are installed. In practice, this level of the logical table may include an internal pipeline to support more complicated matching logic. The last level of the logical table contains all of the path-based rules whose start point is the switch.

Fig. 3 shows that if the packet matches a chunk rule in table 0, it goes to the next table to find the precise node-based rule(s). In the second level of the logical table, the packet is assigned a label in a certain unused field (e.g., VLAN, MPLS) to indicate the forwarding path. At the last stage, the packet matches the path-based rule according to the previous label. In IRMS, managing the labels at one hop is sufficient to manipulate all forwarding behaviors of the whole network. It is noted that this data plane design is supported by current standard SDN switches and needs no data plane modification.
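The three-stage lookup can be modeled in a few lines of Python. This is a toy simulation of the pipeline's logic, not real OpenFlow tables; the predicates stand in for TCAM wildcard matches, and all rule contents are invented for illustration.

```python
def pipeline(pkt, chunk_table, node_table, path_table):
    # Table 0: find the chunk-matching rule covering this packet.
    chunk = next((c for c, pred in chunk_table if pred(pkt)), None)
    if chunk is None:
        return "packet-in"                 # table miss: ask the controller
    # Table 1: the node-based rule sets a label in an unused field.
    label = next(lbl for pred, lbl in node_table[chunk] if pred(pkt))
    pkt["label"] = label
    # Table 2: the path-based rule forwards according to the label.
    return path_table[label]

chunk_table = [("c1", lambda p: p["src"].startswith("10.10.2."))]
node_table = {"c1": [(lambda p: int(p["src"].rsplit(".", 1)[1]) >= 128, 1),
                     (lambda p: True, 2)]}
path_table = {1: "output:2", 2: "output:3"}
pipeline({"src": "10.10.2.233"}, chunk_table, node_table, path_table)  # 'output:2'
```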

5. Key algorithms

5.1. Chunk partition problem

According to the design mentioned above, node-based rules are required to be divided into a number of chunks. We formulate this problem as follows. For a given set of N_R node-based rules with K match fields, we would like to partition the flow space into several chunks. Each chunk has a maximum capacity of M_C rules. Each chunk will be a regular hypercube, because hypercubes are easy to represent as wildcard partition rules. The optimization objective is to generate as few chunks and multi-chunk rules as possible.

We define a cost function as the optimization objective, and the cost has two parts: the number of chunks and the number of multi-chunk rules. We assume that the partition solution generates n chunks. Therefore, the total number of rules is N_R + n, i.e., the original N_R node-based rules and n newly generated rules that are used to match the chunks. The number of chunks is equal to the number of newly generated chunk-matching rules, which can be considered the overhead of the partition. With regard to updates, multi-chunk rules occupy more space on the switch than single-chunk rules, since they are removed only when all their associated chunks have expired. That means if the number of multi-chunk rules (Δ) is large, many rules will remain in the data plane for a long time, which significantly affects the flexibility of rule management: a chunk update operation can then reclaim only a small amount of TCAM space, which increases the burden on the flow table. To represent the cost, we normalize the two indicators and employ a positive value (λ) to adjust the weight. Although the selection of λ is affected by the traffic pattern, we set it to 1 in most scenarios, meaning the two indicators are considered equally important in our scheme. In practice, the administrator of IRMS can tune the parameter according to the traffic pattern and the concrete requirements of network management.

min Cost = n / (N_R + n) + λ Δ / (N_R + n + Δ)    (1)

s.t.  Δ = Σ_{i=1}^{N_R} ( Σ_{j=1}^{n} p_{i,j} − 1 )    (2)

      Σ_{i=1}^{N_R} p_{i,j} ≤ M_C,  ∀ j ∈ {1, 2, ..., n}    (3)

      ϖ (M_C + 1) ≤ M_S    (4)

      Σ_{j=1}^{n} p_{i,j} ≥ 1,  ∀ i ∈ {1, 2, ..., N_R}    (5)

      p_{i,j} ∈ {0, 1}    (6)

In the constraints, p_{i,j} is a rule-inclusion indicator for chunk j: we set it to 1 if chunk j contains the rule R_i, and to 0 otherwise. Constraint (3) is the chunk size constraint: any chunk j accommodates at most M_C rules. Constraint (4) states that a switch whose capacity is M_S can hold at most ϖ chunks, each contributing up to M_C rules plus one chunk-matching rule. Constraint (5) denotes that each node-based rule must belong to at least one chunk.
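As a concrete reading of Eq. (1), the cost of a candidate inclusion matrix can be evaluated directly. This is a sketch with made-up numbers, only to show how the two normalized terms combine:

```python
def partition_cost(p, lam=1.0):
    """Cost of Eq. (1) for a binary inclusion matrix p[i][j] (rule i in chunk j)."""
    n_rules = len(p)                          # N_R
    n_chunks = len(p[0])                      # n
    delta = sum(sum(row) - 1 for row in p)    # extra copies held by multi-chunk rules
    return (n_chunks / (n_rules + n_chunks)
            + lam * delta / (n_rules + n_chunks + delta))

# Three rules, two chunks; rule 0 belongs to both chunks (one multi-chunk rule).
p = [[1, 1],
     [1, 0],
     [0, 1]]
cost = partition_cost(p)   # 2/5 + 1/6
```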

Theorem 2. The chunk partition problem is NP-Complete.

Proof.

Step 1 (Problem transformation): Since this is an optimization problem, there must exist an equivalent decision problem. This decision problem is: given a partition cost (Cost_c), does there exist a solution that satisfies all of the constraints above and whose cost is less than or equal to Cost_c?

Step 2 (NP proof): For every n, the solution is a Boolean array (P = [p_{i,j}]_{N_R × n}). The target function is a polynomial function of the solution. Thus, we can verify whether the Cost is valid (or not) in polynomial time.

Step 3 (NP-hardness proof): To prove this problem is NP-hard, we show that Bin Packing [19] ≤_p chunk partition, i.e., we show how to reduce any instance of Bin Packing to an instance of the chunk partition problem in polynomial time. Suppose the bin is the chunk and the item is the rule. The Bin Packing constraint (Σ_{j=1}^{n} p_{i,j} = 1) is a stricter form of constraint (5), and the other constraints are the same. Therefore, a valid solution of the bin packing problem also satisfies the chunk partition problem. This means that Bin Packing can be reduced to the chunk partition problem in polynomial time.



Fig. 4. A simple example for Algorithm 1 , set chunk size ≤ 3.


Considering all three steps, we can conclude that the chunk partition problem is NP-Complete. □

Algorithm 1 Intelligent partition algorithm.

1: Initialization: k = 0. TR_0 is the tree node which represents the entire rule space.

2: Pick a tree node TR_k to split. TR_k is a leaf node in the tree that contains more than M rules in its hypercube. If we cannot find such a node, go to step 6.

3: For each dimension, try to split the rule space in this dimension into 2 parts and record the number of non-multi-chunk rules in the partitions.

4: Select the flow match-field dimension i that has the maximal number of non-multi-chunk rules among the candidate match fields, and split TR_k along it.

5: Put all child nodes of TR_k in the tree. k ← k + 1. Go to step 2.

6: Traverse the tree in pre-order, and label the leaf nodes as chunk numbers. Use a hash table to record all the chunks belonging to each rule.

Algorithm design: Considering the computation cost, we design a heuristic algorithm for the chunk partition problem and show the pseudo code in Algorithm 1. First, we employ a decision tree to partition the rules. The root node of the tree represents the entire rule space. In each round of the partition, we pick a leaf node in the decision tree that has more than M rules. The principle for selecting the splitting dimension is that we always choose the dimension that yields the maximal number of non-multi-chunk rules after the partition. After the partition, we put all child nodes of this splitting node in the tree, and the child nodes become the new leaf nodes. We repeat these steps until all leaf nodes hold a valid number of rules. In the end, we traverse the tree, collect all leaf nodes and record the node-based rules in these nodes. In our algorithm, a leaf node in the decision tree corresponds exactly to a valid chunk.

We use a simple example in Fig. 4 to illustrate our algorithm. The match space of this scenario contains 4 match fields, i.e., F_1∼F_4. Each match field has a corresponding range of values, and we show the details of these rules in the table of Fig. 4. In the first round, cutting the whole range of F_2 produces a valid leaf node that has the largest number of non-multi-chunk rules (i.e., R_5 and R_8). We use a similar policy to partition the candidate nodes by selecting the match fields F_2 and F_3 in the following 2 rounds, and reach a valid termination. Thus, these node-based rules are partitioned into 4 chunks, and the number of rules in each chunk is at most 3.

The execution time of Algorithm 1 is proportional to the height of the decision tree, which is proportional to the number of partition rounds. In each round, the algorithm checks all the possible match fields (we assume the number of possible match fields is K). Thus, the complexity of Algorithm 1 can be roughly estimated as O(K h_TR), where h_TR is the height of the tree. Although h_TR is affected by the rule pattern, it can be approximated by the upper bound of the number of partition rounds (O(log_2(N_R))). Therefore, the complexity of Algorithm 1 is O(K log_2(N_R)).
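A compact Python sketch of the decision-tree partition may help. It is a simplification of Algorithm 1 under assumed inputs: each rule is an integer range per field, splits are midpoint cuts, and the split is chosen to maximize the number of rules that land entirely on one side (i.e., minimize multi-chunk rules).

```python
def partition(rules, space, max_rules):
    """rules: list of dicts field -> (lo, hi); space: dict field -> (lo, hi)."""
    inside = [r for r in rules
              if all(r[f][0] <= space[f][1] and r[f][1] >= space[f][0]
                     for f in space)]
    if len(inside) <= max_rules:
        return [space]                       # this hypercube becomes one chunk
    best = None
    for f, (lo, hi) in space.items():
        mid = (lo + hi) // 2
        if mid < lo or mid >= hi:
            continue                         # field can no longer be split
        # Rules entirely on one side of the cut stay single-chunk rules.
        whole = sum(1 for r in inside if r[f][1] <= mid or r[f][0] > mid)
        if best is None or whole > best[2]:
            best = (f, mid, whole)
    if best is None:
        return [space]                       # cannot split further
    f, mid, _ = best
    left, right = dict(space), dict(space)
    left[f] = (space[f][0], mid)
    right[f] = (mid + 1, space[f][1])
    return (partition(inside, left, max_rules)
            + partition(inside, right, max_rules))

rules = [{"F1": (0, 3)}, {"F1": (4, 7)}, {"F1": (0, 7)}]
chunks = partition(rules, {"F1": (0, 7)}, max_rules=2)
# two chunks; the rule spanning (0, 7) becomes a multi-chunk rule in both
```

The real algorithm additionally labels leaves in pre-order and records the rule-to-chunk mapping in a hash table.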

5.2. Online optimization

After the partition, the basic scheduling unit of our scheme is the chunk. In practice, considering the chunk correlation and the switch capacity, we can further optimize the cache problem for node-based rules, i.e., at each table-miss event, we can install more than one chunk.

We formulate the cache optimization problem as follows. Assume that there are w rule chunks distributed on a certain ingress switch. According to the previous assumptions, the TCAM of a switch can hold at most ϖ chunks, and each rule has at most K match fields. We have the historical traffic matrix TH. For a given incoming traffic vector TC, we would like to get an optimal rule chunk installing array {CM}_{1×N_T}.

min N_U = Σ_{i=1}^{N_T} I_i + κ U    (7)

s.t.  (U_i, MT_i) ← CM_i + MT_{i−1}    (8)

      h_i ∈ TC_i    (9)

      I_i = { 1 if Match(h_i, MT_i) = ∅; 0 otherwise }    (10)

      U = Σ_{i=1}^{N_T} U_i    (11)

      i ∈ {1, 2, ..., N_T}    (12)

Among formulas (7)–(12), the optimization objective denotes the interaction cost between the controller and the switch. It contains two aspects: the table-miss cost (the sum of the I_i) and the update cost. We choose a higher weight (κ > 1) for the update cost because the rule replacement process consumes more resources. I_i indicates whether the incoming flow misses the current table. Formula (8) describes the chunk installing process: in each round, the chunk to be installed and the current flow table state (MT_{i−1}) determine the next round's flow table state (MT_i) and update cost (U_i). We add the update cost of each round to get the total update cost (U) in Eq. (11).



For the cache optimization problem, we mainly tackle two subproblems: 1) How many chunks should we install at one time? 2) Which chunk(s) should we choose to install? The current method only installs the matching chunk. However, the chunk partition algorithm focuses more on the uniform distribution of the rules, and the size of a chunk is usually not very big due to the cost of updates. Besides, the locality of the traffic is not always consistent with the rule space partition. Therefore, we will benefit from installing more chunks at one time when the remaining TCAM of the switch is large enough.

Cache optimization policy: Because we cannot get the traffic array TC in real time, the historical traffic matrix TH is employed to make a prediction. Chunk selection follows the principle:

s_chunk = MAX_i { p(chunk_i | match_chunk_{i−1}, match_chunk_{i−2}, ...) }    (13)

If the selected chunk is already installed, we choose the chunk with the second highest probability. In theory, many state-of-the-art methods can be used to calculate the probability based on the historical traffic. In our scheme, we assume the incoming chunk chain is a Markov chain, i.e.,

s_chunk = MAX_i { p(chunk_i | match_chunk_{i−1}) }    (14)

As we only need the matching chunk information for the traffic, we add timestamp information to the cookie field of each chunk-matching rule in the data plane. The controller periodically polls the first-level logical flow table of the data plane to obtain the information of the corresponding chunk-matching rules. According to the timestamps in the cookies, the sequence of chunks ordered by matching time can be obtained. This sequence is then used to compute the corresponding probability statistics.
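The first-order prediction of Eq. (14) amounts to counting transitions in the recovered matching sequence and picking the most frequent successor that is not yet installed. A minimal sketch (the chunk names and history are invented):

```python
from collections import Counter, defaultdict

def build_transitions(history):
    """Count first-order transitions in the observed chunk-matching sequence."""
    counts = defaultdict(Counter)
    for prev, cur in zip(history, history[1:]):
        counts[prev][cur] += 1
    return counts

def predict_chunk(counts, last, installed):
    """Most probable successor of `last` that is not already installed."""
    for cand, _ in counts[last].most_common():
        if cand not in installed:
            return cand
    return None

hist = ["c1", "c2", "c1", "c2", "c3", "c1", "c2"]
trans = build_transitions(hist)
predict_chunk(trans, "c2", installed={"c2"})   # next chunk worth pre-installing
```

Skipping installed candidates mirrors the "second highest probability" fallback described above.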

To determine the appropriate number of chunks, we design a sliding window mechanism. Initially, we set an initial install number (α) and a threshold value (σ). In our scheme, we set α as 1. σ is the bound on the TCAM occupancy rate of the switch. While the occupancy rate of the TCAM (θ) is under σ, the window size is increased by 1 whenever a new table-miss event happens. Otherwise, the window size decreases to half of its current value. We set the lower bound of the install number as 1. The policy is shown in formula (15).

w_size_n = { α                               if n = 1
           { w_size_{n−1} + 1                if n > 1 and θ ≤ σ
           { MAX{1, 0.5 × w_size_{n−1}}      otherwise    (15)
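One step of formula (15) can be sketched as follows (a hypothetical helper; the integer floor on the halving is our simplification of 0.5 × w_size):

```python
def next_window(prev, theta, sigma, alpha=1, first=False):
    """One step of the sliding-window policy; theta is the TCAM occupancy rate."""
    if first:
        return alpha              # initial install number
    if theta <= sigma:
        return prev + 1           # occupancy under the threshold: grow by one
    return max(1, prev // 2)      # otherwise halve, floored at 1

w = next_window(0, 0.0, 0.35, first=True)   # w = 1
w = next_window(w, 0.20, 0.35)              # occupancy low: w = 2
w = next_window(w, 0.40, 0.35)              # occupancy high: w = 1
```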

Furthermore, if a newly installed chunk was replaced in previous rounds, we increase its timeout value according to the replacement frequency (f) and the interval time (interval), as formula (16) shows. Here T denotes the baseline of the interval time, and τ denotes the baseline of the compensation time for frequently replaced chunks.

timeout = timeout_init + (f + T / interval) × τ    (16)
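Formula (16) is a direct computation; the constants below (T, τ, and the sample arguments) are illustrative values, not those used in the paper.

```python
def chunk_timeout(timeout_init, f, interval, T=10.0, tau=2.0):
    """Timeout compensation of formula (16) for frequently replaced chunks."""
    return timeout_init + (f + T / interval) * tau

chunk_timeout(5.0, f=3, interval=5.0)   # 5.0 + (3 + 10/5) * 2 = 15.0
```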

6. Rule update

In our scheme, all the original rules are transformed into node-based rules and path-based rules. Besides, we add the chunk-matching rules to support our reactive mode for caching more node-based rules. Actually, chunk-matching rules can also be counted among the node-based rules according to Definition 3. As these two types of rules have different life cycles and behavioral characteristics, we adopt different update policies for them.

Lazy update policy for path-based rules: Path-based rules represent a valid forwarding path and are usually stable in the network. There are mainly two factors that can affect these rules: (1) modification of the policy constraints; (2) changes in network status, e.g., a link or switch going down. In order to have as little impact on existing network operations as possible, IRMS uses a lazy update policy for path-based rules. In our system, the monitor module is responsible for collecting network status changes. When this module discovers a failed link or node, it finds all the affected paths and their corresponding labels and notifies the update module of this information. The manager module then simply disables the label(s) and reassigns appropriate labels to the affected rules. IRMS does not update the affected path-based rules immediately. Instead, the update module recycles the disabled path-based rules uniformly over a fixed time cycle and installs the updated path-based rules.

Immediate update policy for node-based rules: Different from path-based rules, node-based rules are updated immediately whenever they need to be modified. During the update process, the update module first checks, in the rule set database, all chunks to which the rule belongs. For a chunk that is not installed, the manager module just refreshes the rule and updates the database item. Installed chunks must be updated immediately; the difficulty here is consistency. Fortunately, all of the node-based rules in our scheme are installed on the edge switches and do not affect rules belonging to other chunks. That means we just recycle the node-based rules, find the corresponding chunk(s) and re-install them on the edge switch(es). The drawback of this updating policy is that it may unbalance the initially allocated rule chunks. In our simulation, we validate that this drawback is tolerable and has little effect on the update time.

7. Evaluation

In this section, we use a group of topologies along with synthetic rules and traffic to evaluate our scheme. To be specific, to verify the design goals in Section 3.2, we evaluate four central questions: (1) What level of performance and flexibility does IRMS achieve? (2) How well do our chunk partition algorithm and online caching optimization mechanism work in handling large sets of node-based rules? (3) How efficient is our hybrid update policy compared with other popular schemes? (4) How many resources are consumed by the overhead of our system?

7.1. Simulation setup

To evaluate our scheme, we implement a prototype of IRMS. Our system pre-computes feasible paths in the network based on all policy constraints. As this computational task is offline, IRMS can take advantage of existing linear programming tools such as CPLEX [20] and the existing path pre-computing framework from SOL [18], and assign the appropriate match-field labels. The experiments in this paper use VLAN labels; in fact, other unused match fields can be used as labels. IRMS installs all path-based rules into the 6th-level flow table of the data plane in advance to meet the previous design of the multi-level logical pipeline. We use a lightweight NoSQL database to store all of the rules. Before we import the rules into the database, the rules need to be preprocessed. This preprocessing consists of partitioning the node-based rules and transforming the rules into a specified JSON format. We use Ryu as the controller and Open vSwitch as the data plane switches, running on a server with an 8-core 2.8 GHz Intel CPU and 64 GB of memory. All the results reported here were obtained by running the Python code with PyPy [21] to make the code run faster.

Topology: Although most of the interactions between the controller and the switches in our scheme occur at the edge of the network and the topology has little effect on performance, we also create three typical topologies to verify the advantages of our scheme with respect to the reduction in the number of rules and


Fig. 5. Total number of rules.


the utilization of switch flow tables. We use Mininet [8] to generate the topologies: (1) a small topology that has 5 nodes and 5 edges (shown in Fig. 2); (2) a medium topology, an 8-Fat-Tree with 20 nodes and 48 edges; (3) a big topology (i.e., AS209) from Rocketfuel [22] that has 58 nodes and 108 edges.

Rules: Ideally we would like to evaluate IRMS based on policies and user traces from real networks. Unfortunately, most networks today are still configured with rules that are tightly bound to their network configurations (e.g., IP address assignment, routing, and VLANs). Therefore, we evaluated our scheme with synthetic 5-field rules generated by the open-source tool ClassBench [23] to explore IRMS's benefits across various settings. In our simulation, we use the ACL seeds as the parameter files to create 2k∼10k synthetic rules, each having 5 fields. As the feasible paths are computed already, we distribute these rules over the existing paths and the "Drop" action uniformly through a hash function.

Traffic: We also generate a host for each ingress/egress switch to send or receive traffic via Mininet. Each host is a Linux container, so the traffic in our experiments is real network traffic generated by our test tools. The basic traces derived from the rules also come from the ClassBench tool — trace_generator. The traces follow a Pareto distribution D(x) = 1 − (b/x)^a. We set a = 1 and b = 0.5 to maintain a medium locality of reference in this distribution. For each trace, we use the Scapy tool [24] to construct the packets that simulate the flows as we expect. We also assume that flow sizes follow a Pareto distribution. For each host, the 5-tuple (source/destination IP address, source/destination port and protocol) of the packets is set in accordance with the synthetic rules. On the destination host, we use the Sniff tool to receive the synthetic packets.
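Sampling from the Pareto distribution D(x) = 1 − (b/x)^a used for the traces can be done by inverse-transform sampling, since the CDF inverts to x = b / (1 − u)^{1/a}. A sketch with the paper's parameters a = 1, b = 0.5 (the seed is arbitrary):

```python
import random

def pareto_sample(rng, a=1.0, b=0.5):
    """Inverse-transform sampling of D(x) = 1 - (b/x)^a, with support [b, inf)."""
    u = rng.random()
    return b / (1.0 - u) ** (1.0 / a)

rng = random.Random(42)
samples = [pareto_sample(rng) for _ in range(1000)]
min(samples) >= 0.5   # True: no sample falls below b
```

With a = 1 the tail is heavy (infinite mean), which is what yields the medium locality of reference mentioned above: a few values dominate while most stay near b.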

7.2. Simulation results

The goal of IRMS is to provide a novel trade-off between proactive and reactive rule management. That means IRMS should consider both flexibility and forwarding performance. To show IRMS's advantages, we compare it against both a proactive and a reactive rule management scheme. The proactive scheme we choose is CacheFlow [7], which computes the dependencies of all rules in advance and installs all rules on the data plane. Moreover, we choose CAB [5] as the reactive management scheme to compare against, since it also has a cache mechanism.

Rule number: The first metric for the efficacy of IRMS is the number of rules in the data plane. The premise of making rule management more effective is that the number of rules should not be significantly increased. Additionally, SDN rules consume TCAM in hardware switches, and each switch contains only 2k∼20k flow entries due to the cost and energy consumption of TCAM. That means counting the number of all generated rules is necessary, and we show the results for different topologies in Fig. 5. We find that IRMS reduces flow rules by more than 59.9% compared to CAB [5] and by more than 60.8% compared to CacheFlow [7]. IRMS can significantly reduce the number of rules due to its pre-processing of the rules, installing the path-based rules only once. From the three subgraphs, we can also conclude that the more valid paths there are, or the longer the average path length, the more significant the reduction of rules. Besides, the number of rules installed on a core switch is proportional to the number of valid paths through that switch, and this number is usually much smaller than the number of flow entries that the switch can hold. That means rules on the core switches are almost never evicted because of flow table overload, an effect that is thought to significantly degrade the performance of network communications.

TCAM occupancy rate: In our scheme, the pressure of rule storage is concentrated on the flow tables of the edge switches, so we are required to measure the maximal TCAM occupancy rate (θ) of the ingress/egress switches. In our simulation, the Open vSwitch that we use as the data plane switch has no specific limit on the number of flow entries, only on memory. We therefore set the upper bound of flow entries to 10k and use the number of installed rules divided by this upper bound to simulate the TCAM occupancy rate. In this case, we do not consider the rule replacement scenario.

As noted, in a proactive scheme like CacheFlow, all rules are installed on the switches at the network initialization stage. That means the TCAM occupancy rate for CacheFlow is a fixed number. The results in Fig. 6 show that in all three topologies, IRMS has a TCAM occupancy rate close to CAB but lower than CacheFlow, especially when the number of rules is large. The results also show that our scheme may have a slightly higher TCAM occupancy rate under a small rule set, since our scheme uses a more regular chunk partition algorithm and the online optimization mechanism installs multiple chunks when the TCAM occupancy rate of the switch is low (in our experiment, we set σ = 35%).
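The occupancy-gated behavior of the online optimization can be sketched as a simple threshold check; this is our illustrative reconstruction, not the paper's predictor, and all function and variable names are made up (the chunk-prediction step is stubbed out as a given list):

```python
# Sketch: install multiple chunks per table miss while the switch's TCAM
# occupancy is below the threshold sigma. The 35% threshold comes from
# the experiment; everything else here is illustrative.

SIGMA = 0.35  # occupancy threshold used in the experiments

def chunks_to_install(occupancy: float, missed_chunk: int,
                      predicted_chunks: list) -> list:
    """Chunks to push to the switch when `missed_chunk` causes a miss."""
    if occupancy < SIGMA:
        # plenty of free TCAM: prefetch the predicted chunks as well
        return [missed_chunk] + predicted_chunks
    return [missed_chunk]  # conservative: only the chunk that missed

print(chunks_to_install(0.20, 7, [8, 9]))  # [7, 8, 9]
print(chunks_to_install(0.60, 7, [8, 9]))  # [7]
```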

Flow setup/transmission: To evaluate the forwarding performance of IRMS, we measure the worst-case flow setup time and the number of flow setup requests compared with CAB and a naive exact-match scheme like Ethane. We do not compare with the proactive schemes (e.g., CacheFlow) because these schemes do not require interactions with the controller during flow transmission. That means that although proactive rule management can achieve higher forwarding performance, it sacrifices important features of software-defined networks, i.e., flexibility and responsiveness. Besides, the loss of forwarding performance only affects the first packet of a flow. Thus, if the number of flow setups is not large, this loss of performance is acceptable for network communications, and the result shown in Fig. 8 confirms this view.

Two main factors affect the flow setup: the interactions with the rule database and the flow-mod/packet-out process. In our simulation, we set the send rate of the flows to 1000–10,000 flows per second using the Scapy tools [24]. The results show that IRMS has a tolerable flow setup time on different topologies (Fig. 7) and reduces the flow setup requests by more than one order of magnitude (Fig. 8) compared with the exact-match scheme. We also measure the bandwidth consumption of the controller-switch channel to evaluate the performance impairment during


86 L. Wang et al. / Computer Networks 144 (2018) 77–88

Fig. 6. Maximal TCAM occupancy rate.

Fig. 7. Flow setup time.

Fig. 8. Flow setup request.

Fig. 9. Bandwidth consumption.

Fig. 10. Cache hit rate with small θ .

Fig. 11. Cache hit rate with large θ .

Fig. 12. Effect of chunk size.


the interaction between the control and data planes, and the results show high performance (Fig. 9).

Overhead evaluation: We measure the memory and CPU usage with different topologies on the same machine and evaluate the overheads by comparison with a benchmark instance that runs an L2-learning app using the Ryu controller and Mininet. IRMS offloads many resource-consuming tasks offline, such as the chunk partition task for the node-based rules, the manipulation of all path-based rules, and the prediction of the chunks to be installed for the online optimization. Therefore, the main overhead of IRMS comes from the cost of operating the database. In our prototype, we use Redis as a lightweight in-memory database. The results in Table 1 show that the increased overhead of IRMS is less than 10%.

7.3. Cache mechanism evaluation

Cache hit rate: To validate the efficiency of the cache mechanisms in our scheme, which include the chunk partition algorithm and the online cache optimization mechanism, we measure the cache hit rate achieved on traces generated through the ClassBench [23] tool. We choose 3 different ACL seed files as the parameter files and set the parameters as follows: smoothness = 10, address scope = 0.5, application scope = 0.4. This configuration adds more randomness to the trace and favors longer, more specific address prefixes. The traffic is generated based on these traces with the help of our traffic generator tool.



Table 1
Resource overhead.

Topology        L2-learning/CPU  L2-learning/Mem  IRMS/CPU  IRMS/Mem
5-node Topo     4%               7%               8%        12%
4-ary fat tree  26%              45%              33%       54%
AS 209          33%              58%              42%       67%

Fig. 13. Update time evaluation.


To measure the cache hit rate, we adopt different metrics for the proactive and reactive rule management schemes. For the latter, we can count the table-miss events, i.e., count all Packet-In packets after subtracting LLDP, ARP and IPv6 packets. In our prototype, we employ this metric and implement it in the monitor module. However, for a proactive scheme like CacheFlow, there are no Packet-In events. A cache miss in these schemes is actually a hit on an unpopular rule chunk, and we take the lower bound of this metric as the table-miss metric.
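For the reactive case, the metric can be sketched in a few lines. Packets are simplified here to protocol labels, which is an assumption for illustration; a real monitor module would parse OpenFlow Packet-In messages instead:

```python
# Sketch of the table-miss metric for reactive schemes: count Packet-In
# events, excluding LLDP, ARP and IPv6 control traffic.

EXCLUDED = {"lldp", "arp", "ipv6"}  # control traffic, not real misses

def cache_hit_rate(total_packets: int, packet_in_protos: list) -> float:
    """Hit rate = 1 - (table misses / total packets)."""
    misses = sum(1 for proto in packet_in_protos if proto not in EXCLUDED)
    return 1.0 - misses / total_packets

# 100 packets total, five Packet-In events, two of them control traffic:
events = ["tcp", "lldp", "udp", "arp", "tcp"]
print(cache_hit_rate(100, events))  # 0.97
```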

We compare our scheme with CacheFlow [7] and CAB [5] to evaluate the cache hit rate. Due to the online optimization mechanism of our scheme, we have a higher cache hit rate than CAB under a low TCAM occupancy rate (θ ≤ 10%). However, in this situation (i.e., when the number of installed flows is small), the cache hit rate of IRMS is slightly lower than CacheFlow's, since CacheFlow is a proactive rule management scheme that improves the hit rate by sacrificing flexibility. Moreover, as the number of installed rules increases, IRMS's cache hit rate gradually approaches CacheFlow's. The results in Figs. 10 and 11 show that the average cache hit rate of IRMS is above 80%, which is similar to CacheFlow and higher than CAB under a high TCAM occupancy rate. Although both CacheFlow and CAB have novel cache mechanisms, our scheme achieves a comparable cache hit rate with fewer rules installed.

Parameter sensitivity analysis: As the chunk size is an important parameter in the chunk partition algorithm, we evaluate how it affects our scheme. Fig. 12 presents the effect of tuning the chunk size on the flow setup time. The larger the chunk size, the higher the rate at which rules are cached. However, when a table-miss event occurs, a larger chunk wastes more time querying the database for an update. Thus, our scheme performs well with a moderate chunk size (i.e., 12–18 rules).
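This trade-off can be captured by a toy cost model (our illustration with invented constants, not the paper's measurements): the miss probability falls with the chunk size while the per-miss database query cost grows, so the expected setup cost is minimized at a moderate chunk size.

```python
# Toy model of the chunk-size trade-off. All constants are invented:
# miss probability ~ 1/chunk_size, and the database query cost on a
# miss is assumed to grow quadratically with the chunk size.

def expected_setup_cost(chunk_size: int) -> float:
    miss_prob = 1.0 / chunk_size
    miss_cost_ms = 9.0 + 0.04 * chunk_size ** 2
    return miss_prob * miss_cost_ms

best = min(range(2, 40), key=expected_setup_cost)
print(best)  # 15 -- a moderate chunk size minimizes the expected cost
```

With these particular constants the minimum lands inside the 12–18 range reported above, but the point is only the U-shape of the curve, not the exact numbers.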

7.4. Update mechanism evaluation

Update evaluation: Update-friendliness is also a significant goal of the IRMS design. Since IRMS uses a hybrid update strategy and enforces different policies for the node-based and path-based rules, we use the average update time to evaluate the update strategy. In the experiment, we randomly choose 15 links and set them down to generate the updated path-based rules. We also randomly change a group of rules in the database to simulate the update of node-based rules. Different from CacheFlow and CAB, the lazy update policy for the path-based rules greatly reduces the update time. The results in Fig. 13 show that, regardless of the network topology, IRMS achieves more than a 56% improvement.

8. Conclusion

In this paper, we design an intelligent rule management scheme (IRMS) for SDN that separates node-based rules from path-based rules. We label valid paths and pre-install the path-based rules. For node-based rules, we partition them into disjoint chunks and install them reactively. We keep the interaction between the controller and the switches at the network edge. We use different update policies for the two types of rules. The results of our comprehensive experiments show that our work makes a significant improvement in flow rule management for SDN.

Acknowledgments

The research is supported by the National Natural Science Foundation of China under grant 61625203, the National Key R&D Program of China under grant 2016YFC0901605, and the R&D Program of Shenzhen under grants JCYJ20170307153157440 and JCYJ20160531174259309.

References

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, OpenFlow: enabling innovation in campus networks, ACM SIGCOMM Comput. Commun. Rev. 38 (2) (2008) 69–74.
[2] H. Kim, N. Feamster, Improving network management with software defined networking, IEEE Commun. Mag. 51 (2) (2013) 114–119.
[3] C. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri, R. Wattenhofer, Achieving high utilization with software-driven WAN, in: Proceedings of ACM SIGCOMM, Hong Kong, China, 2013.
[4] M. Bredel, Z. Bozakov, A. Barczyk, H. Newman, Flow-based load balancing in multipathed layer-2 networks using OpenFlow and multipath-TCP, in: Proceedings of ACM HotSDN, Chicago, USA, 2014.
[5] B. Yan, Y. Xu, H. Xing, K. Xi, H.J. Chao, CAB: a reactive wildcard rule caching system for software-defined networks, in: Proceedings of ACM HotSDN, Chicago, USA, 2014.
[6] M. Yu, J. Rexford, M.J. Freedman, J. Wang, Scalable flow-based networking with DIFANE, in: Proceedings of ACM SIGCOMM, New Delhi, India, 2010.
[7] N. Katta, O. Alipourfard, J. Rexford, D. Walker, CacheFlow: dependency-aware rule-caching for software-defined networks, in: Proceedings of ACM SOSR, Santa Clara, CA, 2016.
[8] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, N. McKeown, Reproducible network experiments using container-based emulation, in: Proceedings of ACM CoNEXT, Nice, France, 2012, pp. 253–264.
[9] L. Wang, Q. Li, Y. Jiang, Y. Wang, R. Sinnott, J. Wu, IRMS: an intelligent rule management scheme for software defined networking, in: Proceedings of INFOCOM Workshop IECCO, Atlanta, USA, 2017.



[10] M. Casado, M.J. Freedman, J. Pettit, J. Luo, N. McKeown, S. Shenker, Ethane: taking control of the enterprise, in: Proceedings of ACM SIGCOMM, Kyoto, Japan, 2007.
[11] Q. Dong, S. Banerjee, J. Wang, D. Agrawal, Wire speed packet classification without TCAMs: a few more registers (and a bit of logic) are enough, in: Proceedings of ACM SIGMETRICS, San Diego, CA, 2007.
[12] J.-P. Sheu, Y.-C. Chuo, Wildcard rules caching and cache replacement algorithms in software-defined networking, IEEE Trans. Netw. Serv. Manage. 13 (1) (2016) 19–29.
[13] C. Yang, Y. Jiang, Y. Liu, L. Wang, CNOR: a non-overlapping wildcard rule caching system for software-defined networks, in: Proceedings of IEEE ISCC, Natal, Brazil, 2018.
[14] N. Kang, Z. Liu, J. Rexford, D. Walker, Optimizing the one big switch abstraction in software-defined networks, in: Proceedings of ACM CoNEXT, Santa Barbara, USA, 2013.
[15] M. Moshref, M. Yu, A.B. Sharma, R. Govindan, vCRIB: virtualized rule management in the cloud, in: Proceedings of USENIX NSDI, Lombard, UK, 2013.
[16] A.R. Curtis, J.C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, S. Banerjee, DevoFlow: scaling flow management for high-performance networks, in: Proceedings of ACM SIGCOMM, Toronto, CA, 2011.
[17] Y. Wang, D. Tai, T. Zhang, B. Liu, FlowShadow: keeping update consistency in software-based OpenFlow switches, in: Proceedings of IEEE IWQoS, Beijing, China, 2016.
[18] V. Heorhiadi, M.K. Reiter, V. Sekar, Simplifying software-defined network optimization using SOL, in: Proceedings of USENIX NSDI, Santa Clara, CA, 2016.
[19] B. Korte, J. Vygen, Combinatorial Optimization: Theory and Algorithms, Algorithms and Combinatorics, 2006.
[20] IBM ILOG CPLEX v12.1: User's Manual for CPLEX, International Business Machines Corporation, 2009.
[21] C.F. Bolz, A. Rigo, Memory Management and Threading Models as Translation Aspects: Solutions and Challenges, Technical Report, 2005. URL http://codespeak.net/pypy/dist/pypy/doc/index-report.html.
[22] R. Teixeira, K. Marzullo, S. Savage, G.M. Voelker, Characterizing and measuring path diversity of internet topologies, in: Proceedings of ACM SIGMETRICS, San Diego, USA, 2003.
[23] D.E. Taylor, J.S. Turner, ClassBench: a packet classification benchmark, IEEE/ACM Trans. Netw. 15 (3) (2007) 499–511.
[24] P. Biondi, Scapy, a powerful interactive packet manipulation program. URL http://www.secdev.org/projects/scapy/.

Lei Wang received the B.S. degree (2009) from Huazhong University of Science and Technology, Wuhan, China, and the M.S. degree (2013) from Beijing University of Technology, Beijing, China. He is now a Ph.D. candidate in the Graduate School at Shenzhen, Tsinghua University, Shenzhen, China. His research interests include Software Defined Networking, Network Function Virtualization and Cloud Computing.

Qing Li received the B.S. degree (2008) from Dalian University of Technology, Dalian, China, and the Ph.D. degree (2013) from Tsinghua University, Beijing, China, both in computer science and technology. He is currently an assistant researcher in the Graduate School at Shenzhen, Tsinghua University. His research interests include reliable and scalable routing of the Internet, Software Defined Networks and Information Centric Networks.

Richard Sinnott received his M.Sc. and Ph.D. degrees from the University of Stirling, in 1993 and 1997, respectively. He has been director of eResearch at the University of Melbourne since July 2010 and is Professor of Applied Computing Systems in the Computing and Information Systems Department, Melbourne School of Engineering. In these roles he is responsible for all aspects of eResearch (research-oriented IT development) at the University. He has been lead software engineer/architect on an extensive portfolio of national and international projects, with specific focus on those research domains requiring finer-grained access control (security).

Yong Jiang received his M.S. and Ph.D. degrees in computer science from Tsinghua University, Beijing, P. R. China, in 1998 and 2002, respectively. Since 2002, he has been with the Graduate School at Shenzhen of Tsinghua University, Guangdong, China, where he is currently a professor. His research interests include Internet architecture and its protocols, IP routing technology, etc.

Jianping Wu received the M.S. degree (1982) and Ph.D. degree (1997) in computer science from Tsinghua University, Beijing, China. He is now a Full Professor with the Computer Science Department, Tsinghua University. In the research areas of network architecture, high performance routing and switching, protocol testing, and formal methods, he has published more than 200 technical papers in academic journals and proceedings of international conferences.