OpenDaylight Performance Stress Test Reportv1.3: Beryllium vs Boron
Intracom Telecom SDN/NFV Lab
www.intracom-telecom.com | intracom-telecom-sdn.github.com | [email protected]
Intracom Telecom | SDN/NFV Lab © 2017
Executive Summary

In this report we present comparative stress test results for the OpenDaylight Beryllium SR0 controller against OpenDaylight Boron SR0. For these tests the NSTAT (Network Stress–Test Automation Toolkit) [1] testing platform and its external testing components (Multinet [2], OFTraf [3], MT–Cbench [4], nstat-nb-emulator [5]) have been used. The test cases presented in this report are identical to those presented in our previous performance report v1.2, so the interested reader is referred to [6] for comprehensive descriptions. As opposed to v1.2, in this report all test components have been executed within Docker containers [7] instead of KVM instances.
Contents

1 NSTAT Toolkit
2 Experimental setup
3 Switch scalability stress tests
   3.1 Idle Multinet switches
      3.1.1 Test configuration
      3.1.2 Results
   3.2 Idle MT–Cbench switches
      3.2.1 Test configuration
      3.2.2 Results
   3.3 Active Multinet switches
      3.3.1 Test configuration
      3.3.2 Results
   3.4 Active MT–Cbench switches
      3.4.1 Test configuration, ”RPC” mode
      3.4.2 Test configuration, ”DataStore” mode
      3.4.3 Results
4 Stability tests
   4.1 Idle Multinet switches
      4.1.1 Test configuration
      4.1.2 Results
   4.2 Active MT–Cbench switches
      4.2.1 Test configuration, ”DataStore” mode
      4.2.2 Results
      4.2.3 Test configuration, ”RPC” mode
      4.2.4 Results
5 Flow scalability tests
   5.1 Test configuration
   5.2 Results
      5.2.1 Add controller time/rate
      5.2.2 End–to–end flows installation controller time/rate
6 Contributors
References
1. NSTAT Toolkit
The NSTAT toolkit follows a modular and distributed architecture. With the term modular we mean that the core application works as an orchestrator that coordinates the testing
Fig. 1: NSTAT architecture. The NSTAT node (lifecycle management, orchestration, test dimensioning, monitoring, reporting, sampling & profiling) coordinates the controller node, the southbound traffic generator nodes (SCGEN-SB node 1..N) and the northbound traffic generator nodes (SCGEN-NB node 1..N), exchanging config traffic and OpenFlow traffic with them.
process. It controls the lifecycle of the different testing components, as well as the coordination and lifecycle management of the testing subject, the OpenDaylight controller. With the term distributed, we mean that each component controlled by NSTAT can run either on the same node or on different nodes, which can be either physical or virtual machines. In the latest implementation of NSTAT we introduced the use of Docker containers [7].
With the use of containers¹ we can isolate the separate components and their use of resources. In older versions of NSTAT we used CPU affinity [9] to achieve this resource isolation. The NSTAT toolkit architecture is depicted in Fig. 1.
¹The provisioning of Docker containers, along with their interconnection, is achieved with the use of 1) the docker-compose provisioning tool and 2) the pre-built Docker images present at Docker Hub [8]. All containers were running on the same physical server.
Date: 7th Feb
Fig. 2: Representation of a switch scalability test with idle Multinet and MT-Cbench switches. (a) Multinet switches sit idle during main operation; ECHO and MULTIPART messages (the vast majority of packets) dominate the OF1.0/1.3 traffic exchanged between the controller and the switches. (b) MT-Cbench switches sit idle during main operation, without sending or receiving any kind of messages apart from a few OF1.0 messages during initialization.
Table 1: Stress tests experimental setup.

Host operating system: CentOS 7, kernel 3.10.0
Nodes type: Docker containers (Docker version 1.12.1, build 23cf638)
Container OS: Ubuntu 14.04
Physical server platform: Dell R720
Processor model: Intel Xeon CPU E5–2640 v2 @ 2.00GHz
Total system CPUs: 32
CPUs configuration: 2 sockets × 8 cores/socket × 2 HW-threads/core @ 2.00GHz
Main memory: 256 GB, 1.6 GHz RDIMM
SDN controllers under test: OpenDaylight Beryllium (SR1), OpenDaylight Boron (SR0)
Controller JVM options: -Xmx8G, -Xms8G, -XX:+UseG1GC, -XX:MaxPermSize=8G
Multinet OVS version: 2.3.0
2. Experimental setup
Details of the experimental setup are presented in Table 1.
3. Switch scalability stress tests
In switch scalability tests we test the controller against different scales of OpenFlow switch networks. In order to create these networks we use either MT–Cbench [4] or Multinet [2]. MT–Cbench generates OpenFlow traffic emulating a “fake” OpenFlow v1.0 switch topology. Multinet utilizes Mininet [10] and OpenVSwitch v2.3.0 [11] to emulate distributed OpenFlow v1.3 switch topologies.
In our stress tests we have experimented with topology switches operating in two modes, idle and active: switches in idle mode do not initiate any traffic to the controller, but rather respond to messages sent by it. Switches in active mode consistently initiate traffic to the controller, in the form of PACKET_IN messages. In most stress tests, MT–Cbench and Multinet switches operate both in active and idle modes.
For more details regarding the test setup, the reader should refer to the NSTAT: OpenDaylight Performance Stress Test Report v1.2 [6].
3.1 Idle Multinet switches
3.1.1 Test configuration
• topology types: ”Linear”, ”Disconnected”
• topology size per worker node: 100, 200, 300, 400, 500, 600
• number of worker nodes: 16
• group size: 1, 5
• group delay: 5000ms
• persistence: enabled

In this case, the total topology size is equal to 1600, 3200, 4800, 6400, 8000, 9600 switches.
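The totals follow directly from the per-worker sizes, since each of the 16 worker nodes boots the same number of switches; a quick illustrative check:

```python
# Total Multinet topology size = (switches per worker node) x (worker nodes).
WORKER_NODES = 16
SWITCHES_PER_WORKER = [100, 200, 300, 400, 500, 600]

totals = [n * WORKER_NODES for n in SWITCHES_PER_WORKER]
print(totals)  # [1600, 3200, 4800, 6400, 8000, 9600]
```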
3.1.2 Results
Results from this series of tests are presented in Figs. 3(a), 3(b) for a ”Linear” topology and Figs. 4(a), 4(b) for a ”Disconnected” topology, respectively.
Fig. 3: Switch scalability stress test results for idle Multinet switches (boot-up time [s] vs number of network switches N). Network topology size scales from 1600 to 9600 switches. Topology type: Linear. (a) Group size: 1; (b) group size: 5. Boot-up time is forced to -1 when switch discovery fails.
Fig. 4: Switch scalability stress test results for idle Multinet switches (boot-up time [s] vs number of network switches N). Network size scales from 1600 to 9600 switches. Topology type: Disconnected. (a) Group size: 1; (b) group size: 5. Boot-up time is forced to -1 when switch discovery fails.
3.2 Idle MT–Cbench switches
3.2.1 Test configuration
• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 500, 1000, 2000, 4000, 8000, 16000ms
• persistence: enabled

In this case the switch topology size is equal to: 50, 100, 200, 400, 800, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000.
3.2.2 Results
Results of this test are presented in Figs. 5(a), 5(b), 6(a) and 6(b).
3.3 Active Multinet switches
3.3.1 Test configuration
• controller: with the l2switch plugin installed, configured to respond with mac-to-mac FLOW_MODs to PACKET_IN messages with ARP payload [12]
• topology size per worker node: 12, 25, 50, 100, 200, 300
• number of worker nodes: 16
• group size: 1
• group delay: 3000ms
• topology type: ”Linear”
Fig. 5: Switch scalability stress test results with idle MT–Cbench switches (boot-up time [s] vs number of network switches N). Topology size scales from 50 to 5000 switches. (a) Group delay: 0.5s; (b) group delay: 1s.
Fig. 6: Switch scalability stress test results with idle MT–Cbench switches (boot-up time [s] vs number of network switches N). Topology size scales from 50 to 5000 switches. (a) Group delay: 8s; (b) group delay: 16s.
• hosts per switch: 2
• traffic generation interval: 60000ms
• PACKET_IN transmission delay: 500ms
• persistence: disabled

Switch topology size scales as follows: 192, 400, 800, 1600, 3200, 4800.
3.3.2 Results
Results of this test are presented in Fig.8.
3.4 Active MT–Cbench switches
3.4.1 Test configuration, ”RPC” mode
• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.
3.4.2 Test configuration, ”DataStore” mode

• controller: ”DataStore” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
Fig. 7: Representation of switch scalability stress test with active (a) Multinet and (b) MT–Cbench switches. (a) The switches send OF1.3 PACKET_IN messages to the controller, which replies with OF1.3 FLOW_MODs; these, together with ECHO and MULTIPART exchanges, make up the OF1.0/1.3 traffic. (b) The switches send artificial OF1.0 PACKET_IN messages to the controller, which replies with equally artificial OF1.0 FLOW_MOD messages.
Fig. 8: Switch scalability stress test results with active Multinet switches. Comparative analysis of controller throughput [Mbytes/s] vs number of network switches N. Topology size scales from 192 to 4800 switches.
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.
3.4.3 Results
Results for the test configurations defined in Sections 3.4.1 and 3.4.2 are presented in Figs. 9(a) and 9(b), respectively.
4. Stability tests
Stability tests explore how controller throughput behaves over a large time window with a fixed topology connected to it, Figs. 10(a), 10(b). The goal is to detect performance fluctuations over time.

The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically. NSTAT reports these samples as a time series.
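The sampling procedure can be pictured as a simple loop. The helper below is an illustrative sketch only; the `read_throughput` callback is a hypothetical stand-in for NSTAT's actual measurement code:

```python
import time

def sample_throughput(read_throughput, period_s=10, num_samples=4320):
    """Collect controller response throughput (responses/s) at fixed
    intervals and return the samples as a (timestamp, value) time series.
    Illustrative sketch; `read_throughput` is a hypothetical callback."""
    series = []
    for i in range(num_samples):
        series.append((i * period_s, read_throughput()))
        time.sleep(period_s)  # wait for the next sampling point
    return series
```

With the defaults above (10 s period, 4320 samples) a run covers the 12-hour window used in these tests.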
4.1 Idle Multinet switches
The purpose of this test is to investigate the stability of the controller in serving the standard traffic requirements of a large-scale Multinet topology of idle switches.

During main test execution, Multinet switches respond to ECHO and MULTIPART messages sent by the controller at regular intervals. These message types dominate the total traffic volume during execution.

NSTAT uses oftraf [3] to measure the outgoing traffic of the controller. The metrics presented for this case are the OpenFlow packets and bytes collected by NSTAT every second, Fig. 15.

In order to push the controller performance to its limits, the controller is executed on bare metal and Multinet on a set of interconnected virtual machines.
4.1.1 Test configuration
• topology size per worker node: 200
• number of worker nodes: 16
• group size: 1
• group delay: 2000ms
• topology type: ”Linear”
• hosts per switch: 1
Fig. 9: Switch scalability stress test results with active MT–Cbench switches. Comparative analysis of controller throughput [responses/s] vs number of network switches N, with OpenDaylight running in (a) ”RPC” mode and (b) ”DataStore” mode.
Fig. 10: Representation of switch stability stress test with (a) idle Multinet and (b) active MT–Cbench switches. The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically. In (a), ECHO and MULTIPART exchanges make up the vast majority of packets; in (b), artificial PACKET_IN and FLOW_MOD messages do.
• period between samples: 10s
• number of samples: 4320
• persistence: enabled
• total running time: 12h

In this case, the switch topology size is equal to 3200.
4.1.2 Results
The results of this test are presented in Fig. 15.
4.2 Active MT–Cbench switches
In this series of experiments NSTAT uses a fixed topology of active MT–Cbench switches to generate traffic over a large time window. The switches send artificial OF1.0 PACKET_IN messages at a fixed rate to the controller, which replies with equally artificial OF1.0 FLOW_MOD messages; these message types dominate the traffic exchanged between the switches and the controller. We evaluate the controller in both ”RPC” and ”DataStore” modes.

In order to push the controller performance to its limits, all test nodes (controller, MT–Cbench) were executed on bare metal. To isolate the nodes from each other, the CPU shares feature
Fig. 11: Controller stability stress test with active MT–Cbench switches. Throughput [responses/s] vs repeat number, comparing the OpenDaylight (a) Beryllium and (b) Boron versions, with OpenDaylight running in ”DataStore” mode.
Fig. 12: Controller stability stress test with active MT–Cbench switches. Used memory [Gbytes] vs repeat number, comparing the OpenDaylight (a) Beryllium and (b) Boron versions, with OpenDaylight running in ”DataStore” mode.
of NSTAT was used.
4.2.1 Test configuration, ”DataStore” mode
• controller: ”DataStore” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.
4.2.2 Results
The results of this test are presented in Figs. 11 and 12.
4.2.3 Test configuration, ”RPC” mode
• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
Fig. 13: Controller stability stress test with active MT–Cbench switches. Throughput [responses/s] vs repeat number, comparing the OpenDaylight (a) Beryllium and (b) Boron versions, with OpenDaylight running in ”RPC” mode.
Fig. 14: Controller stability stress test with active MT–Cbench switches. Used memory [Gbytes] vs repeat number, comparing the OpenDaylight (a) Beryllium and (b) Boron versions, with OpenDaylight running in ”RPC” mode.
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.
4.2.4 Results
The results of this test are presented in Figs. 13 and 14.
5. Flow scalability tests
With flow scalability stress tests we try to investigate both capacity and timing aspects of flow installation via the controller's NB (RESTCONF) interface, Fig. 16.
This test uses the NorthBound flow emulator [5] to create flows in a scalable and configurable manner (number of flows, delay between flows, number of flow writer threads). The flows are written to the controller configuration data store via its NB interface, and then forwarded to an underlying Multinet topology as flow modifications, where they are distributed among the switches in a balanced fashion.
The test verifies the success or failure of flow operations via the controller's operational data store. An experiment is considered successful when all flows have been installed on the switches and have been reflected in the operational data store of the controller. If not all of them have become visible in the
Fig. 15: Controller 12-hour stability stress test with idle Multinet switches. Comparison between the OpenDaylight Beryllium and Boron releases for (a) outgoing packets/s and (b) outgoing kbytes/s vs repeat number.
Fig. 16: Representation of flow scalability stress test. An increasing number of NB clients (NB app_j, j = 1, 2, ...) install flows via config traffic (REST) on the switches of an underlying OpenFlow switch topology, while ECHO and MULTIPART messages dominate the southbound OF1.0/1.3 traffic.
data store within a certain deadline after the last update, the experiment is considered failed.
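The success criterion above amounts to polling the operational data store until the flow count reaches the target or the deadline expires. A minimal sketch, where `count_operational_flows` is a hypothetical helper (e.g. wrapping a RESTCONF query) and the deadline and poll interval are illustrative values, not NSTAT's actual settings:

```python
import time

def flows_confirmed(count_operational_flows, expected_flows,
                    deadline_s=240.0, poll_interval_s=5.0):
    """Poll the controller's operational data store until all flows
    become visible, or fail when the deadline expires.
    `count_operational_flows` is a hypothetical helper; the timing
    values are assumptions for illustration."""
    start = time.time()
    while time.time() - start < deadline_s:
        if count_operational_flows() >= expected_flows:
            return True   # all flows reflected in the operational data store
        time.sleep(poll_interval_s)
    return False          # deadline expired: experiment considered failed
```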
Intuitively, this test emulates a scenario where multiple NB applications, each controlling a subset of the topology², simultaneously send flows to their switches via the controller's NB interface at a configurable rate.

²Subsets are non-overlapping and equally sized.
The metrics measured in this test are:
• Add controller time (t_add): the time needed for all requests to be sent and their responses to be received (as in [9]).
• Add controller rate (r_add): r_add = N / t_add, where N is the aggregate number of flows to be installed by the worker threads.
• End-to-end installation time (t_e2e): the time from the first flow installation request until all flows have been installed and become visible in the operational data store.
• End-to-end installation rate (r_e2e): r_e2e = N / t_e2e.
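As an illustrative worked example of the two rate definitions above (the flow counts and times below are made up, not measured values):

```python
def add_rate(num_flows, t_add_s):
    """Add controller rate: r_add = N / t_add, in flows/s."""
    return num_flows / t_add_s

def e2e_rate(num_flows, t_e2e_s):
    """End-to-end installation rate: r_e2e = N / t_e2e, in flows/s."""
    return num_flows / t_e2e_s

# 10000 flows accepted by the NB interface in 8.0 s, all of them
# visible in the operational data store after 25.0 s:
print(add_rate(10000, 8.0))   # 1250.0
print(e2e_rate(10000, 25.0))  # 400.0
```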
In this test, Multinet switches operate in idle mode, without initiating any traffic apart from the MULTIPART and ECHO messages with which they reply to the controller's requests at regular intervals.
5.1 Test configuration
For both Beryllium and Boron we used the following setup:
• topology size per worker node: 1, 2, 4, 35, 70, 330
• number of worker nodes: 15
• group size: 1
• group delay: 3000ms
• topology type: ”Linear”
• hosts per switch: 1
• total flows to be added: 1K, 10K, 100K, 1M
• flow creation delay: 0ms
• flow worker threads: 5
• persistence: disabled

In this case the switch topology size is equal to: 15, 30, 60, 525, 1050, 4950.
Fig. 17: Flow scalability stress test results. Comparative performance of (a) add controller time [s] and (b) add controller rate [flows/s] vs number of switches, for N=10⁴ flow operations. Add controller time is forced to -1 when the test fails.
Fig. 18: Flow scalability stress test results. Comparative performance of (a) add controller time [s] and (b) add controller rate [flows/s] vs number of switches, for N=10⁵ flow operations. Add controller time is forced to -1 when the test fails.
5.2 Results
5.2.1 Add controller time/rate
The results of this test are presented in Figs. 17, 18 and 19.
5.2.2 End–to–end flows installation controller time/rate
The results of this experiment are presented in Figs. 20 and 21.
6. Contributors
• Nikos Anastopoulos
• Panagiotis Georgiopoulos
• Konstantinos Papadopoulos
• Thomas Sounapoglou
References
[1] “NSTAT: Network Stress Test Automation Toolkit.” https://github.com/intracom-telecom-sdn/nstat.
[2] “Multinet: Large–scale SDN emulator based on Mininet.”https://github.com/intracom-telecom-sdn/multinet.
[3] “OFTraf: pcap–based, RESTful OpenFlow traffic monitor.”https://github.com/intracom-telecom-sdn/oftraf.
Fig. 19: Flow scalability stress test results. Comparative performance of (a) add controller time [s] and (b) add controller rate [flows/s] vs number of switches, for N=10⁶ flow operations. Add controller time is forced to -1 when the test fails.
Fig. 20: Flow scalability stress test results. End-to-end flow installation rate [flows/s] vs number of switches, for (a) N=10³ and (b) N=10⁴ flow operations. The rate is forced to -1 when the test fails.
[4] “MT–Cbench: A multithreaded version of the Cbench OpenFlow traffic generator.” https://github.com/intracom-telecom-sdn/mtcbench.
[5] “NSTAT: NorthBound Flow Emulator.” https://github.com/intracom-telecom-sdn/nstat-nb-emulator.
[6] “NSTAT: Performance Stress Tests Report v1.2: Beryllium vs Lithium SR3.” https://raw.githubusercontent.com/wiki/intracom-telecom-sdn/nstat/files/ODL_performance_report_v1.2.pdf.
[7] “Docker containers.” https://www.docker.com/what-docker.
[8] “Docker–hub Intracom–repository.” https://hub.docker.com/u/intracom/.
[9] “OpenDaylight Performance: A practical, empirical guide. End-to-End Scenarios for Common Usage in Large Carrier, Enterprise, and Research Networks.” https://www.opendaylight.org/sites/www.opendaylight.org/files/odl_wp_perftechreport_031516a.pdf.
[10] “Mininet. An instant virtual network on your Laptop.” http://mininet.org/.
[11] “OpenVSwitch.” http://openvswitch.org/.
Fig. 21: Flow scalability stress test results. End-to-end flow installation rate [flows/s] vs number of switches, for (a) N=10⁵ and (b) N=10⁶ flow operations. The rate is forced to -1 when the test fails.
[12] “Generate PACKET_IN events with ARP payload.”https://github.com/intracom-telecom-sdn/multinet#generate-packet_in-events-with-arp-payload.