Towards Software Defined
Wireless Mesh Networks
Farzaneh Pakzad
B.E. (Computer Engineering, Hardware),
M.E. (Computer Engineering, Computer Systems Architecture)
A thesis submitted for the degree of Doctor of Philosophy at
The University of Queensland in 2017
School of Information Technology & Electrical Engineering
Abstract
Software Defined Networking (SDN) is a new approach to configuring and managing computer
networks, based on the core idea of separating the data plane from the control plane. Through
this separation, SDN provides a higher level of abstraction and allows better and simpler
programmability and management of networks, thereby enabling faster innovation. The benefits of SDN
have clearly been demonstrated for wired networks, in particular for wide area and data
centre networks. The goal of this research was to explore how the concepts of SDN can
be applied to wireless networks, in particular Wireless Mesh Networks (WMNs). The focus
of our research was on exploring the feasibility of SDN-based routing and load balancing
of traffic in WMNs. For this, we leveraged key SDN features such as the centralised view of
the network state, fine-grained flow-based traffic forwarding, and abstraction. Working
towards this goal, we have addressed a number of key research challenges. Firstly, we
have designed and implemented a new topology discovery mechanism with greatly reduced
overhead compared to the state-of-the-art approach. Topology discovery is an essential
service in any Software Defined Network, but an efficient mechanism with low overhead is
particularly important for typically resource constrained WMNs. Another critical component
for SDN-based routing in WMNs is link monitoring, in particular link capacity monitoring. In
contrast to wired networks, where the link capacity is typically known and constant, it can be
highly dynamic in wireless networks. We have designed, implemented and evaluated a new,
SDN-based link capacity estimation approach, based on the concept of packet pair/train
probing. Building on our topology discovery and link capacity estimation methods, we have
developed a new SDN-based routing framework for WMNs. For this, we have leveraged a
new northbound interface called SCOR (Software-defined Constrained Optimal Routing) for
QoS routing and Traffic Engineering (TE). Using the abstraction provided by a small number
of high level SCOR routing primitives, we have demonstrated the feasibility, simplicity and
efficiency of this approach for implementing relatively complex routing problems in WMNs.
Finally, we have also provided critical evaluations of new potential testbeds for the evaluation
of SDN-based WMNs. We believe this is an important contribution in its own right, since
experimental validation is a key research methodology in this context, and trust in the validity
of experimental results is absolutely critical.
Declaration by author
This thesis is composed of my original work, and contains no material previously published
or written by another person except where due reference has been made in the text. I have
clearly stated the contribution by others to jointly-authored works that I have included in my
thesis.
I have clearly stated the contribution of others to my thesis as a whole, including statisti-
cal assistance, survey design, data analysis, significant technical procedures, professional
editorial advice, and any other original research work used or reported in my thesis. The
content of my thesis is the result of work I have carried out since the commencement of my
research higher degree candidature and does not include a substantial part of work that has
been submitted to qualify for the award of any other degree or diploma in any university or
other tertiary institution. I have clearly stated which parts of my thesis, if any, have been
submitted to qualify for another award.
I acknowledge that an electronic copy of my thesis must be lodged with the University Li-
brary and, subject to the policy and procedures of The University of Queensland, the thesis
be made available for research and study in accordance with the Copyright Act 1968 unless
a period of embargo has been approved by the Dean of the Graduate School.
I acknowledge that copyright of all material contained in my thesis resides with the copyright
holder(s) of that material. Where appropriate I have obtained copyright permission from the
copyright holder to reproduce material in this thesis.
Publications during candidature
Peer-reviewed Papers:
[1] Farzaneh Pakzad, Siamak Layeghy, Marius Portmann, Evaluation of Mininet-WiFi In-
tegration via ns-3, In Proc. of the 26th International Telecommunication Networks and
Applications Conference (ITNAC), New Zealand, December 2016.
[2] Farzaneh Pakzad, Marius Portmann, Jared Hayward, Link Capacity Estimation in Wire-
less Software Defined Networks, In Proc. of the 25th International Telecommunication
Networks and Applications Conference (ITNAC), Sydney, December 2015.
[3] Farzaneh Pakzad, Marius Portmann, Wee Lum Tan, Jadwiga Indulska, Efficient topology
discovery in OpenFlow-based Software Defined Networks, Computer Communications,
Volume 77, Pages 52-61, Elsevier, September 2015.
[4] Farzaneh Pakzad, Marius Portmann, Wee Lum Tan, Jadwiga Indulska, Efficient Topol-
ogy Discovery in Software Defined Networks, In Proc. of the Eighth International Confer-
ence on Signal Processing and Communication Systems, Gold Coast, December 2014.
[5] Siamak Layeghy, Farzaneh Pakzad, Marius Portmann, A New QoS Routing North-
bound Interface for SDN, Australian Journal of Telecommunications and the Digital Econ-
omy, Volume 5, Pages 92-115, DOI http://dx.doi.org/10.18080/ajtde.v5n1.91, March 2017.
[6] Siamak Layeghy, Farzaneh Pakzad, Marius Portmann, SCOR: Constraint Programming-
based Northbound Interface for SDN, In Proc. of the 26th International Telecommunica-
tion Networks and Applications Conference (ITNAC), New Zealand, December 2016.
[7] Anees Al-Najjar, Farzaneh Pakzad, Siamak Layeghy, Marius Portmann, Link Capacity
Estimation in SDN based end-host, In Proc. of the 10th International Conference on Sig-
nal Processing and Communication Systems (ICSPCS), Gold Coast, December 2016.
[8] Talal Alharbi, Dario Durando, Farzaneh Pakzad, Marius Portmann, Securing ARP in
Software Defined Networks, In Proc. of the 41st Annual IEEE Conference on Local Com-
puter Networks (LCN), Dubai, November 2016.
[9] Talal Alharbi, Marius Portmann, Farzaneh Pakzad, The (in)security of Topology Dis-
covery in Software Defined Networks, In Proc. of the 40th Annual IEEE Conference on
Local Computer Networks (LCN), USA, October 2015.
Publications included in this thesis
Farzaneh Pakzad, Marius Portmann, Wee Lum Tan, Jadwiga Indulska, Efficient topology
discovery in OpenFlow-based Software Defined Networks, Computer Communications,
September 2015. - Incorporated as Chapter 4.
Contributor: Farzaneh Pakzad (Candidate)
    Problem identification and concept design (90%)
    Methodology design (80%)
    Experimental evaluation (100%)
    Paper writing and editing (80%)
Contributor: Marius Portmann
    Problem identification and concept design (10%)
    Methodology design (10%)
    Paper writing and editing (10%)
Contributor: Wee Lum Tan
    Methodology design (10%)
    Paper writing and editing (5%)
Contributor: Jadwiga Indulska
    Paper writing and editing (5%)
Farzaneh Pakzad, Marius Portmann, Jared Hayward, Link Capacity Estimation in Wire-
less Software Defined Networks, In Proc. of the 25th International Telecommunication
Networks and Applications Conference (ITNAC), Sydney, December 2015. - Incorporated as
Chapter 5.
Contributor: Farzaneh Pakzad (Candidate)
    Problem identification and concept design (80%)
    Methodology design (80%)
    Experimental evaluation (100%)
    Paper writing and editing (80%)
Contributor: Marius Portmann
    Problem identification and concept design (20%)
    Methodology design (10%)
    Paper writing and editing (20%)
Contributor: Jared Hayward
    Methodology design (10%)
Farzaneh Pakzad, Siamak Layeghy, Marius Portmann, Evaluation of Mininet-WiFi Integra-
tion via ns-3, In Proc. of the 26th International Telecommunication Networks and Applica-
tions Conference (ITNAC), New Zealand, December 2016. - Incorporated as a section in
Chapter 6.
Contributor: Farzaneh Pakzad (Candidate)
    Problem identification and concept design (80%)
    Methodology design (80%)
    Experimental evaluation (100%)
    Paper writing and editing (75%)
Contributor: Siamak Layeghy
    Methodology design (10%)
    Paper writing and editing (5%)
Contributor: Marius Portmann
    Problem identification and concept design (20%)
    Methodology design (10%)
    Paper writing and editing (20%)
Siamak Layeghy, Farzaneh Pakzad, Marius Portmann, SCOR: Constraint Programming-
based Northbound Interface for SDN, In Proc. of the 26th International Telecommunica-
tion Networks and Applications Conference (ITNAC), New Zealand, December 2016. - Incor-
porated as a section in Chapter 7.
Contributor: Farzaneh Pakzad (Candidate)
    Problem identification and concept design (30%)
    Methodology design (30%)
    Paper writing and editing (30%)
Contributor: Siamak Layeghy
    Problem identification and concept design (60%)
    Methodology design (60%)
    Experimental evaluation (100%)
    Paper writing and editing (60%)
Contributor: Marius Portmann
    Problem identification and concept design (10%)
    Methodology design (10%)
    Paper writing and editing (10%)
Contributions by others to the thesis
A/Prof Marius Portmann, Mr Siamak Layeghy, and Prof Jadwiga Indulska provided invaluable
guidance and feedback throughout the work presented in this thesis, and assisted in concept
development, system design, and thesis revision.
Statement of parts of the thesis submitted to qualify for the award of another degree
None.
Acknowledgements
I would like to express the deepest and most sincere gratitude to my advisory team, A/Prof.
Marius Portmann and Prof. Jadwiga Indulska, for all the invaluable guidance and advice they
have provided to me during my PhD studies. Without their encouragement and endless
support, I would not have been able to complete this journey. In particular, I thank my
principal supervisor, A/Prof. Marius Portmann for the space and freedom I have needed
to work, and for the continual support and guidance he has given me from day one. He
helped me develop my logical thinking, and his encouragement throughout my PhD has been
invaluable. I would like to thank my committee chair, Prof. Neil Bergmann, for his guidance
and support at each milestone. Special thanks also go to my colleague, Mr. Siamak Layeghy
for having valuable discussions and providing helpful feedback and assistance during my
PhD, as well as his priceless friendship. Thanks also to my other colleagues, Mr. Talal Alharbi
and Mr. Anees Al-Najjar, for their helpful feedback during our group meetings as well as
their valuable friendship. I would like to thank all the staff of the School of ITEE for their
kind assistance. I would like to acknowledge the financial support through an "Australian
Government Research Training Program Scholarship" and the University of Queensland for
the tuition fee and living allowance. I am very grateful for the chance that was given to me
to complete my studies. Thanks to the University of Queensland Graduate School for the
Graduate School International Travel Award (GSITA). I would also like to thank Prof. Walid
Dabbous, Prof. Thierry Turletti and all of their group members for hosting my visit to the
Diana project-team at INRIA, Sophia Antipolis, France. My gratitude and appreciation go to Jill
Redmyre, my lovely landlady and friend, for helping me proofread my thesis and for being
with me through the whole journey of my PhD, listening to me, advising me, and for all the
happy memories. Thanks to all my friends and colleagues for their support and friendship
throughout the course of my PhD. I am so lucky to have lovely friends like you. Last but not
least I would like to thank my family who supported me in every moment of my life. I would
like to dedicate this work to my beloved parents and my lovely husband, Abuzar, who have
always been supportive throughout the course of this journey. I love you a lot mother and
father and I owe you a lot for your unconditional love, support and encouragement from a
long distance. Abuzar, you are a great partner, supporter and advisor. Thank you with all my
heart and soul.
Keywords
software defined networking, sdn, wireless mesh networks, wmns, topology discovery, link
capacity estimation, testbed evaluation, mininet-ns3-wifi, wireless routing protocol, sdn-
based wmn routing
Australian and New Zealand Standard Research Classifications (ANZSRC)
ANZSRC code: 100503, Computer Communications Networks, 40%
ANZSRC code: 100510, Wireless Communications, 30%
ANZSRC code: 080503, Networking and Communications, 30%
Fields of Research (FoR) Classification
FoR code: 0805, Distributed Computing, 70%
FoR code: 1005, Communications Technologies, 30%
Table of Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Research Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Topology Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Link Capacity Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Testbed Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.4 SDN-based Routing Framework . . . . . . . . . . . . . . . . . . . . . 6
1.3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Background 9
2.1 Software Defined Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 OpenFlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Wireless Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Routing in WMNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.1 Proactive Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.2 Reactive Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.3 Hybrid Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.4 Routing Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.5 Key WMN Routing Protocols . . . . . . . . . . . . . . . . . . . . . . . 19
3 Literature Review 22
3.1 Software Defined Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1 Software Defined Cellular Networks . . . . . . . . . . . . . . . . . . . 23
3.1.2 Software Defined Wireless Sensor Networks . . . . . . . . . . . . . . 25
3.1.3 Software Defined Wireless Local Area Networks . . . . . . . . . . . . 27
3.1.4 Software Defined Wireless Mesh Networks . . . . . . . . . . . . . . . 28
4 Topology Discovery 36
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2 SDN Topology Discovery - Current Approach . . . . . . . . . . . . . . . . . . 37
4.2.1 Controller Overhead of OFDP . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 Proposed Improvement - OFDPv2 . . . . . . . . . . . . . . . . . . . . . . . . 41
4.3.1 OFDPv2-A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3.2 OFDPv2-B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.4.2 Number of Packet-Out Control Messages . . . . . . . . . . . . . . . . 48
4.4.3 Control Traffic Overhead . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.4 Impact on Controller CPU Load . . . . . . . . . . . . . . . . . . . . . . 52
4.5 Testbed Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.5.1 OFELIA Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.6 Mininet-ns3-WiFi Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5 Link Capacity Estimation 62
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.3 Packet Pair Probing in SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.3.1 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.4 Packet Train Probing in SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4.1 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.4.2 Impact of Cross Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6 Testbed Evaluation 78
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.2 Mininet-ns3-WiFi Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.2.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2.2 Integration of ns-3 into Mininet . . . . . . . . . . . . . . . . . . . . . . 81
6.2.3 Experimental Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2.4 Single Link Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2.5 Link Interference Scenarios . . . . . . . . . . . . . . . . . . . . . . . . 85
6.2.6 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.3 R2Lab Testbed Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.3.1 Related Work - Wireless Testbeds . . . . . . . . . . . . . . . . . . . . 95
6.3.2 R2Lab Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.3.3 WMN Routing Experiments . . . . . . . . . . . . . . . . . . . . . . . 99
6.3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7 SDN-based WMN Routing 109
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2 Software-defined Constrained Optimal Routing (SCOR) . . . . . . . . . . . . 111
7.2.1 Background: Constraint Programming . . . . . . . . . . . . . . . . . . 112
7.2.2 SCOR Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.2.3 SCOR Predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.2.4 SCOR Predicates Implementation . . . . . . . . . . . . . . . . . . . . 117
7.2.5 SCOR Framework Implementation in POX SDN Controller . . . . . . . 118
7.3 SDN-based WMN Routing Use Case 1: Least Cost Path Routing . . . . . . . 120
7.3.1 Experiment Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
7.4 SDN-based WMN Routing Use Case 2: Maximum Residual Capacity Routing 127
7.4.1 Experiment Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8 Conclusion 138
List of Figures
1.1 SDN-based Routing Framework . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 Software-Defined Network Architecture[109] . . . . . . . . . . . . . . . . . . . 11
2.2 Flow Table Entry in OpenFlow 1.0 [109] . . . . . . . . . . . . . . . . . . . . . 13
2.3 A typical Wireless Mesh Network Architecture [31] . . . . . . . . . . . . . . . 15
4.1 LLDP Frame Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Basic OFDP Example Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Tree Topology with depth 3 and fanout 3 . . . . . . . . . . . . . . . . . . . . . 47
4.4 Fat Tree Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.5 Number of Packet-Out Messages . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.6 Bandwidth Usage of Topology Discovery . . . . . . . . . . . . . . . . . . . . . 51
4.7 Cumulative CPU Time of Topology Discovery . . . . . . . . . . . . . . . . . . 53
4.8 Cumulative CPU Time of Topology Discovery . . . . . . . . . . . . . . . . . . 54
4.9 OFELIA Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.10 Cumulative CPU Time of Topology Discovery in OFELIA Topology . . . . . . 56
4.11 Cumulative CPU Time of Topology Discovery in OFELIA and Mininet . . . . . 57
4.12 Mesh Topology in Mininet-ns3-WiFi . . . . . . . . . . . . . . . . . . . . . . . 58
4.13 Cumulative CPU Time of Topology Discovery for Mesh Topology in Mininet-
ns3-WiFi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.14 Cumulative CPU Time of Topology Discovery in Mininet-ns3-WiFi and Mininet 60
5.1 Basic Network Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.2 Link Capacity Estimation using Packet Pair Probing . . . . . . . . . . . . . . . 69
5.3 Link Capacity Estimation using Packet Train Probing (T = 40) . . . . . . . . . 71
5.4 Impact of Train Length T (d = 0m) . . . . . . . . . . . . . . . . . . . . . . . . 72
5.5 Estimation RMSE and Overhead as a Function of Train Length (T) . . . . . . 73
5.6 Impact of Forward Cross Traffic (d = 0) . . . . . . . . . . . . . . . . . . . . . 74
5.7 Impact of Reverse Cross Traffic (d = 0) . . . . . . . . . . . . . . . . . . . . . 74
5.8 Impact of Reverse Cross Traffic after Compensation (d = 0) . . . . . . . . . . 76
6.1 Experimental Platforms for Wireless Networks . . . . . . . . . . . . . . . . . 79
6.2 Integration of Mininet and ns-3 [78] . . . . . . . . . . . . . . . . . . . . . . . . 82
6.3 Basic Scenario: Single Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.4 Throughput vs. Distance in Mininet-ns3-WiFi . . . . . . . . . . . . . . . . . . 84
6.5 Throughput vs. Distance in ns-3 . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.6 Sender Interference Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.7 Sender Interference Throughput Measurements . . . . . . . . . . . . . . . . . 86
6.8 Receiver Interference Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.9 Receiver Interference Throughput Measurements . . . . . . . . . . . . . . . . 88
6.10 Scalability Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.11 Throughput and CPU Load vs. Number of Links (n) . . . . . . . . . . . . . . . 90
6.12 RTT and CPU Load vs. Number of Links (n) . . . . . . . . . . . . . . . . . . . 90
6.13 CCDF of RTT Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.14 Ground Plan Layout of Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.15 R2Lab Located in an Anechoic Chamber . . . . . . . . . . . . . . . . . . . . 97
6.16 Our Customised Mesh Topology . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.17 Throughput vs. Distance over Different Number of Hops . . . . . . . . . . . . 99
6.18 Established Route Between Node 1 as Source and Other Nodes as Destinations . . 101
6.19 Distribution of RTT Values without Interference . . . . . . . . . . . . . . . . . 102
6.20 Established Route Between Node 1 and Other Nodes After Applying Interfer-
ence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.21 Established Route Between Node 1 and Other Nodes After Applying Interfer-
ence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.22 Distribution of RTT Values with Interference . . . . . . . . . . . . . . . . . . . 106
6.23 PDR Value with Interference for Node 1 as Source and Other Nodes as Des-
tinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.1 SDN-based WMN Routing Framework . . . . . . . . . . . . . . . . . . . . . . 114
7.2 Wireless Real Topology with Delay . . . . . . . . . . . . . . . . . . . . . . . . 123
7.3 Comparison of SDN-based Minimum Delay Routing and Shortest Path Rout-
ing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.4 Distribution of RTT Reduction of SDN-based Routing and Shortest Path Rout-
ing per Each Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.5 Cumulative Distribution Function of Average RTT Values . . . . . . . . . . . . 127
7.6 Paths found for three concurrent flows from node 1 to node 7 using
a) non-disjoint, and b) disjoint . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.7 Wireless Mesh Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.8 Aggregate Throughput vs. Number of Flows- Wireless Mesh Topology . . . . 132
7.9 Wireless Cube Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.10 Aggregate Throughput vs. Number of Flows- Wireless Cube Topology . . . . 134
7.11 Wireless Real Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.12 Aggregate Throughput vs. Number of Flows- Wireless Real Topology . . . . 136
List of Tables
1.1 Software used in Implementation and Experiments . . . . . . . . . . . . . . . 8
3.1 Software Defined Cellular Networks Solutions . . . . . . . . . . . . . . . . . . 25
3.2 Software Defined Sensor Networks Solutions . . . . . . . . . . . . . . . . . . 27
3.3 Software Defined Mesh Networks Solutions . . . . . . . . . . . . . . . . . . . 35
4.1 Software used in Implementation and Experiments . . . . . . . . . . . . . . . 46
4.2 Example Network Topologies and Key Parameters . . . . . . . . . . . . . . . 47
4.3 Number of LLDP Packet-Out Control Messages . . . . . . . . . . . . . . . . . 49
5.1 Software used in Implementation and Experiments . . . . . . . . . . . . . . . 68
6.1 Software used in Implementation and Experiments . . . . . . . . . . . . . . . 83
6.2 Host Distance Mapping for each OFDM Rate . . . . . . . . . . . . . . . . . . 87
List of Abbreviations
The following abbreviated terms are provided for reference throughout the thesis.
AODV - Ad hoc On-Demand Distance Vector
AP - Access Points
BATMAN - Better Approach to Mobile Ad-hoc Network
BGP - Border Gateway Protocol
BSSID - Basic Service Set Identifier
CCDF - Complementary Cumulative Distribution Function
CDF - Cumulative Distribution Function
CP - Constraint Programming
CSMA/CA - Carrier Sense Multiple Access/Collision Avoidance
CSP - Constraint Satisfaction Problem
DDR - Distributed Dynamic Routing
DSDV - Destination-Sequenced Distance Vector
DSR - Dynamic Source Routing
ETX - Expected Transmission Count
HSR - Hierarchical State Routing
IGRP - Interior Gateway Routing Protocol
INPP - In-Network Packet Processing layer
INRIA - Institut National de Recherche en Informatique et en Automatique
LLDP - Link Layer Discovery Protocol
LPM - Link Path Membership
LTE - Long-Term Evolution
LVAP - Light Virtual Access Points
MAC - Media Access Control
NOS - Network Operating System
OFDM - Orthogonal Frequency Division Multiplexing
OFDP - OpenFlow Discovery Protocol
OLSR - Optimised Link State Routing
ONF - Open Networking Foundation
OP - Optimisation Problem
OS - Operating System
OSPF - Open Shortest Path First
OVS - Open vSwitch
PCEP - Path Computation Element Protocol
PDR - Packet Delivery Ratio
PHY - Physical Layer
QoS - Quality of Service
RAN - Radio Access Network
RMSE - Root Mean Square Error
RIP - Routing Information Protocol
RTT - Round Trip Time
SCOR - Software-defined Constrained Optimal Routing
SDN - Software Defined Networking
SDR - Software Defined Radio
SDWN - Software Defined Wireless Network
SLoPS - Self-Loading Periodic Streams
SNMP - Simple Network Management Protocol
TC - Traffic Control tool
TE - Traffic Engineering
TCAM - Ternary Content Addressable Memory
TCP - Transmission Control Protocol
TLS - Transport Layer Security
TOPP - Trains of Packet Pairs
TORA - Temporally Ordered Routing Algorithm
UDP - User Datagram Protocol
USRP - Universal Software Radio Peripheral
VLAN - Virtual Local Area Network
VPS - Variable Packet Size
WAN - Wide Area Network
WCNs - Wireless Cellular Networks
WiFi - Wireless Fidelity
WLANs - Wireless Local Area Networks
WMNs - Wireless Mesh Networks
WMSDN - Wireless Mesh Software Defined Networks
WRP - Wireless Routing Protocol
WSNs - Wireless Sensor Networks
ZHLS - Zone-based Hierarchical Link State
ZRP - Zone Routing Protocol
Chapter 1
Introduction
1.1 Motivation
In traditional networks, the management and configuration of network elements such as routers
and switches are often complex and tedious, relying on relatively low-level and vendor-specific
commands. Switches and routers need to be configured individually, and there is no
centralised point via which the network as a whole can be configured or programmed.
The so-called control plane, which determines how network packets are forwarded, is
distributed and integrated with all the forwarding elements (e.g. routers and switches), for
example via routing protocols. This makes it hard to configure the network and to introduce
new services [82, 79, 24, 63]. Software Defined Networking (SDN) has emerged as a new
networking concept with the key idea of removing the control intelligence from forwarding
elements such as switches and routers, and concentrating it at a logically centralised node,
the SDN controller. Concentrating the control functionality at a centralised entity has a number
of benefits. It provides a global view of the network state and allows globally optimal
decisions to be made, for example in routing.
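To make this point concrete, the following minimal Python sketch illustrates how a controller that holds a global link-cost map can compute a globally optimal route as a direct graph computation. The topology and link costs here are hypothetical, and `shortest_path` is an illustrative helper, not part of any particular controller platform.

```python
import heapq

def shortest_path(topology, src, dst):
    """Compute a least-cost path over the controller's global view.

    topology: dict mapping node -> {neighbour: link_cost}.
    Returns (total_cost, [src, ..., dst]) or (inf, []) if unreachable.
    """
    # Dijkstra's algorithm: the centralised controller can run this directly,
    # because it holds the state of every link in the network.
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in topology.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return float('inf'), []

# A small hypothetical mesh: the controller's topology database.
topo = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 1, 'D': 5},
    'C': {'A': 4, 'B': 1, 'D': 1},
    'D': {'B': 5, 'C': 1},
}
cost, path = shortest_path(topo, 'A', 'D')
print(cost, path)  # 3 ['A', 'B', 'C', 'D']
```

A distributed protocol must converge to the same result through message exchange between nodes; with a global view, the computation is a single local procedure at the controller.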
SDN also provides a higher layer of abstraction, where the complexity of the distributed
nature of the network is hidden from the network applications. Furthermore, the Network
Operating System (NOS) provides a logically global view of the network to the network
applications, along with an API to program the network. Abstraction, as has been
demonstrated in other areas such as software engineering [90, 131], is a very powerful concept
that leads to reduced complexity and increased innovation.
With SDN, the network becomes more programmable, enabling a higher rate of innovation.
New network services, applications and policies can simply be implemented via an
application running on the controller (or network operating system), which controls the
forwarding elements (data plane) via appropriate abstractions and a well-defined API, such as
OpenFlow [96]. SDN has been very successful for practical wired networks, in academia as
well as in industry, improving flexibility, simplicity, performance and efficiency, and reducing
cost. SDN has been especially successful for data centre networks [72].
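The flow-based forwarding model behind this programmability can be sketched as follows. The toy Python model below mimics, purely for illustration, how an OpenFlow-style flow table with priorities and wildcarded match fields selects an action per packet; it is not the OpenFlow wire protocol, and all field names, ports and addresses are made up.

```python
# A toy model of an OpenFlow-style flow table: each entry has a priority,
# a match (field -> required value; absent fields are wildcards) and an action.
flow_table = [
    {'priority': 10, 'match': {'ip_dst': '10.0.0.2', 'tcp_dst': 80},
     'action': 'forward:port2'},
    {'priority': 5,  'match': {'ip_dst': '10.0.0.2'},
     'action': 'forward:port3'},
    {'priority': 0,  'match': {},  # table-miss entry
     'action': 'send_to_controller'},
]

def lookup(packet):
    """Return the action of the highest-priority entry matching the packet."""
    for entry in sorted(flow_table, key=lambda e: -e['priority']):
        if all(packet.get(f) == v for f, v in entry['match'].items()):
            return entry['action']

# Web traffic to 10.0.0.2 takes one path, other traffic to the same host another:
print(lookup({'ip_dst': '10.0.0.2', 'tcp_dst': 80}))  # forward:port2
print(lookup({'ip_dst': '10.0.0.2', 'tcp_dst': 22}))  # forward:port3
print(lookup({'ip_dst': '10.0.0.9'}))                 # send_to_controller
```

Because matches can include transport-layer fields, a controller application can steer individual flows rather than only whole destinations, which is the basis of the fine-grained traffic control discussed above.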
A good example of the potential of SDN is Google's deployment of an SDN-based Wide
Area Network (WAN) connecting its data centres globally, carrying terabits per second of
aggregate traffic [72]. Replacing and complementing traditional distributed routing protocols
with SDN increased network efficiency and link utilisation from 30-40% to close to 100%,
resulting in significant performance improvements and cost savings. The main benefits came
from the centralised view of the network and the high degree of programmability of SDN,
which gave more fine-grained and agile control over the forwarding of data flows and allowed
better scheduling and load balancing, resulting in better utilisation of the available network
resources.
The main goal of this research is to apply the concept of SDN to wireless networks, with the
focus on wireless multi-hop networks or Wireless Mesh Networks (WMNs). WMNs have
great potential for a wide range of application scenarios, such as public safety, transportation,
mining, enterprise networks, etc. The main benefit of WMNs over traditional wireless
networks is that they do not need a wired backbone network, and can therefore be deployed
more quickly and at a lower cost.
The hypothesis is that the benefits of SDN achieved in wired networks can also be achieved
in Wireless Mesh Networks, at least partially, resulting in performance improvement, in-
creased programmability and reduced complexity. We focus on infrastructure WMNs, where
mesh nodes can be considered as static. This allows us to assume a reliable control chan-
nel between the SDN controller and the mesh nodes (SDN switches). This might not be
practical in an ad-hoc mesh network consisting of highly mobile nodes.
To a large extent, the performance of WMNs is determined by how packets are routed,
which in traditional networks is implemented via distributed routing protocols. Therefore,
routing will be a main focus of this thesis.
Traditional WMN routing protocols such as Optimized Link State Routing (OLSR) [36] and
Ad Hoc On-Demand Distance Vector (AODV) [117] are not able to make optimal use of the
available network resources and optimally route traffic flows across the network, partially
due to the lack of a global view of the network state, as well as the lack of support for fine
grained flow routing. Both of these limitations can be overcome by applying the SDN paradigm.
In addition, traditional WMN routing protocols are very inflexible when it comes to providing
different types of routing approaches. Very few WMN routing protocols provide support for
different routing metrics, and it is impossible or very difficult to modify the routing behaviour
of a WMN routing protocol, e.g. to cater for different QoS requirements or different network
scenarios. In this thesis, we present an SDN-based routing framework that aims to make
WMNs more programmable, via a simple, high-level interface.
However, current SDN solutions developed for wired networks cannot be directly applied to
WMNs, as we will discuss in the following. A key aspect of SDN is the logically centralised
control of network operation. For this, the SDN controller needs to have a global view of the
network state. A critical part of this is the network topology, and all current SDN controller
platforms implement a topology discovery service based on the same approach. The prob-
lem is that this approach is highly inefficient and therefore not suitable for wireless networks.
To implement SDN-based routing in WMNs, we also need to know the characteristics of the
links in the network, most importantly the link capacity. In traditional SDNs applied to wired
networks this is not a problem, since the capacity of the wired links is known and static. In
contrast, the capacity of wireless links can vary greatly, due to a number of external factors,
e.g. interference.
Consequently, we need to address these limitations and implement both a topology discov-
ery mechanism as well as a link capacity estimation method for SDN-based WMNs. Figure
1.1 shows our SDN-based routing framework for WMNs, including the "Topology Discovery"
and "Wireless Link Monitoring" modules. The information gathered by these modules can
then be used by routing and load balancing applications.

Figure 1.1: SDN-based Routing Framework (application layer: routing and load balancing
network application; control layer: Network Operating System with "Topology Discovery" and
"Wireless Link Monitoring" modules, connected via the northbound and southbound interfaces;
infrastructure layer: simple packet forwarding hardware)

In the following section, we list
these research challenges and summarise our corresponding research contributions.
1.2 Research Contributions
1.2.1 Topology Discovery
As mentioned above, topology discovery is an essential service in SDN, and is important
for routing, load balancing, and other networking applications. Due to the typical resource
constraints of WMNs compared to wired networks, e.g. WANs or data centre networks, it is
important that the topology discovery service is implemented as efficiently as possible.
In this thesis, we present a new SDN topology discovery approach that is significantly more
efficient than the current state-of-the-art. Our approach achieves up to a 40% reduction in
controller load and up to an 80% reduction in network traffic overhead compared to the
current approach, while delivering identical discovery functionality. While this increased
efficiency is particularly critical in wireless networks, the benefits of our new approach can
also be realised in wired SDNs.
1.2.2 Link Capacity Estimation
To perform optimal routing, an SDN controller needs to know the capacity of the network
links. We have developed an efficient mechanism that can estimate the capacity of all the
links, and that also provides other link information, such as delay.
While a wide range of link capacity estimation approaches exist for general wireless net-
works, such as packet pair probing [64], none of them can be directly applied to SDN-based
WMNs. Our proposed link capacity estimation method is an adaptation of the well-known
technique of packet pair/train probing. Our approach is a pure SDN implementation, which
is fully compatible with the OpenFlow southbound interface. Using SDN-specific features,
we have been able to overcome the traditionally poor estimation accuracy of packet pair/train
probing under heavy cross traffic. The evaluation of our method, implemented in the Ryu
SDN controller platform, shows very good accuracy.
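The packet pair/train principle that our method adapts can be illustrated with a minimal, self-contained Python sketch. This is illustrative only, not our OpenFlow-based implementation; the function names and the example numbers are assumptions:

```python
import statistics

def packet_pair_capacity(packet_size_bits, dispersion_s):
    """Estimate bottleneck link capacity (bit/s) from the dispersion
    (inter-arrival gap) of two back-to-back probe packets."""
    if dispersion_s <= 0:
        raise ValueError("dispersion must be positive")
    return packet_size_bits / dispersion_s

def packet_train_capacity(packet_size_bits, arrival_times_s):
    """Packet-train variant: take the median of the per-pair estimates
    to reduce the impact of cross traffic on individual gaps."""
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return statistics.median(packet_size_bits / g for g in gaps if g > 0)

# 1500-byte probes arriving 1 ms apart indicate a ~12 Mbit/s bottleneck
print(packet_pair_capacity(1500 * 8, 0.001))
```

The median in the train variant hints at why trains are more robust than single pairs: a gap stretched by cross traffic becomes an outlier rather than the estimate itself.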
1.2.3 Testbed Evaluation
In order to do research in SDN and WMNs, it is important to have suitable testbed environ-
ments. In this thesis, we have evaluated key testbed platforms for this purpose.
Mininet [85], a Linux-based network emulator, is widely used for Software Defined Network
experiments, due to its in-built support for OpenFlow switches. However, Mininet currently
only supports very basic emulation of wireless links. A recent work has addressed this
limitation by using the real-time feature of the ns-3 network simulator and integrating its
IEEE 802.11 channel emulation with Mininet. We refer to this hybrid testbed as
Mininet-ns3-WiFi [78]. While this approach has great potential to serve as an experimental platform,
in particular for software defined wireless networks, it has not been extensively evaluated in
terms of experiment result accuracy and fidelity. This is critical for any system that integrates
simulation with real-time components. Our research presented in this thesis discusses both
the potential and limitations of this new testbed. We further developed a new low cost method
that gives an experimenter an indication about the fidelity and trustworthiness of the results.
Furthermore, we also evaluated the R2Lab wireless testbed platform [67] at INRIA Sophia-
Antipolis, France. This testbed has 37 customisable wireless devices in an anechoic cham-
ber for reproducible research in wireless WiFi and 4G/5G networks. This testbed has been
very recently launched, and our work presents the first initial evaluation of the testbed for
wireless multi-hop experiments, using traditional WMN routing protocols. Our results
demonstrate the potential for SDN experiments. Due to time constraints, more detailed
studies of this remain as future work.
1.2.4 SDN-based Routing Framework
The previous research contributions address the key components required to implement
routing based on SDN in Wireless Mesh Networks, i.e. the "Topology Discovery" and
"Wireless Link Monitoring" modules.
We have integrated these components in a complete SDN-based routing framework for
WMNs, as illustrated in Figure 1.1. By using SCOR, a new constraint programming-based
northbound interface, we show how complex routing problems can be implemented relatively
easily and efficiently. While the author of this thesis has contributed towards the develop-
ment of SCOR, the SCOR platform itself is not claimed as a contribution of the thesis. The
key aspect here is the use of the power of abstraction, which is one of the main benefits of
SDN. Via a full prototype implementation and two practical use cases, we demonstrate the
feasibility and performance of this new approach to routing in WMNs.
1.3 Methodology
The methodology used for this research is largely experimental. We have implemented a
prototype of all the key components required to implement SDN-based routing in Wireless
Mesh Networks, as described in the previous section. This section gives an overview of the
key tools, components, platforms and testbeds that were used for our experiments.
A basis for our prototype implementation is the SDN controller platform (or Network Oper-
ating System). There exist a large number of SDN controller platforms, such as NOX [52],
POX [94], Ryu [120], Trema [121], FloodLight [45], and Beacon [46]. We chose POX and
Ryu for our prototype implementations. They are based on Python, have all the required
features, and are widely used in the research community because of their modularity, ease
of use, support and quality. In this research, the most recent branch of POX, called dart, is
used [94].
As a platform for our experiments, we have used Mininet, a Linux-based network emula-
tor [85]. Mininet can create a network of virtual SDN switches and hosts, connected via
virtual links, supporting a wide range of topologies and communication scenarios. Mininet
uses Linux namespaces to achieve lightweight virtualisation. Mininet is scalable and sup-
ports a large number of nodes on a single PC. In contrast to network simulators, the benefit
of Mininet is that it runs real networking code, which can be easily migrated to a real testbed.
We further used Open vSwitch [10], a popular virtual (software based) SDN switch with
support for OpenFlow. In addition to emulation based experiments, we have also performed
some limited experiments on a real SDN testbed, called OFELIA [37]. OFELIA is a federated
SDN testbed, distributed among a number of locations in Europe. While the use of the
OFELIA testbed has been challenging, it has allowed the validation of some of our initial
emulation based results on topology discovery.
We have also used the R2Lab platform [67], located at INRIA Sophia-Antipolis, France, with
37 customisable wireless devices in an anechoic chamber. Table 1.1 provides a summary of
the key software tools and testbeds used for our prototype implementation and experiments.
Table 1.1: Software used in Implementation and Experiments
Software Function Version
Mininet [85] Network Emulator 2.0.0 - 2.2.0
ns-3 [106] 802.11 Link Emulation ns-3.25
Open vSwitch [10] Virtual SDN Switch 1.4.3
POX [94] SDN Controller Platform dart branch
Ryu [120] SDN Controller Platform 3.19
Oracle VM VirtualBox x86 Virtualization Software 4.3.10
Linux (Ubuntu) Host Operating System 12.10 - 14.04
Python Programming Language 2.7
OFELIA [37] Real SDN Testbed –
R2Lab [67] Real Wireless Testbed –
Iperf [2] Throughput Measurement 2.05
1.4 Thesis Structure
The structure of the remainder of this thesis is as follows:
• Chapter 2 provides the relevant background on Software Defined Networks, OpenFlow,
Wireless Mesh Networks and routing in WMNs.
• Chapter 3 critically reviews the works most closely related to this research, which
consider the application of Software Defined Networking to Wireless Networks and
WMNs.
• Chapter 4 presents our improved and efficient approach to SDN topology discovery.
• Chapter 5 presents our SDN-based link capacity estimation approach.
• Chapter 6 provides a systematic evaluation of the Mininet-ns3-WiFi platform, and an
experimental validation of R2Lab, a new wireless testbed, with regard to its use for
wireless multi-hop experiments.
• Chapter 7 presents our SDN-based routing framework for WMNs, and compares its
performance with traditional WMN routing algorithms.
• Chapter 8 concludes the thesis and points to potential future work.
Chapter 2
Background
2.1 Software Defined Networking
Software Defined Networking (SDN) has emerged as a new networking paradigm to simplify
network management and to increase the flexibility of the network. By using a higher level of
abstraction together with clearly defined interfaces, it is easier to program the network using
high level languages [82, 24, 63]. It aims to allow faster innovation compared to traditional
networks. SDN has recently gained tremendous momentum, both in terms of commercial
activity and academic research [95, 48, 9].
SDN introduces the concept of separating the intelligence of the network from forwarding el-
ements (e.g. routers and switches) and concentrates it in a logically centralised entity called
the SDN controller. The controller does not need to be physically centralised, which would
cause scalability and reliability problems 1 [82, 79, 24]. To address these issues, researchers
have proposed physically distributed, but logically centralised, SDN controllers, such as the
Onix system [81], which can achieve a high degree of reliability through replication of
physical nodes.
The concept of centralised control means that network applications such as routing
do not need to deal with the distributed nature of the underlying physical network. A so-called
Network Operating System (NOS) hides and deals with the complexity of the distributed
1 Whether some control intelligence, and how much of it, should remain in SDN switches is an issue of ongoing debate between the various SDN proponents.
nature of the physical network, and provides an abstraction of a network graph to the network
applications.
By removing the control intelligence (e.g. routing protocols) from forwarding network devices
such as routers and switches, they become simpler and cheaper. Centralised control with a
global view of the network state also makes network configuration and management much
simpler.
Deploying new network services in a traditional IP network typically involves updating and
reconfiguring every individual forwarding element. In contrast, with SDN, new network
services can be implemented and deployed by simply running a software application on the
SDN controller [82, 79, 24].
With SDN, the network becomes much more programmable and it enables a higher rate of
innovation. New network services, applications and policies can simply be implemented via
an application running on the controller, which controls the forwarding elements (data plane)
via appropriate abstractions and a well-defined API, such as OpenFlow [96]. By installing
the appropriate rules, a controller application can program SDN switches to perform a wide
range of functionalities, such as routing, switching, firewall, network address translation,
load balancing, etc. This can be done at different layers of the protocol stack. Another key
benefit of SDN is its ability to facilitate network virtualisation, e.g. via tools such as FlowVisor
[132] or OpenVirteX [20], which is essential in many deployment scenarios, in particular in
data centre applications. These benefits of SDN have resulted in a great amount of recent
industry traction, with many established and new vendors offering an increasing number of
SDN enabled switches and other devices, as well as a range of SDN controller platforms.
Figure 2.1 shows the basic architecture of SDN [109]. At the bottom is the infrastructure layer
or data plane, also called forwarding layer, consisting of a set of interconnected forwarding
elements, i.e. SDN switches. In contrast to traditional routers, SDN switches have very
limited functionality: packet forwarding and statistics gathering. The forwarding rules, which
determine how packets are forwarded, are not calculated by the switches themselves, as is
the case for traditional IP routers, but are determined by the centralised controller.
Figure 2.1: Software-Defined Network Architecture [109] (application layer: business
applications; control layer: SDN controller and network services; infrastructure layer: SDN
switches; "northbound interface" between application and control layers; "southbound
interface", e.g. OpenFlow, between control and infrastructure layers)
The next layer up is the control layer or Network Operating System (NOS), which is
implemented by a logically centralised controller. One of the key services that the control layer
needs to provide is a topology discovery mechanism, to construct the network graph of the
available nodes and links in the network.
The top layer is the application layer, consisting of software programs which implement net-
work services such as routing, load balancing, firewalls, etc. [82, 79, 109]. The control
layer deals with the complexity of having a distributed system of forwarding elements. It pro-
vides a simple network abstraction as a graph with nodes (forwarding elements or switches)
and edges (network links), thus implementing a new routing mechanism or security policy
becomes a lot easier.
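As an illustration of this abstraction, a routing application can be written purely against such a network graph, with no knowledge of the distributed switches underneath. The sketch below (plain Python; the topology and node names are hypothetical) computes a shortest path with Dijkstra's algorithm:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over the network-graph abstraction:
    graph maps node -> {neighbour: link_cost}. Assumes dst is reachable."""
    dist, prev = {src: 0}, {}
    queue = [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(queue, (nd, nbr))
    # Walk back through the predecessor map to reconstruct the path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical topology of four switches with annotated link costs
topology = {
    "s1": {"s2": 1, "s3": 1},
    "s2": {"s1": 1, "s4": 1},
    "s3": {"s1": 1, "s4": 3},
    "s4": {"s2": 1, "s3": 3},
}
print(shortest_path(topology, "s1", "s4"))  # ['s1', 's2', 's4']
```

The application never sees switch connections or OpenFlow messages; translating the computed path into per-switch forwarding rules is the job of the control layer.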
The interface between the control layer and infrastructure layer is called the southbound
interface. This interface allows the SDN controller to configure the individual forwarding
elements by installing forwarding rules. The most widely adopted standard southbound
interface is OpenFlow [96]. OpenFlow is a protocol that defines the communication between
the SDN controller and the forwarding elements, typically called OpenFlow switches, SDN
switches, or simply switches in this context. OpenFlow is discussed in more detail below.
The interface between the control and the application layer is called the northbound inter-
face. However, in contrast to the southbound interface, no standard has evolved for this yet.
Different controller platforms or network operating systems provide their own proprietary
version of this interface or API [82, 79, 24, 109].
2.1.1 OpenFlow
The OpenFlow protocol is a standard managed by the Open Networking Foundation (ONF),
which is "a user-driven organization dedicated to the promotion and adoption of Software-
Defined Networking (SDN) through open standards development" [9]. OpenFlow has
different versions, ranging from OpenFlow 1.0 to OpenFlow 1.5 [7, 8]. However, OpenFlow 1.0
is currently the most popular version in the market and is implemented in most OpenFlow
controllers and switches.
OpenFlow [7] provides the SDN southbound interface, i.e. the interface between control
layer and the infrastructure layer in Figure 2.1. In practical terms, OpenFlow provides a
communications interface between the SDN controller and SDN switches, which allows the
controller to configure and manage the switches [82, 79, 63]. While there are other protocols
such as SNMP, BGP, PCEP, etc., proposed and employed as the southbound interface [11],
OpenFlow is currently the dominant standard.
An OpenFlow enabled SDN switch is assumed to be configured with the IP address and
TCP port number of its controller. On startup, a switch will contact its controller on the
corresponding IP address and TCP port, and establish a Transport Layer Security (TLS)
session to secure the connection.
Using an OpenFlow OFPT_FEATURES_REQUEST message, as part of the initial protocol
handshake, the controller requests configuration information from the switch, including its
active switch ports (network interfaces) and corresponding MAC addresses. We will make
use of this in our proposed topology discovery method in Section 4.3. The initial switch-
controller handshake informs the controller about the existence of the nodes (switches) in
the network.
Figure 2.2: Flow Table Entry in OpenFlow 1.0 [109]
As mentioned above, OpenFlow is a protocol that allows SDN controllers to talk to network
devices (switches), and instruct them on how to forward packets, i.e. by installing, deleting,
and modifying forwarding rules or flow rules in a switch’s flow table. These flow rules follow a
simple match-action pattern. The match part of a flow rule is a filter that selects packets with
specific values of specific header fields. If a packet matches a match rule, the corresponding
action part of a flow rule determines what to do with the packet. In addition, an OpenFlow
switch also collects statistics for each installed flow rule, e.g. the number of forwarded
packets, dropped packets, etc.
Figure 2.2 shows the three parts of a flow rule: the match fields, the actions, and the
statistics [82, 79, 24, 109]:
• Matching fields provide a filter to select incoming packets with certain packet header
values. The match fields supported in OpenFlow include the switch ingress port, vari-
ous packet header fields such as MAC source and destination address, IP source and
destination address, UDP/TCP source and destination port numbers etc. Matching
fields can contain wildcards. A wide range of header fields are supported in Open-
Flow, as shown in the figure.
• The Actions field of the flow rules determines what to do with a packet that matches
the specified matching fields. Options include forwarding the packet on a specific port,
dropping it, or encapsulating and forwarding it to the SDN controller. Switch ports
defined in the forwarding rules comprise physical ports, but also include the following
virtual ports: ALL (sends the packet out on all physical ports except the ingress port),
CONTROLLER (sends the packet to the SDN controller via an OpenFlow Packet-In
message), and FLOOD (similar to ALL, but subject to the spanning tree configuration).
In addition, OpenFlow also supports a number of actions
which allow the rewriting of packet headers by the switch, including the updating of the
TTL fields, adding or removing VLAN and MPLS tags, and the rewriting of MAC source
and destination addresses, etc. In this context, it is important to note that OpenFlow
does not support access to and rewriting of any packet payload.
• The Statistics field contains a set of counters that keep track of how many packets and
bytes have been handled by this rule.
Whenever a packet arrives at an OpenFlow switch, the action of the highest-priority matching
flow rule is executed, and the corresponding statistics are updated.
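The match-action-statistics structure described above can be modelled in a few lines of Python. This is a simplified sketch of the semantics, not an actual OpenFlow switch; the field names and the wildcard convention (None) are illustrative:

```python
class FlowRule:
    """Simplified flow rule: match fields (None = wildcard), a single
    action, and per-rule packet/byte counters (the statistics)."""
    def __init__(self, match, action):
        self.match, self.action = match, action
        self.packets = self.bytes = 0

    def matches(self, pkt):
        # A packet matches if every non-wildcard field agrees
        return all(pkt.get(field) == value
                   for field, value in self.match.items()
                   if value is not None)

def handle_packet(flow_table, pkt):
    """Execute the action of the first matching rule and update its
    counters; a table miss sends the packet to the controller."""
    for rule in flow_table:
        if rule.matches(pkt):
            rule.packets += 1
            rule.bytes += pkt["size"]
            return rule.action
    return "CONTROLLER"  # i.e. encapsulate in a Packet-In message

table = [
    FlowRule({"ip_dst": "10.0.0.2", "tcp_dst": 80}, "output:2"),
    FlowRule({"ip_dst": "10.0.0.2", "tcp_dst": None}, "drop"),
]
print(handle_packet(table, {"ip_dst": "10.0.0.2", "tcp_dst": 80, "size": 1500}))  # output:2
```

In this toy table, web traffic to 10.0.0.2 is forwarded on port 2, any other traffic to that host matches the wildcarded rule and is dropped, and everything else is a table miss that goes to the controller.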
The OpenFlow protocol allows the controller to send a packet to a switch, together with
instructions on what to do with the packet. This is done via encapsulating the data packet
in an OpenFlow (OFPT_PACKET_OUT) message. For example, the controller can send a
packet to a switch and instruct it to send it out on a particular port. Alternatively, the controller
can instruct the switch to send the packet via the OFPP_TABLE virtual output port, which
means the packet is treated as if it was received via any of the switch’s "normal" ports, and
is handled according to the normal forwarding rules or flow tables installed on the switch.
Complementary to this, the OpenFlow (OFPT_PACKET_IN) message allows an OpenFlow
switch to encapsulate and send a data packet to the controller. This can be achieved by
specifying the CONTROLLER port as the output port in the action part of the flow rule.
Sending a packet to the controller is also the default behaviour if a switch gets a packet for
which there is no matching rule. This allows the controller to take the required action, e.g.
install a new flow rule at the switch [82, 79, 24, 109].
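This Packet-In/Packet-Out cycle is the basis of reactive flow setup, e.g. in a learning switch. The following sketch (hypothetical controller logic in plain Python, not the actual Ryu/POX API) illustrates it:

```python
class Switch:
    """Toy data plane: just records the rules the controller installs."""
    def __init__(self):
        self.rules = []
    def install_rule(self, match, action):
        self.rules.append((match, action))

class Controller:
    """Toy reactive controller: learns which port each MAC address was
    seen on, and installs a forwarding rule upon a Packet-In."""
    def __init__(self):
        self.mac_to_port = {}

    def packet_in(self, switch, in_port, src_mac, dst_mac):
        self.mac_to_port[src_mac] = in_port  # learn the source location
        out_port = self.mac_to_port.get(dst_mac)
        if out_port is None:
            return ("packet_out", "FLOOD")  # destination still unknown
        # Install a rule so future packets bypass the controller, then
        # forward this packet out of the learned port.
        switch.install_rule(match={"dst_mac": dst_mac},
                            action="output:%d" % out_port)
        return ("packet_out", "output:%d" % out_port)

ctrl, sw = Controller(), Switch()
print(ctrl.packet_in(sw, 1, "aa:aa", "bb:bb"))  # flooded: destination unknown
print(ctrl.packet_in(sw, 2, "bb:bb", "aa:aa"))  # rule installed, forwarded on port 1
```

Only the first packet of a flow incurs the controller round trip; once the rule is installed, subsequent packets are handled entirely in the data plane.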
Since OpenFlow is the predominant SDN standard, in both industry as well as academic
research, it forms the basis for our research. The overall aim is to apply and adapt OpenFlow
based Software Defined Networking to WMNs.
Figure 2.3: A typical Wireless Mesh Network Architecture [31]
2.2 Wireless Mesh Networks
Wireless Mesh Networks (WMNs) are self-organised wireless ad-hoc multi-hop networks.
They have great potential for a wide range of application scenarios, such as public safety,
transportation, mining, enterprise networks and emergency response. Without any need
for a wired backbone network, WMNs can be deployed quickly and at a relatively low cost.
Figure 2.3 shows a typical WMN architecture.
There are three types of nodes participating in a WMN: mesh clients, mesh routers and
gateways [19, 130]. End-user wireless devices, e.g. laptops, PCs and mobile phones, are
considered mesh clients. They have a limited battery capacity and can be mobile. Mesh
routers form the backbone infrastructure of the network and forward traffic between mesh
clients and the gateways. In our research, we consider infrastructure WMNs, where mesh
routers are static, such as shown in the figure. Gateways are special routers, which connect
the WMN to a wired network, typically the Internet.
WMNs can be implemented using various types of radio technology, typically IEEE 802.11
(WiFi), IEEE 802.15.4 (ZigBee) or IEEE 802.16 (WiMAX) [19, 130].
The key role of mesh routers in a WMN is routing and forwarding of data packets. How this
is done determines to a large extent the overall performance and reliability of the network.
A large number of routing protocols have been developed for wireless ad-hoc networks
and WMNs. The following section provides a brief overview.
2.3 Routing in WMNs
Generally speaking, routing is the problem of establishing paths in a network, based on the
network topology and a particular routing metric. In traditional networks, including WMNs,
routing is implemented via distributed routing protocols. The key challenge of routing in
WMNs, compared to wired networks, is the more dynamic nature of the network, largely due
to the variable nature of wireless links.
In this section, we first discuss three different categories of WMN routing protocols, based
on how routes are discovered, i.e. via a proactive, reactive, or hybrid approach. We then
discuss key examples of the many existing WMN routing protocols.
2.3.1 Proactive Routing
In a proactive (or table-driven) routing protocol, paths to all nodes in the network are es-
tablished regardless of the need for a node to send any traffic. Each node builds its own
routing table based on the routing information provided by other nodes in the network. These
routing tables are updated on a regular basis, or if any changes in the network topology are
detected. Since every node has an up-to-date routing table, the path information is instantly
available when a node decides to send a packet to a certain destination. Thus, proactive
protocols typically perform better in static networks of limited size. Route maintenance is a
continuous process which puts a significant overhead on the network, especially for larger
networks, even if there is no data traffic in the network [16, 22, 122].
Key examples of proactive WMN routing protocols include: Destination-Sequenced Distance
Vector (DSDV) [118], Wireless Routing Protocol (WRP) [99], Hierarchical State Routing
(HSR) [68], OLSR [36], and BATMAN [104].
2.3.2 Reactive Routing
Reactive (or on-demand) routing protocols have been developed to decrease the routing
overhead of proactive protocols, caused by the continuous maintenance of the routing
information in each node. In reactive routing, a path is established when a node wants to
send a packet to a destination address for which there is currently no routing table entry.
In this case, a route request packet is typically flooded in the network, until it
reaches the intended destination, which will then send a route reply back to the originator
of the route request. The information gathered and stored at intermediate nodes during this
process allows the establishment of a bidirectional end-to-end path. As a result of this ap-
proach, reactive WMN routing protocols experience a route discovery delay, which might be
a problem for delay sensitive applications [16, 22, 122]. Ad Hoc On-Demand Distance Vector
(AODV) [117], Dynamic Source Routing (DSR) [76], and Temporally Ordered Routing Algorithm
(TORA) [115] are key examples of reactive WMN routing protocols.
2.3.3 Hybrid Routing
Hybrid WMN routing protocols aim to combine the benefits of proactive and reactive proto-
cols, which can be achieved in a wide range of ways. This typically involves maintaining
the routing information of neighbouring nodes proactively, while discovering routes to dis-
tant nodes using a reactive approach. Hybrid routing protocols can decrease the routing
overhead caused by proactive routing protocols, and can reduce route establishment delay
compared to reactive protocols [16, 22, 122]. However, this comes at the cost of increased
complexity. Zone Routing Protocol (ZRP), Zone-based Hierarchical Link State (ZHLS)[75]
routing protocol and Distributed Dynamic Routing algorithm (DDR) [105] are examples of
hybrid WMN routing protocols.
2.3.4 Routing Metrics
In this section, we briefly discuss example routing metrics in WMNs, which allow the
determination of the optimal path for data transmission. The key metrics are [55, 44]:
• Hop Count (HC)
Hop count is a common WMN routing metric. It aims to establish paths between
source and destination nodes with the minimum number of hops, without consideration
of other factors, such as link load, link capacity or link quality. Therefore, a path
selected based on this metric can have low quality and a high degree of unreliability.
However, due to its simplicity, agility and stability, this metric is widely used [43].
• Expected Transmission Count (ETX)
Expected Transmission Count [39] computes the expected number of transmissions and
retransmissions needed for a unicast packet to successfully traverse a link. For a single
link, this metric is defined as:

ETX = 1 / (d_f ∗ d_r)    (2.3.1)

where d_f is the probability of a packet successfully being delivered in the forward
direction, and d_r is the probability of a successful corresponding acknowledgement of
the packet in the reverse direction. The probability that a given packet is successfully
transmitted across a link, including its acknowledgement, is thus the product of d_f
and d_r. The route with the lowest sum of the link ETX values is chosen. Therefore,
if a link has a high packet error rate, it has a high ETX value and is less likely to be
part of the chosen end-to-end path. As a disadvantage, ETX tends to choose links with
low rates, which can result in poor fairness. Moreover, ETX does not consider link
capacity and link load, and hence it cannot perform optimal load balancing. To deal
with these limitations, the ETT metric was proposed, as discussed below.
• Expected Transmission Time (ETT)
ETT [44] aims to model the time needed to successfully transmit a given data packet
over a link, and is defined as:

ETT = ETX ∗ (S / B)    (2.3.2)

where S is the size of the packet in bits, and B is the bandwidth of the link in bits/s.
ETT has the same properties as ETX; however, by taking the capacity of the link into
account, it can improve the throughput of the path and consequently the network
performance. ETT and ETX are not designed for multi-radio networks, as they are
not aware of co-channel interference.
• Weighted Cumulative ETT (WCETT)
WCETT [44] has been proposed to improve on ETT and ETX by considering channel
diversity. Using multiple channels brings up two issues that need to be dealt with:
intra-flow and inter-flow interference. Intra-flow interference is the collision of packets
from the same flow when different nodes are transmitting them; inter-flow interference
is the interference created between concurrent flows. WCETT considers intra-flow
interference, but cannot deal with inter-flow interference. It is an end-to-end metric,
computed as a weighted sum of the end-to-end delay and a channel diversity component.
WCETT can improve the network throughput; however, since it does not consider
inter-flow interference, it cannot avoid highly congested paths.
A more complete discussion of WMN routing metrics is provided in [55, 44, 107, 33].
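To make these metric definitions concrete, the following sketch computes ETX (Equation 2.3.1), ETT (Equation 2.3.2) and WCETT for a hypothetical two-hop path. All link measurements are invented for illustration; the WCETT combination follows the weighted form proposed in [44], with a hypothetical weight beta.

```python
# Illustrative computation of the ETX, ETT and WCETT routing metrics.
# All link measurements below are hypothetical examples.

def etx(d_f: float, d_r: float) -> float:
    """Expected Transmission Count (Eq. 2.3.1): 1 / (d_f * d_r)."""
    return 1.0 / (d_f * d_r)

def ett(d_f: float, d_r: float, s_bits: int, b_bps: float) -> float:
    """Expected Transmission Time (Eq. 2.3.2): ETX * (S / B), in seconds."""
    return etx(d_f, d_r) * (s_bits / b_bps)

def wcett(link_etts, channels, beta=0.5):
    """Weighted Cumulative ETT as in [44]:
    (1 - beta) * sum(ETT_i) + beta * max_j X_j,
    where X_j is the summed ETT of all hops operating on channel j."""
    per_channel = {}
    for e, ch in zip(link_etts, channels):
        per_channel[ch] = per_channel.get(ch, 0.0) + e
    return (1 - beta) * sum(link_etts) + beta * max(per_channel.values())

# Hypothetical two-hop path: (d_f, d_r, link bandwidth in bit/s, channel).
path = [(0.90, 0.80, 6e6, 1), (0.95, 0.90, 11e6, 2)]
S = 8192  # packet size in bits (a 1024-byte packet)

path_etx = sum(etx(df, dr) for df, dr, _, _ in path)
link_etts = [ett(df, dr, S, b) for df, dr, b, _ in path]
path_wcett = wcett(link_etts, [ch for *_, ch in path])
```

With these example numbers, the lossier first link dominates both path metrics, which illustrates why ETX-based routing penalises links with high packet error rates.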
2.3.5 Key WMN Routing Protocols
In this section we give a brief overview of the most relevant WMN routing protocols:
BATMAN, OLSR, AODV, and DSR. We have used OLSR and BATMAN in the context of our
research, as discussed in later chapters.
• Optimized Link State Routing (OLSR)
OLSR [36] is a link state routing protocol where every node maintains topology infor-
mation about the entire network. OLSR uses two types of control packets; HELLO
and Topology Control (TC) messages. HELLO messages are used by a node to find
its one-hop and two-hop neighbours. Based on this neighbourhood information, each node
chooses, from among its one-hop neighbours, a set of Multi Point Relays (MPRs) that
together cover all of its two-hop neighbours. MPRs aim to reduce the overhead of
flooding link state information through the whole network. Nodes regularly send TC
packets with information about their neighbourhood and the state of the links between
them, and MPR nodes forward these TC packets throughout the network. Using the
information in the TC packets, the other nodes build a map of the network topology. In
OLSR, nodes use a shortest path algorithm to calculate the path towards a destination.
OLSR supports both the hop count and the ETX routing metric.
• Better Approach To Mobile Ad hoc Networking (BATMAN)
BATMAN is a relatively new proactive routing protocol where each node only maintains
information about the best next hop node for each destination, instead of the entire
network topology as in the case of OLSR. Consequently, BATMAN reduces the amount of
control traffic in the network, which allows for lower CPU usage and lower battery
consumption on the mesh nodes. The BATMAN routing protocol operates as follows:
– In regular intervals, nodes send an OriGinator Message (OGM) including the orig-
inator IP address, forwarding node IP address, Time To Live (TTL), and Sequence
Number (SQ), to inform other nodes about the existence of this node.
– Neighbours rebroadcast OGM messages, allowing nodes further away to learn of the
existence of the OGM originator; in this way, OGM messages are flooded across the
network.
– BATMAN maintains a table of the number of OGM messages received from each
originator and via which one-hop neighbour the messages were received. The
one-hop neighbour via which the largest number of OGM messages from a partic-
ular originator was received is considered as the best next hop for this destination.
BATMAN is simple and therefore quite robust. It has been shown to provide good
performance compared with other WMN protocols such as OLSR [15].
• Ad hoc On-Demand Distance Vector (AODV)
AODV [117] is a reactive routing protocol that discovers routes via broadcasting Route
Request (RREQ) messages in the network. Nodes forward the RREQ in the network,
and store the source IP address and the neighbour via which it was received. This
essentially provides a reverse route to the originator of the RREQ message. When the
RREQ message reaches its chosen destination, the node replies with a Route Reply
(RREP) message, which is now unicast back to the source, via the reverse route that
was established during the forwarding of the corresponding RREQ message. Nodes
that forward the RREP also create a routing table entry, pointing to the sender of the
RREP message. This creates the route in the forward direction. When the RREP message
reaches its destination (the originator of the corresponding RREQ message), a
bidirectional end-to-end path is established between the source and destination node.
AODV minimises the control traffic overhead of the network due to its ability to
discover routes on demand. However, it shares the increased path discovery latency of
all reactive protocols.
• Dynamic Source Routing (DSR)
DSR is also a reactive protocol with a similar route discovery process to AODV. A
key difference of DSR is that it uses source routing, i.e. the full path is included in
the packet header of the route discovery message. This protocol also consists of two
phases: route discovery and route maintenance. In DSR, a source node initiates a RREQ
message with the destination address. Each node rebroadcasts the RREQ message if it has
not already received it and its own address is not yet in the list of nodes the packet
has traversed. If a node has a route to the requested destination in its cache, it
sends a RREP message back to the source. The RREP can be forwarded via the reverse
route, or via an alternative route in the case that the destination has another route
to the source. If a link is detected to be broken, a Route Error (RERR) message is sent
to delete all routes containing the broken link. DSR reduces the control traffic
overhead of maintaining routing tables at each node; however, the delay of setting up a
route is higher than for proactive protocols [76].
None of the existing WMN routing protocols have a global view of the entire network or
provide high-level abstractions that allow the network to be programmed in a flexible
manner, as enabled by SDN. Another limitation of current WMN routing protocols is their
relatively coarse-grained approach to routing: all packets with the same destination IP
address are sent to the same next hop by a node. This lack of fine-grained, flow-based
routing, as supported in SDN, makes it impossible to implement optimal load balancing
in WMNs. In this thesis, we demonstrate how applying the SDN paradigm to WMN routing
can overcome these limitations and provide a more flexible and programmable approach to
routing that can achieve better overall performance.
Chapter 3
Literature Review
There have been a number of research projects that have broadly explored the concept
of SDN for wireless networks. These works focus on a wide range of topics, ranging from
designing a new software defined data plane [25], an SDN framework for enterprise Wireless
Local Area Networks (WLANs) [136], a control plane for radio access elements in cellular
networks [53], to wireless personal networking environments based on SDN [38].
In this chapter, we first provide a general survey of software defined wireless
networks, and then focus on works that are more closely related to our research. After
that, we give a brief summary of the proposals so far that have considered applying SDN
to WMNs.
3.1 Software Defined Wireless Networks
We group the research in software defined wireless networks into different categories, and
we discuss each of them separately. First, we look at research into Software Defined Cel-
lular Networks, where the focus is on efficient resource allocation to enable a high degree
of scalability. Then, we consider the use of SDN in Wireless Sensor Networks. Here, a key
focus is on managing the limited resources of resource constrained sensor nodes. Next, we
consider Software Defined Wireless Home Networks, where the challenges include
minimising interference between neighbouring access points and providing sufficient
bandwidth for demanding applications such as video streaming [58]. Finally, we present
an overview of
the recent research that has considered applying the SDN paradigm to Wireless Mesh Net-
works.
3.1.1 Software Defined Cellular Networks
In Wireless Cellular Networks (WCNs), the gateways, which include Serving Gateway (S-
GW) and Packet data network Gateway (P-GW), perform the functionality of the control and
data plane. The Control Plane provides functions such as establishing a connection, routing,
mobility management, and assigning radio resources, while the data plane mostly forwards
the traffic. However, the tight integration between control and data plane in these networks
has led to challenges in resource management and to scalability problems [89].
SDN, with its idea of decoupling the control plane from the data plane, introduces new poten-
tial solutions for addressing these challenges. A logically centralised controller can provide
global resource allocation and interference management, while the data plane manages the
traffic forwarding component. This has the potential of improving the network scalability and
performance [89].
SoftRAN [53] proposed a programmable Radio Access Network (RAN). RAN is at the edge
of the cellular network and provides wide area access to mobile devices. In this
approach, a set of base stations in a local area was combined into one big virtual base
station with the help of a software defined centralised controller. The controller
performed tasks such as handover management, transmission power allocation and uplink
frequency allocation, which require coordination with neighbouring base stations. Other
tasks, such as downlink frequency allocation, which do not need any coordination, could
be done by the base stations themselves. SoftRAN could improve handover, interference
management and radio resource management.
In contrast to SoftRAN, CellSDN [89] introduced a high-level design of a centralised
controller at the core level of cellular networks. SoftCell [73] developed this method
further to support fine-grained policies in the cellular core network. However, to
achieve network scalability,
SoftCell extended both the control and data plane. In the control plane, each switch was
equipped with a local controller to classify packets in order to reduce the load on the main
controller. In the data plane, SoftCell used multi-dimensional aggregation techniques [74] to
implement fine-grained service policies, which aimed to improve the scalability and flexibility
of the cellular core network.
OpenRadio [25] introduced a programmable wireless data plane to build a cellular core net-
work. OpenRadio offered a software abstraction layer which makes it possible to implement
the PHY and MAC layer of different wireless protocol stacks such as WiFi, LTE, and WiMax.
It did this by enabling the control plane operation to be upgraded, often in software,
without replacing hardware devices, as is required in current wireless networks.
OpenRadio refactored a wireless protocol into processing plane and decision plane
components by providing a modular and declarative interface: the processing plane
performed the actions, while the decision plane determined the rules. The authors
claimed that this approach could
help WCNs in managing inter-cell interference as well as QoS support.
OpenRoads [144] was an SDN open-source platform providing a linkage between different
technologies such as LTE, WiFi, or WiMAX to let the user move freely between any wireless
infrastructure by decoupling mobility from the physical network. Moreover, OpenRoads
introduced the possibility of increasing the capacity and coverage of the network by
allowing
multiple providers to concurrently take control of the underlying infrastructure. OpenRoads
consists of three architectural layers: controller, slicing and flow. The controller is OpenFlow-
based and uses NOX as a network OS to control network devices and perform routing,
mobility management and billing. Slicing is achieved via FlowVisor [132], an SDN-based
network hypervisor, to allow multiple services to exist concurrently in the same network. Fi-
nally, SNMPVisor is used to slice the configuration of the datapath, by manipulating the level
of transmission power, channel allocation and interference control.
SoftAir [18], SDWN [29] and OpenRAN [143] provided virtualisation to support heteroge-
neous technologies. SDWN also aimed to improve user QoS and QoE via dynamic traffic
configuration and programming of the Radio Access Network.
While our survey of SDN-based cellular networks does not claim to be complete, it covers the
key proposals and works in this space. Table 3.1 summarises these solutions, and highlights
the key contributions, if they are based on OpenFlow (OF), and if they are aimed at the core
network or the Radio Access Network (RAN), or both.
Table 3.1: Software Defined Cellular Networks Solutions

Project           Contribution                                             OF-based   SDN changes
SoftRAN [53]      Resource allocation through task distribution            No         RAN
CellSDN [89]      Abstract the control functions to meet the scalability   Yes        RAN and Core
SoftCell [73]     Simplifying P-GW and scalability                         No         Core
OpenRadio [25]    Support protocol evolving                                No         Core
OpenRoads [144]   Heterogeneous Wireless Network and protocol evolution    Yes        RAN and Core
SoftAir [18]      SDN-based 5G architecture                                Yes        RAN and Core
SDWN [29]         Improve user QoS and QoE                                 Yes        RAN
OpenRAN [143]     Heterogeneous Wireless Network                           No         RAN
3.1.2 Software Defined Wireless Sensor Networks
Wireless Sensor Networks (WSNs) consist of a set of wirelessly interconnected, resource
constrained sensor nodes. Given that these sensor nodes are typically battery powered,
energy efficiency is crucial in WSNs, which presents a challenge for topology control and
routing.
Sensor networks are application oriented networks, with the ability to support multiple appli-
cations. The hardware and software can be specified to support certain applications, and
sensor nodes can support multi-modalities such as ultrasonic, photoelectric, or temperature.
It is essential to provide an efficient abstraction to support these features and to
facilitate programming. With respect to the architecture of WSNs and SDN, the central
base station/sink in a WSN can serve as the SDN controller, while the sensor nodes
represent the data plane elements. The idea is that the controller, with its global
view of the network, can improve the energy efficiency of the sensors and the overall
resource allocation. Furthermore, the ability to program the data plane and control
plane makes it easier to support a multi-modality and multi-application environment. In
the following, we provide a brief summary of the key
works on SDN-based WSNs.
SensorOpenFlow [91] was the first concrete proposal for an integration of SDN and WSNs;
however, no experimental validation of the proposed system was provided. The authors
added
additional forwarding rules to OpenFlow to provide WSN specific functionality at the network
elements (sensor nodes), such as data aggregation. The SensorOpenFlow architecture
was the same as that of other wireless SDN architectures, with the control plane decoupled from the
data plane. A key challenge here was that both the control and data traffic share the same
network (in-band control channel), which is in contrast to the typical SDN scenario.
SDN-WISE [50] introduced an extension of SensorOpenFlow. SDN-WISE offered a bet-
ter OpenFlow solution that supports stateful SDN. It proposed a finite state machine based
abstraction for data processing and transmission. It also implemented some level of local
intelligence inside sensor nodes to reduce the amount of control traffic to the global con-
troller. The functionality of SDN in SDN-WISE was divided into three layers. First was the
Forwarding layer (FWD), which forwarded packets to the best next hop according to the
rules installed in the flow table. Next was the In-Network Packet Processing layer (INPP),
which mainly performed data aggregation. Finally, the Topology Discovery layer (TD) learnt
the local topology and fed the information to the controller, so the centralised controller had
a global view of the network. However, the practicality of such an implementation can be
considered questionable, as sensor nodes typically have limited capabilities for processing
and storing data.
SDWN [38] proposed to apply SDN in WSNs in order to manage the resource allocation
such as duty-cycling and data aggregation. In the proposed architecture four layers were
added to the regular sensor node: a forwarding layer consisting of flow tables, an aggre-
gation layer, an NOS layer to provide communication with the controller, and an application
layer. The sink nodes in SDWN consisted of the following four layers. An adaptation layer
performed formatting of messages to be readable for different devices. A virtualisation layer
was responsible for gathering the network topology and providing the concurrent policies
on the same device. A controller layer created the forwarding rules based on the updated
topology. Finally, an application layer implemented the application functionality and policies.
In this architecture, the controller periodically generated a beacon packet and
forwarded it to the sensor nodes. Each sensor node maintained a table storing the
neighbours via which it had received the beacons. After gathering this information, the
sensors sent it to the controller in a report packet, allowing the controller to build
the network topology. The overhead of the message exchange between the controller and
the sensors was not evaluated, and energy efficiency was not considered. Finally, there
was also no experimental validation of the proposed architecture.
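The beacon/report topology gathering described for SDWN can be sketched roughly as follows. Since the paper leaves the message formats unspecified, the flooding model and all names below are our own illustrative assumptions, not SDWN's actual design.

```python
# Rough sketch of SDWN-style topology gathering: the controller floods a
# beacon through the sensor network, each sensor records the neighbours
# from which it heard the beacon, and the sensors report these neighbour
# sets back so the controller can assemble the topology. The radio
# connectivity below is a hypothetical example.

links = {  # which nodes can hear each other (symmetric, hypothetical)
    "controller": {"s1", "s2"},
    "s1": {"controller", "s2", "s3"},
    "s2": {"controller", "s1"},
    "s3": {"s1"},
}

def gather_topology():
    """Flood a beacon from the controller, then collect per-node reports."""
    heard_from = {n: set() for n in links}
    frontier, seen = ["controller"], {"controller"}
    while frontier:  # breadth-first flood of the beacon
        nxt = []
        for node in frontier:
            for nb in links[node]:
                heard_from[nb].add(node)  # nb heard the beacon via `node`
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    # Each sensor reports its neighbour set back to the controller.
    return {n: heard_from[n] for n in heard_from if n != "controller"}
```

Even this toy version makes the undiscussed overhead visible: every beacon rebroadcast and every report message consumes energy at battery-powered sensors.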
Smart [51] was a proposal with a similar idea to SDWN. Here, the controller was made up
of the following layers: PHY, MAC, NOS, middleware and application. The controller had
the responsibility of choosing the best routes, creating and installing the forwarding rules in
the flow table of the sensor nodes, performing mobility management and localisation. The
sensor nodes forwarded or dropped packets according to the installed rules in their flow
table. Due to its similarity with SDWN, it also shares the same limitations.
Multi-Tasking SDWSN [146] aimed to implement energy-efficient routing in WSNs. It allowed
multiple applications to co-exist on the same network via control nodes. Three specific
problems addressed in this work were sensor activation, task mapping, and sensing
scheduling.
The simulation results showed an improvement in energy efficiency with lower rescheduling
time and control overhead.
Table 3.2 summarises the above mentioned proposals to apply SDN to WSNs.
Table 3.2: Software Defined Sensor Networks Solutions

Project                     Contribution                                                       OF-based
SensorOpenFlow [91]         Support protocol evolving, multi-applications, data aggregation   Yes
SDN-WISE [50]               Stateful OpenFlow, data aggregation                               Yes
SDWN [38]                   Resource allocation, data aggregation                             No
Smart [51]                  Resource allocation and management                                No
Multi-Tasking SDWSN [146]   Multi-applications                                                No
3.1.3 Software Defined Wireless Local Area Networks
The most prominent research on applying SDN to Wireless Local Area Networks (WLANs)
is Odin [136]. Odin introduced Light Virtual Access Points (LVAPs) to simplify the implemen-
tation of high-level enterprise WLAN services such as Authentication, Authorisation, and
Accounting. LVAPs provide a continuous connection between Access Points (APs) and the
users, and each user is provided with a unique BSSID (Basic Service Set Identifier).
Thus,
each client only sees its own AP, regardless of the location of the physical APs. Therefore,
LVAPs could be considered as an alternative to a network hypervisor, such as FlowVisor, in
SDNs. Implementing mobility management is simplified by this method, as the user does
not need to change the BSSID during a handover. Other advantages are load balancing and
mitigation of the hidden terminal problem, enabled by the controller's centralised view
of the network.
There are several extensions of Odin, which added extra improvements to the architecture.
Aerflux [129] divided the controller into a local controller, which was responsible for
handling frequent events, and a global controller, which dealt with events that need
global coordination, such as load balancing. Thor [124] extended Odin in regard to
energy efficient
mobility management, without sacrificing performance. OpenSDWN [128] extended Odin for
use in both home and enterprise networking management.
3.1.4 Software Defined Wireless Mesh Networks
So far, we have considered related works in the wider context of applying SDN concepts to
wireless networks. In this section, we specifically focus on the small number of works that
look at applying SDN to Wireless Mesh Networks.
Since the backbone of a WMN is a wireless multi-hop network with limited capacity, it is
critical to perform optimal resource allocation via routing and load balancing, to maximise the
overall performance of the network. SDN, with its centralised and global view of the network,
has the potential to provide increased network performance. Another critical benefit of SDN
is the higher degree of abstraction, and following from that the increased and simplified
programmability of the network. These two aspects provide the key motivations behind the
application of SDN concepts in WMNs.
In the following, we discuss the key works in this context.
• Dely et al. [40] aimed to integrate Software Defined Networking with Wireless Mesh
Networks in order to achieve rapid deployment of new packet forwarding algorithms.
They proposed an architecture that combines OpenFlow-based SDN with WMNs, by running a
complete implementation of a traditional WMN routing protocol (OLSR) alongside an
OpenFlow software switch on each node. The SDN control channel is
provided as a separate Virtual Local Area Network (VLAN), implemented via a different
SSID.
One VLAN is used to carry the data traffic and the other to carry the control traffic.
Data traffic forwarding is based on flow-based routing using OpenFlow. The control
traffic is routed using a traditional WMN routing protocol, OLSR in this case.
The work specifically focused on the problem of node mobility, and the proposed archi-
tecture was used to demonstrate the implementation of node handover. Experiments
on a wireless testbed showed that SDN can improve the handover performance signif-
icantly.
The key limitations of the work are that the authors did not consider routing in general,
and the focus of the paper was relatively narrow, i.e. considering node mobility only.
Furthermore, the integration of SDN and WMNs is achieved by essentially duplicating
both approaches. This means a full WMN routing protocol (OLSR) is run in combination
with OpenFlow switch functionality at each node. This imposes a high level of control
overhead, e.g. OLSR is known to have a high control traffic overhead in WMNs. In
addition, duplicating OLSR and SDN functionality on each node also causes a high
CPU load on the nodes.
• Nascimento et al. [101] proposed to add additional extensions to the current OpenFlow
protocol in order to make it compatible with WMNs. These extensions involved adding
new rules to the flow table, adding a new message to the OpenFlow protocol, and
changing the packet header to include IEEE 802.11 MAC headers. The authors also
specified minimal hardware requirements in order to support efficient resource
allocation.
The authors also proposed to modify the physical interfaces of mesh routers to work
with four virtual interfaces operating in different modes such as 802.11s, ad-hoc and
AP to allow the connection of different types of devices, such as end-user to router,
or router to router. These interfaces also allow streaming of different types of traffic
such as control and data traffic. Finally, they applied their proposed architecture to an
experimental testbed. However, the experimental validation is quite limited.
This work also shares the limitations of the previous work: each node needs to run a
combination of SDN and WMN protocols, i.e. it uses the Hybrid Wireless Mesh Protocol
(HWMP) to forward packets, which consequently increases the network overhead. Finally,
they did not discuss how the controller gathered the information
about the network topology, which is one of the most critical components of a Software
Defined Network.
• Wireless Mesh Software Defined Networks (wmSDN) [41] used SDN in WMNs in es-
sentially the same way as in the approach by Dely et al. [40], with the same limitations.
The key idea is that a traditional WMN routing protocol (OLSR) is used to route SDN
control traffic, and data traffic is routed using centralised SDN control. In addition to the
proposal by Dely et al., this approach proposed to use OLSR as a backup to routing
data traffic in case communication to the centralised controller is lost. The use case
considered in this paper is the balancing of traffic between multiple gateways.
The focus of the paper is relatively narrow, and does not provide a generic framework
for SDN-based routing in WMN. Furthermore, the duplication of traditional WMN rout-
ing using OLSR with SDN-based network control suffers from the same high overhead
as the approach by Dely et al. [40].
• The paper "Controller selection in a wmSDN under network partitioning and merging
scenario" [127] assumed the basic approach of SDN integration with WMN as pro-
posed in the works discussed above. The specific focus of the paper was on the
scenario with multiple SDN controllers and unreliable connectivity of the control chan-
nel. The paper addressed the problem of controller selection in this case. As in the
above works, the focus is very narrow, and does not address a complete solution for
SDN-based routing and forwarding in WMNs.
An additional drawback of this work is that adding control logic to the OpenFlow
switches for the master-election process increases the overhead and complexity of the
network elements.
• "OpenFlow-based Load Balancing for Wireless Mesh Infrastructure" [142] proposed to
provide control over data flow paths and to balance the traffic load in the network in the
case of path loss. This approach adopted the in-band controller approach, thus both
the control and data traffic share the same network. The BATMAN routing protocol was
used to provide the topology discovery and link quality information and the OpenFlow
controller uses this information to determine the best routes.
Some basic experiments have been done to show how the OpenFlow controller can
control the flow of packets through the network. The limitation of this approach, similar
to the approaches discussed above, is that it is a hybrid implementation of both a full
WMN routing protocol and SDN, and this duplication results in an increased overhead.
Also, the paper does not propose a routing framework, which provides a simple high
level abstraction and programmability to implement different routing problems.
• The OpenCoding protocol [148] proposed the idea of combining the OpenFlow protocol
with intra-flow network coding, i.e. a paradigm to improve network performance in
terms of throughput and energy efficiency [17] for application in WMNs. SDN principles
were leveraged to decouple the data plane from the control plane in mesh routers and to
centralise the control plane in a central entity, leaving the routers/switches as
simple forwarding devices.
The controller, called OpenCoding controller, determined the routing decisions on the
control plane. The OpenCoding controller used the same procedure as SDN controllers
such as POX, to discover the topology of the network, with some extensions in the
OpenFlow messages. In other words, OpenCoding is a specific version of OpenFlow
with the capability of intelligent packet forwarding between the OpenCoding-enabled
switches and controller.
In contrast to other related works, this work has taken advantage of intra-flow cod-
ing to forward packets between nodes. In fact, network coding functionality replaced
traditional hop-by-hop forwarding: intra-flow random linear network coding was used to
forward packets between mesh routers. This method is similar to opportunistic routing,
as there is no distinct next hop. That is, a node can participate in forwarding a packet if
it is closer to the destination than the current transmitter. As a consequence, each
OpenCoding-enabled switch needed to be equipped with packet coding, recoding, and
decoding functionality.
Simulation experiments showed that OpenCoding could improve the network performance in
comparison with other protocols such as OLSR. To avoid the interference
between the control and data channel, an out-of-band controller was used by means
of a multi-radio multi-channel technique.
In this work, the overhead of the proposed method is not fully evaluated. Finally, the
general problem of routing and optimal load balancing is not addressed.
• Labraoui et al. [84] implemented their own SDN for a routing application in WMNs.
In their approach, a node sent a request for a path to a certain destination to the
SDN controller. The SDN controller used Dijkstra’s algorithm to find the shortest path
between the source and the destination. Thereafter, it installed forwarding rules on the
switches along the path. To accomplish this objective, the authors used two interfaces,
one for sending the control messages and the other for sending data. Therefore, they
used an out-of-band controller for their solution. The paper also assumed that all nodes
were connected directly to the controller, so the SDN controller could easily send the
control messages to each node without using any distributed protocol. This was in
contrast with some previous works such as [40] and [41].
This approach was implemented in ns-3 and the results were compared with three
traditional WMN routing protocols, namely OLSR, AODV, and DSDV, using metrics such as
packet delivery ratio, throughput and overhead. The results showed that SDN-based
centralised routing could outperform the distributed protocols in the first two metrics.
However, SDN-based routing experienced more overhead than traditional WMN rout-
ing.
The authors of [84] have also provided an extension of their work in [83], combining
their SDN implementation with OLSR to provide better data throughput and reduction
of packet loss in comparison with other traditional routing protocols.
Their approach has the same limitations as the previous studies. The focus of the study
is relatively narrow: routing is limited to shortest path routing only, which does not
consider link capacity or congestion, and hence cannot achieve optimal routing and
load balancing.
Furthermore, similar to all other related works in this section, the abstraction that SDN
provides is not leveraged to provide a simple, high level interface and framework for
expressing and implementing complex routing policies. This is one of the key differ-
ences and contributions of our work presented in this thesis, in comparison to related
works.
• SD-WMNs [65] proposed an architecture for software defined wireless mesh networks
with traffic management functionality. The control plane was made up of four mod-
ules: global overview manager, routing path computation, traffic scheduling, and lastly
spectrum allocation to configure radio resources.
Software-defined mesh routers were also equipped with the following modules.
– A monitor module as a local control module to send neighbour connectivity information to the global overview manager at the controller.
– Flow tables for installing flow rules.
– OpenFlow meter tables for collecting network statistics and setting quality of ser-
vice configuration.
– Forwarding modules for sending out packets to the network interface.
– A radio frequency tuning module for assigning the radio frequency for transmitting
the data and control traffic.
Software Defined Radio (SDR) [62] was leveraged for the reconfiguration of radio pa-
rameters, e.g. frequency band. Three spectrum allocation and scheduling algorithms
were introduced to maximise the throughput. They are Fixed-Bands Non-Sharing (FB-
NS), Non-Fixed-Bands Non-Sharing (NFB-NS), and Non-Fixed-Bands Sharing (NFB-
S). These algorithms are based on the weighted throughput maximisation problem
where control traffic has a higher value compared to data traffic.
These algorithms differ in how they allocate resources to the control and data traffic in
a way that they do not interfere with each other in the radio network.
FB-NS allocates a fixed band for each link. In this approach, an assigned band cannot
be used for other traffic, which may need additional spectrum. This could lead to
congestion on some links while other links remain unused. In NFB-NS, the spectrum is
not partitioned between the control and data traffic; however, the control traffic has a
higher priority than the data traffic. Finally, in NFB-S all traffic can share the spectrum.
For example, data traffic can use the spectrum left unused after the control traffic has
been sent.
This study mostly focused on sending the control and data traffic using SDR techniques
to prevent interference and provide better resource allocation. Similar to previous
works, it does not address a general framework for SDN-based routing in WMNs.
• "Testbed implementation for routing WLAN traffic in software defined wireless mesh
network" [87] implemented an experimental testbed with one SDN controller and five
Wireless Local Area Network Access Points (WLAN APs). However, the SDN controller
was connected to the WLAN APs via a wired network, with Open vSwitch (OVS)
switches located between them to allow control traffic exchange.
Each AP was equipped with three Raspberry Pi devices that could work in three dif-
ferent modes: AP mode, STA mode and OVS. As previously mentioned, the last mode
was used for control messages, while the first two modes were used to exchange the
data traffic and routing information between one-hop neighbours. The focus of this
paper is quite narrow and the authors do not provide a solution for SDN-based routing
and forwarding in WMNs.
• Patil et al. [116] proposed a three-stage SDN-based routing approach for WMNs. They
assumed that the SDN controller is directly connected to a few switches, and that, with
the help of basic routing, all switches can establish a connection with the controller;
this constitutes the first stage of their approach. Once the initial connection was
established, in the second stage the controller, now having a global view of the network,
optimised the initial path between itself and each switch.
Finally, for the last stage, the controller ran shortest path routing among switches and
set up a shortest path between switches by installing the required forwarding rules. To
achieve these capabilities, the OpenFlow switches and the OpenFlow protocol needed
to be modified to be compatible with this approach.
This work was evaluated via the ns-3 simulator, with network performance measured
in terms of the latency of the established paths.
The focus of this work is mostly on the bootstrapping problem of SDN-based WMNs,
which is a critical component. However, the authors did not provide a generic routing
framework that considers link characteristics and load balancing. We can consider
this work as complementary to the work presented in this thesis.
• Amorkrane et al. [23] proposed Online Flow-based Routing in WMNs with the goal of
minimising the power consumption of mesh routers. To achieve this goal, the problem
was formulated as an Integer Linear Program (ILP), which is known to be NP-hard.
As a solution, the paper proposed an ant-colony-based metaheuristic. The authors
only mentioned that this method can be integrated with SDN; however, they did not
provide any detailed discussion of how this integration can be achieved.
Table 3.3 provides a summary of all the approaches to apply SDN to WMNs discussed in
this section.
Table 3.3: Software Defined Mesh Networks Solutions
Project                  Contribution                               SDN Controller Type    Traditional Routing
Dely et al. [40]         Node handover, Load balancing              Out-of-Band            OLSR
Nascimento et al. [101]  Resource allocation, Routing               Out-of-Band            HWMP
wmSDN [41]               Traffic engineering                        In-Band, Out-of-Band   OLSR
Salsano et al. [127]     Distributed controllers, Fault tolerance   In-Band                OLSR
Yang et al. [142]        Load balancing                             In-Band                BATMAN
OpenCoding [148]         OpenCoding protocol                        Out-of-Band            -
Labraoui et al. [84]     Centralised routing                        Out-of-Band            -
SD-WMNs [65]             Spectrum allocation, Scheduling            In-Band, Out-of-Band   -
Lee et al. [87]          Routing WLAN traffic                       Out-of-Band            -
Amorkrane et al. [23]    Energy optimisation                        envisaged SDN          -
Patil et al. [116]       Bootstrapping                              In-Band                -
One of the key challenges in these works was to deal with the problem of a potentially unreliable
wireless network, which also served as the control channel (in-band control). While some of
the works stated that their goal was to implement an efficient SDN-based routing mechanism
for WMNs, they only focused on very narrow use cases and did not implement a complete
solution. Furthermore, the approach to combining SDN with WMNs used in most works is
based on duplicating a full traditional WMN routing protocol (OLSR) alongside SDN control.
This imposes a high level of overhead, and is neither efficient nor scalable.
None of these works provide a general framework for SDN-based routing in WMNs, which
uses high level abstraction to express relatively complex routing problems via a simple inter-
face, and hide the complexity of finding routes from the user. As mentioned before, this is
the key point of difference and contribution of our work.
Chapter 4
Topology Discovery
4.1 Introduction
One of the key roles of the SDN controller is to provide and maintain a global view of the
network. The controller provides this view as an abstraction to the application layer, hiding
a lot of the complexity of maintaining and configuring a distributed network of individual
network devices.
In this chapter, we focus on topology discovery, which is a critical service provided at the
control layer of the SDN architecture, and which underpins the centralised configuration and
management in SDN. The contribution of the research includes an analysis of the overhead
of the current de facto standard for SDN topology discovery. We further propose an improved
version and implement two variants of the basic idea. Our improved method achieves the
same functionality, while reducing both controller CPU load and control traffic overhead by
up to 40%. We present experimental results which demonstrate this.
As a basis for our following discussions, Section 4.2 discusses the current state-of-the-art
approach to topology discovery in SDN, and Section 4.3 presents our proposed new ap-
proach. Sections 4.4 and 4.5 present evaluation results, and Section 4.7 provides conclud-
ing remarks.
4.2 SDN Topology Discovery - Current Approach
In order for an SDN controller to be able to manage the network and to provide services
such as routing, it needs to have up-to-date information about the network state, in particular
the network topology. Therefore, a reliable and efficient topology discovery mechanism is
essential for any Software Defined Network.
To be precise, when we refer to topology discovery in the following, we are really concerned
with connectivity discovery or link discovery. An SDN controller does not need to discover
the network nodes (switches), since it is assumed that they will initiate a connection to the
controller, and thereby announce their existence.
OpenFlow switches do not support any dedicated functionality for topology discovery, and it
is the sole responsibility of the controller to implement this service. Furthermore, there is no
official standard that defines a topology discovery method in OpenFlow based SDNs. How-
ever, most current controller platforms implement topology discovery in the same fashion,
derived from an implementation in NOX [52], the original SDN controller. This makes that
mechanism the de facto SDN topology discovery standard. The mechanism is referred to as
OFDP (OpenFlow Discovery Protocol) in [1], and for lack of an official term, we will use that
name in this research.
OFDP leverages the Link Layer Discovery Protocol (LLDP) [14]. LLDP allows nodes in an
IEEE 802 Local Area Network to advertise to other nodes their capabilities and neighbours.
LLDP is typically implemented by Ethernet switches, and they actively send out and receive
LLDP packets. LLDP packets are sent regularly via each port of a switch and are addressed
to a bridge-filtered multicast address, and are therefore not forwarded by switches, but only
sent across a single hop.
The information learned from received LLDP packets is stored by all the switches in a local
Management Information Base (MIB). By crawling all the nodes in the network and retrieving
the corresponding information in the MIB, e.g. via SNMP, a network management system
can discover the network topology.
[Figure 4.1: LLDP Frame Structure. Fields: Preamble | Dst MAC | Src MAC | EtherType 0x88CC | Chassis ID TLV | Port ID TLV | Time to Live TLV | Optional TLVs | End of LLDPDU TLV | Frame Check Sequence]
[Figure 4.2: Basic OFDP Example Scenario. An SDN controller is connected to switches S1 and S2 (three ports each); the controller sends S1 a Packet-Out message carrying an LLDP packet (Chassis ID = S1) for each of Port 1, Port 2 and Port 3, and receives the LLDP packet back from the neighbouring switch via a Packet-In message]
As shown in Figure 4.1, an LLDP payload is encapsulated in an Ethernet frame with an
EtherType field set to 0x88cc. The frame contains an LLDP Data Unit (LLDPDU), which
consists of a number of type-length-value (TLV) structures. The mandatory TLVs are Chassis
ID, which is a unique switch identifier, Port ID and Time to live, which are self explanatory,
followed by a number of optional TLVs and an End of LLDPDU TLV. In Figure 4.1, the LLDP
payload is contrasted against the Ethernet header and trailer fields via a shading in grey.
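To make the TLV structure concrete, the following sketch encodes the mandatory TLVs of an LLDPDU in Python, following the IEEE 802.1AB layout of a 16-bit TLV header (7-bit type, 9-bit length) followed by the value bytes. The helper names are ours for illustration; they are not part of any controller platform.

```python
import struct

def tlv(tlv_type, value):
    # TLV header: 7-bit type and 9-bit length packed into 16 bits,
    # followed by the value bytes.
    return struct.pack("!H", (tlv_type << 9) | len(value)) + value

def build_lldpdu(chassis_id, port_id, ttl=120):
    # Mandatory TLVs: Chassis ID (type 1) and Port ID (type 2), both with
    # subtype 7 ("locally assigned"), Time to Live (type 3, in seconds),
    # and the End of LLDPDU TLV (type 0, zero length).
    return (tlv(1, b"\x07" + chassis_id.encode())
            + tlv(2, b"\x07" + port_id.encode())
            + tlv(3, struct.pack("!H", ttl))
            + tlv(0, b""))

pdu = build_lldpdu("S1", "Port 1")
```

In an actual frame, this payload would sit between the Ethernet header (with EtherType 0x88cc) and the frame check sequence.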
OFDP leverages the packet format of LLDP, but otherwise operates quite differently. Given
its quite narrow API and limited match-action functionality, an OpenFlow switch cannot by
itself send, receive and process LLDP messages. This needs to be initiated and executed
entirely by the controller. The process is illustrated in a very simple scenario shown in
Figure 4.2.
First in this scenario, the SDN controller creates an individual LLDP packet for each port on
each switch, in this case, a packet for Port 1, one for Port 2 and one for Port 3 on switch
S1. Each of these three LLDP packets has the Chassis ID and Port ID TLVs initialised
accordingly. 1
The controller then sends each of these three LLDP packets to switch S1 via a separate
OpenFlow Packet-Out message, with the included instruction to send the packet out on the
corresponding port. For example, the LLDP packet with Port ID = Port 1 is to be sent out
on Port 1, the packet with Port ID = Port 2 on Port 2 and so forth. All switches have a pre-
installed rule in their flow table which says that any LLDP packet received from any port
except the CONTROLLER port, is to be forwarded to the controller, which is done via an
OpenFlow Packet-In message.
In our example in Figure 4.2, we consider the LLDP packet which is sent out on Port 1 of
switch S1 and is received by switch S2 via Port 3, via the corresponding link.
According to the pre-installed forwarding rule, switch S2 forwards the received LLDP packet
to the controller via a Packet-In message. This Packet-In message also contains meta data,
such as the ID of the switch and the ingress port via which the packet was received. From
this information, and from information about the sender switch contained in the payload of
the LLDP packet, i.e. the Chassis ID and Port ID TLVs, the controller can now infer that
there exists a link between (S1, Port 1) and (S2, Port 3), and will add the information to its
topology database.
The process is repeated for every switch in the network, i.e. the controller sends a separate
Packet-Out message with a dedicated LLDP packet for each port of each switch, allowing
it to discover all available links in the network. The entire discovery process is performed
periodically, with a new discovery round or cycle initiated in fixed intervals, with a typical
default interval size of 5 seconds.
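The per-port discovery process described above can be modelled in a few lines of Python. This is an illustrative sketch only; the callback names are hypothetical and do not correspond to the POX API.

```python
def ofdp_round(switches, send_packet_out, wait_for_packet_ins):
    # switches: dict mapping switch id -> list of port ids.
    # The controller sends one dedicated LLDP packet per switch port,
    # each via its own Packet-Out message.
    for sw, ports in switches.items():
        for port in ports:
            send_packet_out(sw, port, lldp={"chassis_id": sw, "port_id": port})

    links = set()
    for pkt_in in wait_for_packet_ins():
        # Packet-In metadata gives the receiving switch and ingress port;
        # the LLDP payload identifies the sending switch and egress port.
        src = (pkt_in["lldp"]["chassis_id"], pkt_in["lldp"]["port_id"])
        dst = (pkt_in["switch"], pkt_in["in_port"])
        links.add((src, dst))
    return links

# Minimal simulated run: S1 has three ports; one LLDP packet sent out on
# (S1, Port 1) arrives at (S2, Port 3).
sent = []
def send_packet_out(sw, port, lldp):
    sent.append((sw, port))
def wait_for_packet_ins():
    return [{"lldp": {"chassis_id": "S1", "port_id": 1},
             "switch": "S2", "in_port": 3}]

links = ofdp_round({"S1": [1, 2, 3]}, send_packet_out, wait_for_packet_ins)
```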
1 Any other unique switch identifier could be used instead of the Chassis ID, and could for example be included as an optional TLV.

Most current SDN controller platforms implement this topology discovery mechanism (OFDP),
such as NOX [52], POX [94], Floodlight [45], Ryu [120], Beacon [46], etc. Looking at the
source code of the topology discovery implementations reveals only some very minor
variations, mostly in regards to the timing of message sending. POX (and NOX) will spread the
sending of all the LLDP Packet-Out messages equally over the discovery interval. In con-
trast, Floodlight sends all these messages back-to-back at the beginning of the discovery
interval, while Ryu adds a small constant time gap between messages.
4.2.1 Controller Overhead of OFDP
Controller load and performance is critical for the scalability of a Software Defined Network
[137]. Since topology discovery is a service that typically runs continuously in the back-
ground on all SDN controllers, it is important to know the load it imposes on the controller.
The controller load due to OFDP is determined by the number of LLDP Packet-Out mes-
sages the controller needs to send and the number of LLDP Packet-In messages it receives
and needs to process.
The number of LLDP Packet-In messages (PIN_OFDP) received by the controller in a single
discovery round depends on the network topology, and is simply twice the number of active
inter-switch links in the network, one packet for each link direction.
The total number of LLDP Packet-Out messages (POUT_OFDP) a controller needs to send per
OFDP discovery round is the total number of ports in the network. As discussed earlier in
this section, the controller needs to send a dedicated LLDP packet with corresponding Port
ID and Chassis ID to each individual switch port.
With L being the number of links between switches in the network, N the number of switches,
and pi the number of ports of switch i, we can express this as follows:
P_{IN\_OFDP} = 2L    (4.2.1)

P_{OUT\_OFDP} = \sum_{i=1}^{N} p_i    (4.2.2)
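As a quick sanity check of Equations 4.2.1 and 4.2.2, the per-round message counts can be computed directly from the topology parameters. The helper below is illustrative only, not part of the OFDP implementation.

```python
def ofdp_message_counts(num_links, ports_per_switch):
    # ports_per_switch: list with the number of ports of each switch.
    p_in = 2 * num_links            # Equation 4.2.1: one Packet-In per link direction
    p_out = sum(ports_per_switch)   # Equation 4.2.2: one Packet-Out per switch port
    return p_in, p_out

# Small example: 3 switches in a line (2 inter-switch links),
# with 2, 3 and 2 ports respectively.
p_in, p_out = ofdp_message_counts(2, [2, 3, 2])  # -> (4, 7)
```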
Sending an LLDP Packet-Out message for each port on each switch seems inefficient. A
better alternative would be to only send a single Packet-Out message to each switch, and
ask it to flood the corresponding LLDP packet out via all its ports. This functionality is
supported in OpenFlow.
The problem is that each LLDP packet needs to have the Port ID TLV initialised to the
corresponding switch egress port. This is required so that the controller, upon receiving the
LLDP packet via a Packet-In message from the receiving switch, can determine the source
port of the discovered link. (This is illustrated in Figure 4.2.)
The current way to achieve this in OFDP, is for the controller to prepare and send a dedicated
LLDP packet via a separate Packet-Out message for each port of every switch.
A solution for the problem would be if we were able to instruct the switch to rewrite the LLDP
Port ID TLV on-the-fly, according to the port the packet is being sent out on. Unfortunately,
this is not possible, since OpenFlow switches do not support access to and rewriting of any
packet payload. In the following section, we present a solution to this problem.
4.3 Proposed Improvement - OFDPv2
The goal of OFDPv2 is to reduce the overhead of the topology discovery mechanism by
reducing the number of control messages that need to be sent by the controller. The basic
idea is simple. Instead of creating a unique LLDP packet for each port of each switch, and
sending each such packet to the corresponding switch via a separate OpenFlow Packet-Out
message, as is the case in OFDP, we create and send only a single LLDP packet to each
switch. We further provide instructions to the switch to forward the LLDP packet via each of
its ports, after adding a unique port identifier which allows the receiving switch to identify the
source port.
Two properties are needed from such a port identifier. Firstly, it needs to allow the controller
to map it unambiguously to the corresponding Port ID of the switch. Secondly, the SDN
switch needs to be able to rewrite it, which is not possible for the Port ID field in the LLDP
payload.
We use the source MAC address as the unique port identifier, since it meets these two
requirements. At the time of connection establishment between an OpenFlow switch and
controller, the switch informs the controller about its available ports, their Port IDs and the as-
sociated MAC addresses, in response to an OpenFlow OFPT_FEATURES_REQUEST mes-
sage. The controller therefore has a one-to-one mapping of MAC addresses and Port IDs
for each switch, which makes the MAC address a valid unique port identifier.
The second required feature of a port identifier in our approach is the ability to be rewritten
by SDN switches. OpenFlow supports the rewriting of packet headers, typically used for
updating TTL fields, or to implement Network Address Translation, etc. We will use this
mechanism to rewrite the source MAC address of outgoing LLDP packets.
Using these basic mechanisms, we propose a new version of the current SDN topology
discovery mechanism, and call it OFDPv2. Below, we provide further technical details, and
discuss the implementation of two variants of this basic idea, which we refer to as OFDPv2-A
and OFDPv2-B.
4.3.1 OFDPv2-A
OFDPv2-A involves the following changes to the current version of OpenFlow-based topol-
ogy discovery in SDN (OFDP):
1. We install a new set of rules on each switch, which specifies that each LLDP packet
received from the controller is to be forwarded on all available ports, and that the source
MAC address of the corresponding Ethernet frame is to be set to the address of the
port via which it is sent out. (Algorithm 1).
2. We modify the controller behaviour to limit the number of LLDP Packet-Out messages
sent to each switch to one. The Port ID TLV field in the LLDP payload is set to 0, and
will be ignored.
We further set the output port of each such Packet-Out message to the reserved port
OFPP_TABLE [8], which indicates that the packet is to be processed via the regular
OpenFlow pipeline of the switch, i.e. via the rules installed in its flow table.
3. Finally, we modify the Packet-In event handler on the controller, which processes in-
coming LLDP packets. Instead of parsing the Port ID TLV of the LLDP payload, we now
look at the source MAC address of the Ethernet header and lookup the corresponding
Port ID in the controller’s database, via its one-to-one mapping of MAC addresses and
switch Port IDs.
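The modified Packet-In handling in step 3 can be sketched as follows. The dictionary-based data structures are hypothetical stand-ins for the controller's internal state, with mac_to_port assumed to hold the (switch, MAC address) to Port ID mapping learned from the OFPT_FEATURES_REPLY messages.

```python
def handle_lldp_packet_in(pkt_in, mac_to_port, topology):
    # The receiving side comes from the Packet-In metadata, as in OFDP.
    dst_switch, dst_port = pkt_in["switch"], pkt_in["in_port"]
    # The sending switch still comes from the LLDP Chassis ID TLV ...
    src_switch = pkt_in["lldp"]["chassis_id"]
    # ... but the sending port is recovered from the rewritten source MAC
    # address instead of the (zeroed) Port ID TLV.
    src_port = mac_to_port[(src_switch, pkt_in["eth_src"])]
    topology.add(((src_switch, src_port), (dst_switch, dst_port)))

# Example: the MAC address of (S1, Port 1) identifies the source of a link
# discovered via a Packet-In from (S2, Port 3).
topology = set()
mac_to_port = {("S1", "aa:00:00:00:00:01"): 1}
pkt_in = {"switch": "S2", "in_port": 3, "eth_src": "aa:00:00:00:00:01",
          "lldp": {"chassis_id": "S1"}}
handle_lldp_packet_in(pkt_in, mac_to_port, topology)
```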
As referred to above, Algorithm 1 shows the modified LLDP packet processing on each
switch, i.e. it shows the installed match-action rules. Since LLDP packets from the controller
have the OFPP_TABLE option set, the packets will be processed according to these rules,
which are outlined in the following.
If an incoming packet is an LLDP packet (EtherType=0x88cc), and is received from the
controller via a Packet-Out message (inPort = CONTROLLER), then a copy of the packet is
sent out on each switch port (line 5), after the source MAC address has been set to the MAC
address of the port on which it is to be sent out (line 4).
OpenFlow does not support a loop construct such as used in Algorithm 1. We therefore
need to ’unroll’ the loop, and install a specific action list for each switch port.
Algorithm 1 OFDPv2 LLDP Packet Processing at Switch
1: for all received packets pkt do
2:     if pkt.etherType = LLDP and pkt.inPort = CONTROLLER then
3:         for all switch ports P do
4:             pkt.srcMAC ← P.MACaddr
5:             send copy of pkt out on port P
6:         end for
7:     end if
8: end for
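The unrolled form of the loop in Algorithm 1 can be sketched as a flat action list. For readability, the sketch below uses generic tuples; a real controller would emit the corresponding OpenFlow set-field and output actions instead.

```python
def unrolled_lldp_actions(ports):
    # ports: list of (port_no, mac_address) pairs for one switch.
    # For each port, first rewrite the Ethernet source MAC address,
    # then output a copy of the packet on that port.
    actions = []
    for port_no, mac in ports:
        actions.append(("set_field", "eth_src", mac))
        actions.append(("output", port_no))
    return actions

actions = unrolled_lldp_actions([(1, "aa:00:00:00:00:01"),
                                 (2, "aa:00:00:00:00:02")])
```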
4.3.2 OFDPv2-B
OFDPv2-B follows the same basic approach of OFDPv2-A, with only a few minor differences.
The key difference is that in this case, we do not install any specific forwarding rules at the
switches. Instead, we configure the controller to add an action list with each outgoing LLDP
Packet-Out message, which contains instructions about how to forward the packet. The
action list essentially contains the forwarding logic specified in Algorithm 1, without the test
in line 2.
The benefit of this approach is that it does not use up any of the expensive and limited
Ternary Content Addressable Memory (TCAM) typically used in high performance hardware
SDN switches [138]. Another benefit, as we have discovered, is that OFDPv2-B can be
used in cases where the SDN switches do not support the OpenFlow OFPP_TABLE option,
which is required in OFDPv2-A. However, the benefits of OFDPv2-B come at a cost of an
increased size of the OpenFlow Packet-Out message, resulting in a higher control traffic
overhead compared to OFDPv2-A. All of this will be discussed in more detail in the following
section.
4.4 Evaluation
We have implemented both variants of our proposed topology discovery mechanism (OFDPv2)
in Python on the POX SDN controller platform [94]. Our implementation is based on discov-
ery.py, POX’s implementation of OFDP, which we also used as a benchmark for comparison.
The source code of our implementation is available via github, for both OFDPv2-A [111] and
OFDPv2-B [112].
We performed extensive tests for a wide range of network topologies, to establish the func-
tional equivalence of OFDP and OFDPv2. 2 As expected, both versions were identical in
regards to their ability to discover active links in the network. (Details about our experiments
are provided in Section 4.4.1.)
The main purpose of our evaluation was to establish by how much OFDPv2 can increase
efficiency and reduce the overhead compared to OFDP, the current de facto standard. A
key measure of the overhead imposed on the controller is the number of control messages
it needs to handle. This impacts both the controller CPU load and the amount of
traffic imposed on the control channel.
2 Unless we specifically make the distinction between the two variants of our implementation, OFDPv2-A and OFDPv2-B, our following discussion of OFDPv2 applies to both variants.
There is no difference between OFDP and OFDPv2 in regards to the number of LLDP pack-
ets that are received by the controller via OpenFlow Packet-In messages. Irrespective of the
version of OFDP, this number is simply twice the number of active inter-switch links L in the
network. Therefore, based on Equation 4.2.1, we have:
P_{IN\_OFDPv2} = P_{IN\_OFDP} = 2L    (4.4.1)
We consequently focused on the number of LLDP packets that were sent out by the controller
via OpenFlow Packet-Out messages in each round of the topology discovery process. As
discussed in Section 4.2 and shown in Equation 4.2.2, in OFDP the controller sends out a
separate LLDP packet for each port on each switch in the network.
The key advantage of our modifications in OFDPv2 is that the number of LLDP Packet-Out
messages is reduced to only one per switch, or N in total, with N being the number of
switches in the network, i.e. we have:
P_{OUT\_OFDPv2} = N    (4.4.2)
We define the efficiency gain G as the relative reduction in the number of Packet-Out
control messages of OFDPv2 compared to OFDP. For a network with N switches, and p_i
ports for a switch i, this can be expressed as follows:

G = \frac{P_{OUT\_OFDP} - P_{OUT\_OFDPv2}}{P_{OUT\_OFDP}} = \frac{\sum_{i=1}^{N} p_i - N}{\sum_{i=1}^{N} p_i} = 1 - \frac{N}{\sum_{i=1}^{N} p_i}    (4.4.3)
We see that the gain is greater for networks with a higher number of total ports, i.e. it is higher
for a network with a higher average switch port density. We will verify this via experiments
using a number of example topologies.
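As an illustration, G follows directly from the switch and port counts. The values below use the four example topologies evaluated later in Section 4.4.1; the helper is a sketch, not part of the implementation.

```python
def efficiency_gain(num_switches, total_ports):
    # Equation 4.4.3: G = 1 - N / sum(p_i)
    return 1.0 - float(num_switches) / total_ports

# (switches, total ports) for the four example topologies of Section 4.4.1.
topologies = {
    "Topology 1": (85, 424),
    "Topology 2": (127, 380),
    "Topology 3": (100, 298),
    "Topology 4": (20, 80),
}
gains = {name: efficiency_gain(n, p) for name, (n, p) in topologies.items()}
# Topology 1: 1 - 85/424, i.e. roughly 80% fewer Packet-Out messages.
```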
Table 4.1: Software used in Implementation and Experiments
Software Function Version
Mininet [85] Network Emulator 2.1.0
Open vSwitch [10] Virtual SDN Switch 2.0.2
POX [94] SDN Controller Platform dart branch
Linux (Ubuntu) Host Operating System 14.04
Python Programming Language 2.7
4.4.1 Experimental Setup
For our initial experimental evaluation, we used the Linux based Mininet [85] network emu-
lator, which allows the creation of a network of virtual SDN switches and hosts, connected
via virtual links. We further used Open vSwitch [10], a virtual (software based) SDN switch
with support for OpenFlow.
Mininet has been shown to provide a high level of fidelity for realistic and reproducible net-
work experiments [57].
As previously mentioned, we used POX as our SDN controller platform, and we implemented
our proposed changes to the SDN topology discovery mechanism in Python. Table 4.1
summarises the software that was used for our prototype implementation and in all our
experiments. All experiments were run on a PC with an Intel i7-2600K CPU, running at
3.40GHz, with 8GB of RAM.
We considered four network topologies of switches and hosts in our experiments, two basic
tree topologies, a simple linear topology and a fat tree topology. Key parameters of these four
topologies, in particular the number of switches and ports, are shown in Table 4.2. Topology
1 is a tree topology with fanout f = 4 and depth d = 4. Topology 2 is also a tree topology,
but with f = 2 and d = 7. In these two tree topologies, hosts form the bottom layer of the
tree. The basic idea is illustrated in Figure 4.3, showing a smaller and easier to visualise
example of such a tree topology with f = 3 and d = 3. Switches are shown as (rounded)
squares, and hosts as ovals.
Table 4.2: Example Network Topologies and Key Parameters
Topology # Switches # Ports
Topology 1 Tree, d = 4, f = 4 85 424
Topology 2 Tree, d = 7, f = 2 127 380
Topology 3 Linear, N = 100 100 298
Topology 4 Fat Tree 20 80
Figure 4.3: Tree Topology with depth 3 and fanout 3
During the topology discovery process, all switches send out LLDP packets on all their ports,
including the edge ports which are connected to hosts. However, hosts do not understand
LLDP or OpenFlow, and will simply ignore any LLDP packet they receive. 3
Topology 3 is a simple linear topology of N=100 switches, with a host attached to each
switch. Finally, Topology 4 is a small fat tree [88], as often used in data centre networks.
The topology has 20 switches and a total of 80 switch ports, and is shown in Figure 4.4.
As in the other tree topologies, hosts are attached to the switches at the bottom layer of the
topology.
3 The discovery of hosts in an SDN network is a separate issue, not addressed by the topology discovery mechanism and therefore beyond the scope of this research.
Figure 4.4: Fat Tree Topology
4.4.2 Number of Packet-Out Control Messages
In our first experiment, we instrumented the POX controller to collect statistics about the
number of Packet-Out messages sent by the topology discovery component in each discov-
ery cycle, for each of our four example topologies. We ran each experiment 10 times, with
identical results, as expected. As also expected, both variants of our proposed improvement,
OFDPv2-A and OFDPv2-B produced identical results. While they differ in how Packet-Out
messages are sent, the number of messages is the same. Unless specifically mentioned,
we will therefore not differentiate between the two variants when
presenting the results in this section and simply use the generic term OFDPv2.
Table 4.3 shows the measured results, as well as the relative reduction in the number of
control messages, i.e. the efficiency gain G of OFDPv2 over OFDP, as defined in Equa-
tion 4.4.3. We see that the experimental results correspond to Equations 4.2.2 and 4.4.2
and the relevant parameters for the various topologies. For example, since Topology 1 has
85 switches and 424 ports, OFDP requires 424 LLDP Packet-Out messages compared to
the 85 of OFDPv2, as expected.
Table 4.3: Number of LLDP Packet-Out Control Messages
OFDP OFDPv2 Efficiency Gain G
Topology 1 424 85 80%
Topology 2 380 127 67%
Topology 3 298 100 67%
Topology 4 80 20 75%
[Figure 4.5: Number of Packet-Out Messages. Bar chart comparing OFDP and OFDPv2 for Topology 1 (Tree d=4 f=4), Topology 2 (Tree d=7 f=2), Topology 3 (Linear N=100) and Topology 4 (Fat Tree)]
Figure 4.5 shows a graphical representation of these experiment results. It is evident that
OFDPv2 achieves a great reduction in the number of LLDP Packet-Out control messages,
with up to 80% fewer messages for Topology 1, and a minimum reduction of 67% for Topolo-
gies 2 and 3.
As per Equation 4.4.3, the degree of efficiency gain of OFDPv2 over OFDP solely depends
on the total number of ports and switches in the network, and no other topology characteris-
tics.
4.4.3 Control Traffic Overhead
The reduction in the number of required control messages in the topology discovery mecha-
nism obviously has a direct impact on the control traffic overhead. It would be straightforward
to calculate the control traffic given we know the number of control packets that are sent
out per discovery interval, if the packet size was known and constant. However, the size of
Packet-Out messages can vary due to the variable length encoding of TLV fields in the LLDP
packet, e.g. Chassis ID or Port ID. Furthermore, in the case of OFDPv2-B, the size of the
action list part of the Packet-Out message varies depending on switch characteristics, e.g.
the number of ports. We therefore simply measured the control traffic overhead imposed by
the topology discovery mechanism, using a packet capture tool (Wireshark [140]).
The Wireshark capture shows that for OFDPv2-B, a Packet-Out message carrying no actions
consists of 984 bits. For each port, the controller adds two actions to the Packet-Out
message, which adds 192 bits per port. In addition, each digit of the switch DPID, which
is used as the Chassis ID, adds 16 bits to the message. For a network with N switches,
where switch i has p_i ports and a DPID of m_i digits, the Packet-Out message size can
be expressed as follows:

Packet\text{-}Out_{size} = 984 + 192\,p_i + 16\,m_i    (4.4.4)

This equation shows how the Packet-Out message size grows with the number of ports. It
explains the increased overhead of OFDPv2-B compared to OFDPv2-A, which is evident in
the results of the following experiment.
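Equation 4.4.4 can be checked with simple arithmetic; the port and DPID-digit counts below are hypothetical example values.

```python
def ofdpv2b_packet_out_bits(num_ports, dpid_digits):
    # Equation 4.4.4: 984 bits base size, plus 192 bits of actions per
    # port, plus 16 bits per digit of the DPID (carried as Chassis ID).
    return 984 + 192 * num_ports + 16 * dpid_digits

# Hypothetical example: a 4-port switch with a 16-digit DPID.
size = ofdpv2b_packet_out_bits(4, 16)  # -> 2008 bits
```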
Our experiment was conducted on the same four topologies as the previous experiment,
using the POX controller’s default discovery interval of 5 seconds. For each topology, we
measured the OFDP control traffic overhead (in kbps) due to Packet-Out messages. We
also performed the same measurement for both variants of our proposed improvements, i.e.
OFDPv2-A and OFDPv2-B.
Figure 4.6 shows this control traffic overhead for the considered topology discovery
mechanisms for our four topologies.

[Figure 4.6: Bandwidth Usage of Topology Discovery. Bar chart of the control traffic overhead (kbps) of OFDP, OFDPv2-A and OFDPv2-B for Topologies 1 to 4]

As to be expected, both variants of OFDPv2 achieve a significant reduction in control
traffic overhead.
The control traffic overhead is simply the total number of bits of the Packet-Out messages.
OFDPv2-A achieves a reduction ranging from 80% in Topology 1, to a minimum reduction of
66% in Topology 3. OFDPv2-B has a higher control traffic overhead compared to OFDPv2-
A, but still achieves an improvement over OFDP of up to 63% for Topology 1 and a minimum
reduction of 50% for Topologies 2 and 3.
The increased overhead of OFDPv2-B stems from the fact that Packet-Out messages are
larger than in the case of OFDPv2-A, since they include an action list with instructions on
how to handle the packet. In contrast, OFDPv2-A achieves this by having the corresponding rules
installed in the switches’ flow tables. Essentially, OFDPv2-B trades off the use of a slightly
smaller amount of (limited and expensive) TCAM memory, typically used on high perfor-
mance SDN switches, for an increased control traffic overhead.
Overall, both variants of OFDPv2 achieve a significant reduction in control traffic overhead
over the state-of-the-art (OFDP). While the reduction in absolute terms might not be huge, in
particular for scenarios with a dedicated out-of-band control channel with high capacity links,
it is significant for SDNs with in-band control scenarios with limited link capacity, in particular
for wireless links.
In addition, for controllers like Floodlight, which send all topology discovery related Packet-
Out messages as a burst at the beginning of each discovery interval, our proposed improve-
ment can significantly reduce the resulting spike in load on the control channel.
4.4.4 Impact on Controller CPU Load
Controller load is critical for any SDN application and is a key factor for network scalability
and performance [137]. We are interested in how the reduction in LLDP Packet-Out control
messages achieved in OFDPv2 reduces the CPU load imposed on the controller by the
topology discovery service. In our experiment, we continuously ran the topology discovery
service, initiating a new discovery round every 5 seconds, which is the default interval in
POX. No other service or application was running at the controller, which means that the
CPU load caused by the POX process is a good indication of the topology discovery service
load.
We started our measurements after the initial network initialisation, e.g. the establishment
of all switch-controller connections and handshakes, had been completed. The duration of
each experiment was 300 seconds.
To measure CPU time, we used the cpu_percent() function from psutil, a cross-platform
process and system utilities module for Python [12].
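The cumulative CPU-time measurement can be sketched as follows. This is a simplified illustration rather than our exact measurement script: `sample_fn` stands in for psutil's `Process.cpu_percent()` applied to the POX process, and the interval and sample counts in the example are arbitrary:

```python
import time

def cumulative_cpu_time_ms(sample_fn, interval_s, n_intervals):
    """Accumulate the CPU time (in ms) of a process by sampling its CPU
    utilisation at fixed intervals. `sample_fn` returns the CPU percentage
    used since the previous call -- in our setup this would be
    psutil.Process(pox_pid).cpu_percent. Returns one cumulative value per
    interval, suitable for plotting as in Figure 4.7."""
    cumulative = []
    total_ms = 0.0
    for _ in range(n_intervals):
        time.sleep(interval_s)
        percent = sample_fn()  # CPU % over the last interval
        total_ms += percent / 100.0 * interval_s * 1000.0
        cumulative.append(total_ms)
    return cumulative
```

With psutil installed, `sample_fn` would simply be the bound method `psutil.Process(pid).cpu_percent` for the controller's process id.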
Figure 4.7 shows the cumulative CPU time consumed by the POX controller running only
the topology discovery module for OFDP, OFDPv2-A, and OFDPv2-B. The figure shows the
results of a single run of the experiment for Topology 1. The CPU time is plotted in 1 second
intervals. In this scenario, OFDPv2-A and OFDPv2-B achieve an almost 40% reduction
in CPU load compared to OFDP. This indicates that the processing and sending of LLDP
Packet-Out messages is a significant component of controller CPU load due to the topology
discovery service, and a reduction of these messages directly results in a lower load for the
controller.
For all versions of topology discovery, the cumulative CPU time increases gradually and
smoothly, indicating that there are no major bursts of CPU activity. This is partly
[Line plot: Cumulative Discovery CPU Time (ms) vs. Time (s), 0-300 s, for OFDP, OFDPv2-A and OFDPv2-B]
Figure 4.7: Cumulative CPU Time of Topology Discovery
due to the fact that in POX, all LLDP Packet-Out messages are evenly spread out over the
discovery interval.
We repeated the same experiment 20 times for each of our four example topologies. Fig-
ure 4.8 shows the total CPU time used by the topology discovery process over the entire
duration of the experiment (300 seconds). The figure shows the average over the 20 experi-
ment runs and also indicates the 95% confidence interval for the data.
In summary, we observed a reduction in CPU time and load, ranging from a minimum of
20% for Topology 3 up to 40% for Topology 1, and potentially greater for networks with a
higher port density.4
While this is less than the 67% to 80% reduction in the number of LLDP Packet-Out mes-
sages achieved by OFDPv2, as shown in Table 4.3, it is to be expected: OFDPv2 only reduces
the number of LLDP Packet-Out messages, whereas the number of LLDP Packet-In messages
is unchanged.
However, a CPU load reduction of up to 40% for a central component of any SDN architecture
is a significant improvement over the state-of-the-art.
4We also measured the CPU load and time for a topology with one switch at the top connected to 96 switches, to investigate the feasibility of our proposed methods on networks with high port density; the results showed that OFDPv2-A and OFDPv2-B had a lower overhead than OFDP.
[Bar chart: Cumulative Discovery CPU Time (ms) for OFDP, OFDPv2-A and OFDPv2-B across Topologies 1-4]
Figure 4.8: Cumulative CPU Time of Topology Discovery
4.5 Testbed Validation
To further validate our emulation results based on Mininet, we have conducted a range
of experiments on the OFELIA [37, 135] SDN testbed. OFELIA is a federated, OpenFlow-
based SDN testbed distributed across a number of sites, or islands, in a number of European
countries, including the UK, Switzerland, Germany, Belgium, Spain and Italy. At the time of
performing our experiments, OFELIA consisted of 10 islands, each equipped with a range
of SDN hardware switches, supporting the OpenFlow 1.0 standard.
Experiments in OFELIA can be configured via a web-based interface called Expedient [6].
In particular, Expedient allows the configuration of a virtual network or a slice of the physical
network, based on Flowvisor [132]. Once a slice or Flowspace is granted, the experiment
can be configured. Expedient further allows the instantiation of virtual machines that act as
the SDN controller as well as hosts.
Our aim was to configure the largest possible network for our experiments. Due to some
hardware and configuration problems, the largest network topology that we were able to
configure for our experiments consisted of 16 switches and 30 ports, distributed across four
islands: Trento (Italy) with 7 switches (T1, ..., T7), Barcelona (Spain) with 4 switches (B1,
..., B4), Zürich (Switzerland) with 3 switches (Z1, Z2, Z3) and Ghent (Belgium) with 2 switches
(G1, G2). The Switch G1 located at Ghent acts as a central hub which connects the different
[Network diagram: switches T1-T7 (Trento, Italy), B1-B4 (Barcelona, Spain), Z1-Z3 (Zürich, Switzerland) and G1-G2 (Ghent, Belgium), interconnected via hub switch G1]
Figure 4.9: OFELIA Topology
islands. This topology is shown in Figure 4.9. The SDN switch model used in this topology
is the NEC IP8800/S3640-24T2XW, which is an OpenFlow-based switch running OpenFlow
version 1.0, in all islands, with the exception of Trento [5], which also included switches
based on the NetFPGA platform [100].
4.5.1 OFELIA Experiment
The main goal in performing our experiments on the OFELIA testbed was to validate our
emulation experiment results based on Mininet. In particular, we were interested to see if the
same level of efficiency gain could be achieved by OFDPv2 over OFDP. In this experiment,
we specifically focused on the CPU controller load.
Due to technical limitations of the OFELIA switch hardware, we were only able to implement
one of the two variants of OFDPv2, i.e. OFDPv2-B. As discussed in Section 4.3.1, OFDPv2-
A relies on the OFPP_TABLE option in the OpenFlow Packet-Out message, which instructs
the switch to forward the packet according to the rules in its flow table. After lengthy trials
and discussions with OFELIA island managers, it became apparent that the OFELIA SDN
switches did not support this feature, even though it is part of the OpenFlow 1.0 standard.
[Line plot: Cumulative Discovery CPU Time (ms) vs. Time (s), 0-300 s, for OFDP and OFDPv2-B]
Figure 4.10: Cumulative CPU Time of Topology Discovery in OFELIA Topology
For our experiment, we used the exact same POX implementation of OFDP and OFDPv2-B
as in our Mininet experiments discussed in Section 4.4.4. 5 We used the same experiment
scenario as in our Mininet experiments, e.g. using an experiment duration of 300 seconds,
and we also used the same approach to measure CPU load. The only difference in this
scenario is that we decreased the time interval between discovery rounds from 5 seconds
to 0.3 seconds. This increased the absolute values of the CPU load caused by the topology
discovery mechanism and increased the "signal-to-noise ratio" and hence the accuracy of
our measurements, for this relatively small topology. However, decreasing the discovery
interval does not affect the relative performance of OFDP and OFDPv2, which is our main
focus here.
Figure 4.10 shows the results from our OFELIA experiment, in particular it shows the cumu-
lative CPU time consumed by the POX controller running only the topology discovery module
for both OFDP and OFDPv2-B. The figure shows the result of a single run of the experiment,
and the CPU time is plotted in 1 second intervals.
We can see that OFDPv2-B consumes a total of around 2,000 milliseconds of CPU time
over the period of the experiment, compared to roughly 2,500 milliseconds of OFDP. This
equates to a reduction of 20% achieved by OFDPv2-B.
The efficiency gain is less than what we saw in our Mininet experiments in Section 4.4.4.
5This demonstrates the benefit of Mininet-based emulation over simulation, since code can be migrated to a real network with minimal effort.
[Bar chart: Cumulative Discovery CPU Time (ms) for OFDP and OFDPv2-B, in OFELIA and Mininet]
Figure 4.11: Cumulative CPU Time of Topology Discovery in OFELIA and Mininet
However, this is to be expected, since the efficiency gain corresponds to the port density,
which is very low here. As discussed earlier, the CPU load depends on the number of LLDP
packets, which for OFDPv2 in turn depends on the total number of ports in the network.
Since this topology has only a small number of ports, the gain is expected to be smaller.
We emulated our OFELIA topology in Mininet, and performed the identical experiment.
Figure 4.11 shows the result. The figure shows the total CPU time over the 300 seconds for
both OFDP and OFDPv2-B, from both the Mininet and corresponding OFELIA experiments.
The results are averaged over 20 experiment runs, and the graph also shows the narrow
95% confidence intervals. We observe that the OFELIA and Mininet results differ in terms of
their absolute values. This is to be expected, since the SDN controllers in the two scenarios
run on different hardware with different CPU speeds. However, what is critical here is that
the relative improvement is almost identical, with a reduction in CPU load of OFDPv2 over
OFDP of 20% in both cases.
While we were unable to implement and evaluate OFDPv2-A on OFELIA, we expect the
results to be almost identical to the ones achieved by OFDPv2-B, as has been the case in all
our Mininet experiments, and as shown in Figure 4.8. In summary, the OFELIA
experiments validate the fidelity of our results obtained via emulation in Mininet, as discussed
[Diagram: 4x4 grid of switches S1-S16 with hosts H1 and H2 attached]
Figure 4.12: Mesh Topology in Mininet-ns3-WiFi
in Section 4.5.
4.6 Mininet-ns3-WiFi Experiment
Mininet [85], a Linux-based network emulator, is widely used for Software Defined Network
experiments, due to its in-built support for OpenFlow switches. However, Mininet currently
only supports very basic emulation of wireless links. A recent work has addressed this
limitation by using the real-time feature of ns-3 network simulator and integrated its IEEE
802.11 channel emulation feature with Mininet. We refer to this hybrid testbed as Mininet-
ns3-WiFi [78]. We will discuss this testbed in more details in Chapter 6.
The main goal in performing our experiments on the Mininet-ns3-WiFi testbed was to validate
our emulation experiment for Wireless Mesh Networks scenarios. In particular, we were
interested to see if the same level of efficiency gain could be achieved by OFDPv2 over
OFDP. In this experiment, we specifically focused on the CPU controller load.
For our experiments, we have configured the scenario with 16 OpenFlow switches shown in
Figure 4.12 in Mininet-ns3-WiFi and Mininet. Since Mininet does not support wireless links,
we have used the IEEE 802.11g physical and MAC layer module from ns-3 [106] to emulate
wireless links.
[Line plot: Cumulative Discovery CPU Time (ms) vs. Time (s), 0-300 s, for OFDP, OFDPv2-A and OFDPv2-B]
Figure 4.13: Cumulative CPU Time of Topology Discovery for Mesh Topology in Mininet-ns3-WiFi
We also used the exact same POX implementation of OFDP and OFDPv2 as in our Mininet
experiments discussed in Section 4.4.4. We used the same experiment scenario as in our
Mininet experiments, e.g. using an experiment duration of 300 seconds, and we also used
the same approach to measure CPU load. We decreased the time interval between discov-
ery rounds from 5 seconds to 0.3 seconds to increase the absolute values of the CPU load
for this relatively small topology as explained in Section 4.5.1.
Figure 4.13 shows the results from our Mininet-ns3-WiFi experiment, in particular it shows
the cumulative CPU time consumed by the POX controller running only the topology discov-
ery module for OFDP, OFDPv2-A, and OFDPv2-B. The figure shows the result of a single
run of the experiment, and the CPU time is plotted in 1 second intervals.
In this scenario, we can see OFDPv2-A and OFDPv2-B achieve an almost 15% reduction in
CPU load compared to OFDP. OFDPv2-A and OFDPv2-B consume a total of around 11,000
milliseconds of CPU time, compared to roughly 13,000 milliseconds for OFDP, over the period
of the experiment.
We emulated our Mininet-ns3-WiFi topology in Mininet, and performed the identical experi-
ment.
Figure 4.14 shows the result. The figure shows the total CPU time over the 300 seconds
[Bar chart: Cumulative Discovery CPU Time (ms) for OFDP, OFDPv2-A and OFDPv2-B, in Mininet and Mininet-ns3-WiFi]
Figure 4.14: Cumulative CPU Time of Topology Discovery in Mininet-ns3-WiFi and Mininet
for both OFDP and OFDPv2-B, from both the Mininet and corresponding Mininet-ns3-WiFi
experiments. The results are averaged over 10 experiment runs, and the graph also shows
the narrow 95% confidence intervals. We observe that the Mininet-ns3-WiFi and Mininet
results are consistent with each other, as expected, since the SDN controllers in the two
scenarios run on the same hardware. The relative improvement is almost identical, with
a reduction in CPU load of OFDPv2 over OFDP of 15% in both cases.
In summary, the Mininet-ns3-WiFi experiments validate the fidelity of our results obtained via
emulation in Mininet.
4.7 Conclusions
In this chapter, we have addressed the issue of topology discovery in OpenFlow based Soft-
ware Defined Networks. Topology discovery is a key component underpinning the logically
centralised network management and control paradigm of SDN, and is a service provided
by all SDN controller platforms. We have discussed OFDP, the current de facto standard
for SDN topology discovery. OFDP has been implemented by NOX, the original SDN con-
troller, and has since been adopted by most if not all key SDN controller platforms. We
have analysed the overhead of OFDP in terms of the controller load, and have proposed
and implemented two variants of an improved version, which we informally call OFDPv2.
Our modified version is identical in terms of discovery functionality, but achieves this with a
significantly reduced number of control messages that need to be handled by the controller
and SDN switches.
Via experiments, we have demonstrated that our proposed modifications significantly reduce
the control traffic overhead as well as the CPU load imposed on the SDN controller, with a
reduction of up to 40% for both metrics in our considered example topologies. Given that
the controller is often the performance bottleneck of a Software Defined Network, making a
core service such as topology discovery more efficient can have a significant impact on the
overall network performance and scalability. Our proposed changes are compliant with the
OpenFlow standard. They are also simple and very practical, and can be implemented with
relatively minimal effort, as outlined in this chapter.
Finally, we are not aware of any related works that have analysed the overhead of topology
discovery in SDN, or have proposed any related improvements to the current state-of-the-art.
While the work presented in this chapter is highly relevant for wireless networks, the benefits
can be equally relevant for wired SDNs.
Chapter 5
Link Capacity Estimation
5.1 Introduction
Most SDN controllers implement a topology discovery service which uses probe packets to
infer the existence of links in the network [113]. Information about the current network traffic
or flows can be obtained by a controller via requesting port and flow statistics from switches
using OpenFlow.
The other critical piece of information in this context is the capacity of network links. This
information is readily available in wired networks, e.g. the capacity of a 10 Gigabit Ethernet
link is known and constant. However, this is not the case in wireless networks, where the
capacity of links can vary significantly, depending on factors such as range, interference,
fading, etc. It is therefore important to have a method to measure or estimate wireless link
capacity.
In this chapter, we explore the use of packet pair/train probing for link capacity estimation in
wireless SDNs. Our goal is to provide a solution that works with any OpenFlow compliant
switch and controller, and does not need access to any low level link information, such as
RSSI, or transmission statistics as used in some existing capacity estimation approaches
[54]. A further goal was to implement the link capacity estimation mechanism as a network
service provided by the controller, without any involvement of end hosts.
We have implemented packet pair/train probing based link capacity estimation mechanism
in the Ryu SDN controller platform and have evaluated it in an emulated wireless network.
We have shown that for the right train length, the estimation accuracy is very good. We also
evaluated the impact of cross traffic on the accuracy of the mechanism, and found that cross
traffic in the reverse direction to the probe packets significantly reduces the accuracy, which
is a well-known problem in packet pair/train probing. We have proposed and implemented
a mechanism that is able to compensate and correct this error, and we have demonstrated
that this approach makes our link capacity estimation method immune to cross traffic.
The rest of the chapter is organised as follows. Section 5.2 gives an overview of relevant
bandwidth and link capacity estimation approaches. Section 5.3 presents our proposed link
capacity estimation mechanism based on packet pair probing, and shows our experimental
evaluation. Section 5.4 extends the approach to packet train probing, investigates the im-
pact of cross traffic and presents a solution to the identified problem of loss of estimation
accuracy. Section 5.5 concludes the chapter.
5.2 Related Work
There exists a large body of work on general bandwidth and capacity estimation [119, 134].
In order to give an overview of the key approaches, it is useful to provide a couple of basic
definitions. Link capacity is defined in [119] as ‘the maximum possible IP layer transfer rate
at a link’, and we will use this definition in the context of this chapter. It is important to note
that this metric is lower than the link capacity defined at the physical or link layer, due to
various overheads, e.g. encapsulation. The metric is independent of the traffic load on the
link. It is important to distinguish this from the term ‘available bandwidth’, which is defined
as ‘the minimum spare capacity’ in [119]. The available bandwidth is simply the link capacity
minus the total traffic load of the link.
A well-known estimation method is Variable Packet Size probing (VPS) [28, 70]. The key idea
is to measure the RTT from the source to each hop on the path as a function of the probing
packet size. The RTT to each hop consists of three components, serialisation (transmission)
delay, propagation delay and queueing delay. The goal is to isolate the serialisation delay
∆ = L/C, for a packet of size L, and a link transmission rate C. The measurement of ∆
for a given packet size L allows us to compute the link capacity C. VPS sends multiple probe
packets for a range of packet sizes. The minimum delay is chosen, since it is assumed that
at least one packet will not experience any queueing delay. The results for different packet
sizes are used to remove the propagation delay component, since this delay component is
independent of the packet size.
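As a minimal numeric sketch of the VPS idea (single link, made-up numbers): since the propagation delay is independent of packet size, it cancels when the minimum RTTs measured for two different packet sizes are subtracted, leaving only the serialisation slope 1/C:

```python
def vps_capacity_bps(size1_bytes, min_rtt1_s, size2_bytes, min_rtt2_s):
    """Variable Packet Size probing (simplified, single link): the minimum
    RTT grows linearly with packet size, with slope 1/C; the propagation
    delay component cancels in the difference of the two measurements."""
    slope = (min_rtt2_s - min_rtt1_s) / ((size2_bytes - size1_bytes) * 8)
    return 1.0 / slope  # capacity in bits per second

# Example: 500-byte and 1500-byte probes over a hypothetical 10 Mbps link
# with 1 ms propagation delay: min RTT = propagation + serialisation delay
rtt_500 = 0.001 + 500 * 8 / 10e6
rtt_1500 = 0.001 + 1500 * 8 / 10e6
print(vps_capacity_bps(500, rtt_500, 1500, rtt_1500) / 1e6)  # ~10.0 Mbps
```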
Packet pair/train dispersion probing [69], or simply packet pair probing, has been designed to
measure the end-to-end capacity of a network path. The source sends pairs of equal sized
packets back-to-back. The dispersion of a packet pair at a link is the difference in arrival
time between the two probe packets. If we only consider a single link, the time dispersion ∆
is caused by the transmission delay (or serialisation delay), and is calculated as ∆ = L/C,
for a packet size L and link capacity C 1. Therefore, if we can measure ∆, we can calculate
the link capacity simply as follows:
C = L/∆    (5.2.1)
For an end-to-end path, the dispersion ∆R measured at the receiver is the maximum dis-
persion of all the individual links, i.e. the dispersion caused by the bottleneck link. The
end-to-end path capacity C, which is the capacity of the bottleneck link, is then computed as
C = L/∆R.
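The bottleneck argument can be illustrated with a short sketch (the link capacities are arbitrary example values):

```python
def path_capacity_bps(link_capacities_bps, packet_bytes=1024):
    """End-to-end dispersion: each link imposes a dispersion of L/C_i, and
    the receiver observes the maximum of these (the bottleneck link's).
    Recovering the capacity as L/dispersion therefore yields the minimum
    link capacity along the path."""
    L_bits = packet_bytes * 8
    dispersion_r = max(L_bits / c for c in link_capacities_bps)
    return L_bits / dispersion_r

# A path of 100 Mbps, 10 Mbps and 54 Mbps links: the 10 Mbps link dominates
print(path_capacity_bps([100e6, 10e6, 54e6]) / 1e6)  # ~10.0 Mbps
```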
The problem with this approach is that it assumes the absence of any cross traffic, which can
cause probe packets to be interleaved with other packets, thereby inflating the dispersion
measurements. This is obviously not a very practical or realistic assumption. A number
of papers propose to use multiple packet pairs and statistical filtering to mitigate the errors
caused by cross traffic.
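A minimal sketch of such filtering, using the median as the robust statistic (the dispersion samples below are made-up values for a nominal 10 Mbps link):

```python
import statistics

def filtered_capacity_bps(dispersions_s, packet_bytes=1024):
    """Cross traffic inflates some dispersion samples, which lowers the
    corresponding capacity estimates. Taking the median over many packet
    pairs filters out the inflated samples, provided most pairs are clean."""
    L_bits = packet_bytes * 8
    estimates = [L_bits / d for d in dispersions_s]
    return statistics.median(estimates)

# Three clean samples plus two inflated by cross traffic (illustrative)
samples = [8.192e-4, 8.192e-4, 8.192e-4, 1.6e-3, 2.1e-3]
print(filtered_capacity_bps(samples) / 1e6)  # ~10.0 Mbps
```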
Packet train probing is an extension of packet pair probing and uses multiple back-to-back
packets instead of only two. While this does typically not increase the estimation accuracy
in terms of the average value, it can decrease its variance [69].
1For most typical links, in particular short range wireless links, we can ignore propagation delay. Queuing delay on the ingress interface is also negligible for a single link.
Self-Loading Periodic Streams (SLoPS) is a method for estimating end-to-end available
bandwidth [71]. The method requires the source sending a stream of packets (typically
around 100) at a certain rate. The approach monitors variations of the one-way delay ex-
perienced by probe packets. If the sending rate R is greater than the available bandwidth
A, packets will incur additional queueing delay at the bottleneck link. By varying R and
measuring the end-to-end packet delay, it is possible to estimate A.
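The core SLoPS logic can be sketched as a binary search over the probing rate R. Here `delay_increases` is a hypothetical stand-in for sending a stream at rate R and testing whether the one-way delays trend upward; the link values in the example are made up:

```python
def slops_estimate(delay_increases, lo_bps, hi_bps, iterations=20):
    """SLoPS idea: if the probing rate R exceeds the available bandwidth A,
    the one-way delays of the probe stream trend upward (queue build-up at
    the bottleneck). A binary search over R then converges on A."""
    for _ in range(iterations):
        mid = (lo_bps + hi_bps) / 2
        if delay_increases(mid):
            hi_bps = mid   # rate is above the available bandwidth
        else:
            lo_bps = mid   # rate is at or below the available bandwidth
    return (lo_bps + hi_bps) / 2

# Hypothetical link with 10 Mbps available bandwidth out of 54 Mbps capacity
estimate = slops_estimate(lambda r: r > 10e6, 0, 54e6)
print(round(estimate / 1e6, 2))  # ~10.0
```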
Trains of Packet Pairs (TOPP) [97] is also an end-to-end available bandwidth estimation
approach similar to SLoPS. The key differences are in terms of the statistical processing of
the measured parameters.
Having reviewed the existing bandwidth or capacity estimation approaches, we believe that
packet pair/train probing is the most promising approach for wireless SDNs. It is simple,
relatively low cost, and it can measure link capacity and not just available bandwidth. The
main limitation of packet pair probing is its susceptibility to cross traffic. We will show a
method to overcome this problem. To the best of our knowledge, there are no published
works on wireless link capacity estimation in SDN.
5.3 Packet Pair Probing in SDN
In order to implement packet pair probing for link capacity estimation in SDN, we need to
be able to transmit two packets back-to-back across a link, and measure their respective
arrival times in order to compute the packet dispersion ∆. Traditional bandwidth estimation
methods, including packet pair probing, are implemented on an end-to-end basis, with hosts
sending and receiving the probe packets. In SDN, the link capacity measurement needs to
be performed by the controller and switches, and we do not want to involve any hosts.
Our approach is inspired by the topology discovery mechanism implemented by most SDN
controller platforms [114]. In order to initiate the sending of the packet pair across a link,
the SDN controller sends an OpenFlow Packet-Out message to a switch, with a 1024 byte
probe packet P, as well as an action list with forwarding instructions. The list consists of two
identical actions, saying that packet P is to be forwarded twice via a particular switch port.
[Diagram: controller C connected to switches S1 and S2, which are joined by a wireless link of distance d; hosts H1 and H2 are attached via wired links]
Figure 5.1: Basic Network Scenario
The packet in the Packet-Out message is marked as a probe packet. We do this by setting
the EtherType field in the Ethernet header to a unique, unused value. Prior to starting the
probing process, the controller also installs a rule on each of the switches instructing them
to send any probe packets directly to the controller.
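The probe marking can be illustrated with a plain-Python sketch of the frame construction. The EtherType value 0x88B5 below is just an example of an otherwise unused value, not necessarily the one used in our implementation:

```python
import struct

# Hypothetical EtherType used to mark probe packets (any unused value works)
PROBE_ETHERTYPE = 0x88B5

def build_probe_frame(dst_mac: bytes, src_mac: bytes, total_len=1024) -> bytes:
    """Build a probe Ethernet frame marked with a unique EtherType, padded
    to the desired probe size. The switches match on this EtherType and
    forward the probe packets straight to the controller."""
    header = struct.pack('!6s6sH', dst_mac, src_mac, PROBE_ETHERTYPE)
    payload = b'\x00' * (total_len - len(header))
    return header + payload

frame = build_probe_frame(b'\xff' * 6, b'\x02\x00\x00\x00\x00\x01')
print(len(frame))  # 1024
```

The frame is then wrapped in an OpenFlow Packet-Out message with two identical output actions, so the switch transmits the two copies back-to-back.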
A simple example scenario is shown in Figure 5.1. Here, we have an SDN controller C,
two SDN switches S1 and S2 connected via a wireless link of distance d, and two hosts
H1 and H2 attached to the switches via wired links. The controller C sends a Packet-Out
message to switch S1 with a marked probe packet P and with the instructions to send out
two copies back-to-back via port P2. Switch S2 receives the first packet, looks up its flow
table, and forwards it to the controller. When the second packet arrives, it is also forwarded
to the controller. In order to estimate the capacity of the link (S1,S2), we need to measure
the packet time dispersion ∆ = t2− t1, i.e. the difference in arrival time of the second packet
t2 and the arrival time of the first packet t1.
Ideally, we would measure the arrival times of the packets when they arrive at switch S2 via
port P2. We refer to this version of the measured packet dispersion as ∆S = t2S − t1S . How-
ever, adding time stamps to received packets and sending the information to the controller
is currently not supported in OpenFlow switches. Since our goal is to find a solution that
works for any OpenFlow standard compliant controller and switch, we need an alternative
approach.
The only other practical option we have to measure the packet dispersion is by doing it via
the controller, i.e. by measuring the arrival time of the Packet-In messages that carry the
respective probe packets. This controller-based dispersion measurement ∆C is computed
from the arrival times at the controller t1C and t2C , i.e. ∆C = t2C − t1C .
With the measured packet dispersion ∆, we can now compute the link capacity estimation
as per Equation 5.2.1. We have CS = L/∆S for the capacity estimation based on the timing
measurement at the switch, and CC = L/∆C for the one based on the timing measurement
at the controller. We expect CC to be less accurate than CS, since we expect the timing
measurements at the controller to be less accurate.
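The controller-side computation of CC reduces to a few lines. This is a sketch, with the timestamps assumed to be taken on arrival of the two Packet-In messages:

```python
def capacity_from_dispersion(t1_s: float, t2_s: float, packet_bytes: int = 1024) -> float:
    """Controller-side estimate C_C = L / Delta_C, where Delta_C is the gap
    between the arrival times of the two Packet-In messages carrying the
    probe packets."""
    dispersion = t2_s - t1_s
    if dispersion <= 0:
        raise ValueError("second probe must arrive after the first")
    return packet_bytes * 8 / dispersion  # bits per second

# Example: 1024-byte probes arriving 0.8192 ms apart -> ~10 Mbps
print(capacity_from_dispersion(0.0, 8.192e-4) / 1e6)
```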
5.3.1 Experiment
We have implemented our link capacity estimation in Python using the Ryu SDN controller
platform [120]. For our experiments, we have configured the simple scenario shown in Fig-
ure 5.1 in Mininet [85], a Linux-based network emulator with built-in support for SDN. Mininet
uses Open vSwitch [10], a software SDN switch supporting OpenFlow (version 1.0).
Since Mininet does not support wireless links, we have used the IEEE 802.11g physical
and MAC layer module from ns-3 [106] to emulate our wireless link. The ns-3 network
simulator has the ability to work in so-called real-time/emulation mode, where the simulator
can exchange packets in real-time with the outside world. Packets originating from simulated
nodes can be processed by a real network. Additionally, this also allows driving a simulated
network with packets from real nodes. We are using this second option. The simulated
network in our case is the single wireless link between switches S1 and S2. The ‘real nodes’
are the virtual Mininet nodes. Details on this integration of Mininet and ns-3 which we used
for wireless link emulation in our experiments are available in [78].
We used Iperf [2] to measure the wireless link capacity and we used this measurement
as a reference for comparison with our packet pair probing based estimations. The Iperf
measurements were done between host H1 and host H2, at a separate time from when
packet pair probing occurs. Since both hosts are connected via a non-bandwidth-limited
virtual link to the switches, the wireless link is the bottleneck, and consequently Iperf reports
its capacity.
Table 5.1: Software used in Implementation and Experiments
Software           Function                    Version
Mininet [85]       Network Emulator            2.1.0
ns-3 [106]         802.11 Link Emulation       ns-3.21
Iperf [2]          Link capacity measurement   2.05
Open vSwitch [10]  Virtual SDN Switch          2.0.2
Ryu [120]          SDN Controller Platform     3.19
All our experiments were run on a PC with an Intel i7-2600K CPU running at 3.40GHz, with
8GB of RAM. Table 5.1 lists the key software tools that we have used for our prototype
implementation and in all our experiments.
For our first experiment, we measured the link capacity of the link (S1, S2) (shown in Fig-
ure 5.1) using Iperf (UDP). We varied the distance of the link by changing the node position
in the ns-3 link emulation code, and we considered distances from 0 to 115 meters. All
experiments were performed 10 times, and Figure 5.2 shows the averages and the corre-
sponding 95% confidence intervals. The Iperf results are shown as the black line in the
figure. As expected, we see a gradual decrease of the achieved throughput from around 24
Mbps for a distance of d = 0m to around 1 Mbps for a distance of 115m. Beyond a distance
of 115m, Iperf fails to report any measurement results, and the link is disconnected. This
result is consistent with measurement on real 802.11 links [54], and gives us confidence in
the validity of our ns-3 based link emulation.
Figure 5.2 also shows the result of our link capacity estimation CC using packet pair probing
via time dispersion measurements at the controller. We see that the estimation significantly
underestimates the actual capacity (as measured by Iperf) for short distance (high capacity)
links. The estimation accuracy is markedly better for low capacity links.
For comparison, we also measured CS, the link capacity estimation based on packet arrival
time measurements at the switch, rather than at the controller. We obtained the time mea-
surements t1S and t2S from ns-3. While we can do this in our experiment, this is obviously
not practical in a real network, and as mentioned previously, timestamping of packets is not
supported in OpenFlow. However, the results are illustrative, and show that the estimation
accuracy can be significantly improved in this approach.
[Line plot: Link Capacity (Mbps) vs. Distance (m), 0-115 m, for Iperf, controller-based and switch-based estimation]
Figure 5.2: Link Capacity Estimation using Packet Pair Probing
Since we want a link capacity estimation approach that is compatible with current OpenFlow-
based SDNs, we will focus in the remainder of the chapter on the controller-based approach,
and will try to improve its accuracy.
We believe the notable underestimation of CC for high capacity links is due to the limited
speed of the controller in reading and processing back-to-back Packet-In messages. Buffer-
ing of the second probe packet while processing the first one results in a greater value of the
time dispersion ∆C, and consequently in a lower link capacity estimation. In order to avoid
this problem, and to give the controller more time for the processing of the corresponding Packet-In messages, we investigate packet train probing in the following section.
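To make the effect described above concrete, the following toy calculation (all values are assumptions chosen for illustration, not measurements from our experiments) shows how even a sub-millisecond Packet-In processing delay inflates the measured dispersion and deflates the packet pair estimate C = L/∆:

```python
# Toy illustration (assumed values): controller-side processing delay
# inflates the packet pair dispersion and deflates the estimate C = L / delta.

L_BITS = 1500 * 8            # probe packet size in bits
TRUE_CAPACITY = 24e6         # assumed actual link capacity (bps)

true_delta = L_BITS / TRUE_CAPACITY        # ideal dispersion: 0.5 ms
processing_delay = 0.0003                  # assumed extra Packet-In handling time (s)
measured_delta = true_delta + processing_delay

estimate = L_BITS / measured_delta         # controller-side estimate
print(round(estimate / 1e6, 1))            # roughly 15 Mbps: a large underestimate
```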
5.4 Packet Train Probing in SDN
In packet train probing [119] [42], a single pair of back-to-back probe packets is replaced by
a greater number T of packets sent back-to-back, i.e. a train of packets. In this case, the
time dispersion ∆(T) is measured as the difference in arrival time of the last and first packet
in the train, and is therefore a function of the train length T. For a packet train of length T
and a time dispersion of ∆(T), we can compute the estimated link capacity as follows:²

C(T) = (T − 1)L / ∆(T)    (5.4.1)
The factor (T − 1) stems from the fact that in a train of length T, we have (T − 1) time gaps
between packets, compared to a single gap for a packet pair.
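Equation 5.4.1 translates directly into code; the following sketch (with illustrative numbers, not our measurement data) shows the estimator:

```python
def train_capacity(T, L_bits, delta):
    """Estimate link capacity (bps) from a packet train, per Equation 5.4.1:
    (T - 1) back-to-back gaps of L_bits each, spread over dispersion delta (s)."""
    return (T - 1) * L_bits / delta

# Illustrative values: a train of 40 probes of 1500 bytes whose first and
# last packets arrive 19.5 ms apart yields an estimate of about 24 Mbps.
print(round(train_capacity(40, 1500 * 8, 0.0195) / 1e6, 1))
```

Note that for T = 2 the expression reduces to the packet pair estimate L/∆.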
We also implemented packet train probing in the Ryu SDN controller. To do this, a couple
of changes were required to the packet pair probing implementation. First, the Packet-Out
message from the controller contains the same probe packet, but the action list now contains
T packet forwarding instructions instead of just 2.
One of the aims we were trying to achieve with packet train probing was to give the controller
more time to process incoming Packet-In messages with probe packets. We therefore only
sent the first and the last packet of a train to the controller. In order to achieve this, we marked
the first and the last packet of the train. The marking could not be done at the controller, since
this would require the different packets to be sent out via separate Packet-Out messages, in
which case it is not guaranteed that the switch will send out all the packets back-to-back, i.e.
they might be interleaved by other traffic (cross traffic), as discussed in more detail later. We
therefore needed to mark the first and last packet of a train at the switch. For this, we used
the ability of OpenFlow switches to rewrite packet headers, and added an action for the first
and last packet in the train to set the mark. We used the MAC destination address for the
marking, since it is not used for the packet delivery, but we could use any of the other layer
2 packet header fields.
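One way to construct such an action list is sketched below. We use plain placeholder classes in place of Ryu's OFPActionSetField and OFPActionOutput objects (the names and fields here are illustrative, not Ryu's actual API). Since the actions of a Packet-Out message are applied in sequence to the same packet, the mark must be set before the first output, cleared for the middle of the train, and set again before the last output.

```python
# Sketch (simplified, assumed API): build the action list for a single
# Packet-Out message that emits a train of T probe packets and marks only
# the first and the last copy via the MAC destination address.

class SetField:
    """Stand-in for an OpenFlow set-field action rewriting eth_dst."""
    def __init__(self, eth_dst):
        self.eth_dst = eth_dst

class Output:
    """Stand-in for an OpenFlow output action on a given port."""
    def __init__(self, port):
        self.port = port

def probe_train_actions(out_port, T, mark, unmark):
    # Actions are applied in order to the same packet: set the mark, emit
    # the first copy, clear the mark, emit the middle of the train, then
    # set the mark again and emit the last copy.
    actions = [SetField(mark), Output(out_port), SetField(unmark)]
    actions += [Output(out_port) for _ in range(T - 2)]
    actions += [SetField(mark), Output(out_port)]
    return actions

acts = probe_train_actions(out_port=2, T=40,
                           mark="02:00:00:00:00:01", unmark="02:00:00:00:00:00")
print(sum(isinstance(a, Output) for a in acts))  # 40 packet copies are emitted
```

A matching flow rule on each switch then forwards only the packets carrying the marked MAC destination address to the controller, and drops all other probe packets.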
We also needed to modify the forwarding rules for probe packets installed in all switches,
i.e. we needed to add an additional match rule, so that only the probe packets marked as either the first or the last of the train were forwarded to the controller. All other probe packets
were dropped at the switch.
²Since we are now focusing only on time dispersion measurements done at the controller, we will omit the subscript C.
[Figure 5.3: Link Capacity Estimation using Packet Train Probing (T = 40); link capacity (Mbps) vs. distance d (m); series: Iperf, Controller]
5.4.1 Experiments
We used the same experiment scenarios as for the packet pair probing case. In an initial
experiment, we chose a train length of T = 40. As we will see later, this is a reasonably
good choice. Figure 5.3 shows the results. We see that packet train probing achieves
a significantly increased accuracy compared to packet pair probing. The estimate is very
close to the actual link capacity, as measured by Iperf. As mentioned before, the Iperf measurements were done at a separate time from the packet train probing, and serve as our reference for the actual wireless link capacity. As before, the graphs show the
average over 10 experiment runs, with 95% confidence intervals.
The obvious question here is how to choose the optimal train length. We investigated this by comparing the estimated link capacity with the actual capacity, for a range of
train lengths T. We performed this experiment initially for a link distance d of 0 m, and a
constant link capacity of around 24 Mbps. The results are shown in Figure 5.4. We see that
increasing T reduces the estimation error, and from a value of T > 40, the estimation closely
matches the Iperf value. We also performed the same experiment for all the link distances d
considered in our previous experiments.
[Figure 5.4: Impact of Train Length T (d = 0m); link capacity (Mbps) vs. packet train length T; series: Iperf, Controller]
Figure 5.5 shows the root mean square error (RMSE) of the estimation, calculated over the
entire set of distances, ranging from 0 m up to 115 m. The figure shows that the estimation
error decreases for an increasing train length, up to a value of around T = 40, from where
no further accuracy gain is visible. The choice of T obviously involves a trade-off between
accuracy and overhead. The longer the train length, the more overhead in terms of probing
traffic is imposed on the link. Figure 5.5 also shows the total probe traffic in kbits for a single
link capacity measurement. It is obvious that the traffic overhead is linear in T. An important
factor which also determines the total traffic overhead of this link capacity measurement
is the frequency with which the probing occurs. This choice depends on the variability of
the wireless link and the requirements of the specific network application that requires link
capacity information. A detailed discussion of this is beyond the scope of this research.
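The two quantities plotted in Figure 5.5 can be computed as follows (a sketch; the sample measurement lists are placeholders, not our experimental data):

```python
import math

def rmse(estimates, references):
    """Root mean square error between capacity estimates and the reference
    (Iperf) measurements, taken over the full set of link distances."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references))
                     / len(references))

def probe_overhead_kbit(T, packet_bytes=1500):
    """Total probe traffic of one measurement: T packets of packet_bytes
    each; the overhead is linear in the train length T."""
    return T * packet_bytes * 8 / 1000

# Illustrative values only (not our measurement data):
print(probe_overhead_kbit(40))              # 480.0 kbit for 1500-byte probes
print(round(rmse([23.5, 11.8], [24.0, 12.0]), 2))
```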
Looking at Figure 5.5, it is clear that a train length of T = 40 represents a good trade-
off between accuracy and overhead. Therefore, we will use this value for the remaining
experiments discussed in this chapter. However, for scenarios with extremely low controller
capacity, this might need to be adapted to maintain a high estimation accuracy.
[Figure 5.5: Estimation RMSE and Overhead as a Function of Train Length (T); link capacity RMSE and estimation traffic overhead (kbit) vs. T]
5.4.2 Impact of Cross Traffic
A well-documented problem of packet pair/train probing is the fact that cross traffic can
significantly reduce the estimation accuracy [119]. If probe packets are interleaved with
cross traffic packets, the resulting time dispersion measurement is inflated, resulting in an
underestimation of the link capacity.
We performed a number of experiments to investigate the impact of cross traffic in our SDN-
based link capacity estimation method. For this, we distinguish between cross traffic flowing
in the same direction across the link as the probe packets (forward cross traffic), and traffic
flowing in the reverse direction (reverse cross traffic).
For a first experiment, we again used the scenario shown in Figure 5.1, with link distance
d = 0 and train length T = 40. We injected UDP cross traffic from host H1 to host H2 using
Iperf with varying offered loads, concurrently with the packet train probing. As before, we
ran 10 experiments and calculated the average and confidence intervals.
From the results in Figure 5.6, it is clear that forward cross traffic has no noticeable impact on the estimation accuracy, irrespective of the intensity of the cross traffic. Our observations revealed that the SDN switch treats the actions sent in a Packet-Out message as an atomic operation, and therefore sends out all the probe packets without interleaving any cross traffic packets.

[Figure 5.6: Impact of Forward Cross Traffic (d = 0); link capacity (Mbps) vs. forward cross traffic load (Mbps); series: Iperf, Controller]

[Figure 5.7: Impact of Reverse Cross Traffic (d = 0); link capacity (Mbps) vs. reverse cross traffic load (Mbps); series: Iperf, Controller]
For our second experiment, we reversed the direction of the cross traffic (reverse cross
traffic), and let it flow from H2 to H1, in the opposite direction of the probe packets. The cor-
responding results are shown in Figure 5.7. In this case, cross traffic has a strong negative
impact on our estimation accuracy. The higher the reverse cross traffic load, the more we
underestimated the link capacity. The reason for this is that the wireless interfaces on switch
S1 and S2 share the link according to the CSMA/CA media access protocol of 802.11.
In this case, switch S1 cannot avoid the interleaving of (reverse) cross traffic packets with
probe packets. The more cross traffic packets are inserted in a train of probe packets, the
greater the measured dispersion time ∆, and consequently the bigger the underestimation
of the link capacity. This is a big problem in traditional packet pair/train probing, and there is
no easy solution.
However, in the context of SDN, we can make use of available flow information to determine
the number R of reverse cross traffic packets that are inserted between the first and last
packet of a packet train. With the knowledge of R, we can compensate for the impact of
reverse cross traffic and we can update Equation 5.4.1 as follows:
C(T) = (T − 1 + R)L / ∆(T)    (5.4.2)
In our scenario, we obtained R at the controller using OpenFlow to query the port statistics
(received packet count) at port P2 of switch S1 at the time before the first probe packet was
sent, and again after the last probe packet was received. R is simply the difference between
the two counter values.
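Putting Equation 5.4.2 and the counter query together, the compensated estimator can be sketched as follows (the numbers in the example are illustrative, not measured values):

```python
def compensated_capacity(T, L_bits, delta, rx_before, rx_after):
    """Estimate link capacity (bps) per Equation 5.4.2. rx_before and
    rx_after are the received packet counts from the OpenFlow port
    statistics of port P2 on switch S1, read just before the first probe
    is sent and just after the last probe is received; their difference R
    is the number of interleaved reverse cross traffic packets."""
    R = rx_after - rx_before
    return (T - 1 + R) * L_bits / delta

# Illustrative numbers: 10 reverse cross traffic packets widen the
# dispersion of a 40-packet train of 1500-byte probes to 24.5 ms.
print(round(compensated_capacity(40, 1500 * 8, 0.0245, 1000, 1010) / 1e6, 1))
```

With R = 0 the expression falls back to the uncompensated estimate of Equation 5.4.1.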
Figure 5.8 shows the result of the link capacity estimation using Equation 5.4.2. We can see
that this approach can successfully compensate for the error caused by reverse cross traffic.
[Figure 5.8: Impact of Reverse Cross Traffic after Compensation (d = 0); link capacity (Mbps) vs. reverse cross traffic load (Mbps); series: Iperf, Controller]

In a realistic wireless scenario, we can expect cross traffic from multiple nodes. All these packets received at port P2 of switch S1 will be counted by the corresponding port statistics counter at switch S1, and hence we can compensate for all of this reverse cross traffic. Cross traffic which causes destructive interference, such that its packets cannot be successfully received at the port, is not counted and compensated for. This is what we want, since these packets, via their destructive interference, do actually reduce the capacity of the link.
As a result, we can say that our proposed packet train probing mechanism for wireless SDNs is robust against cross-traffic-induced estimation errors, which remain a largely unresolved problem for traditional packet pair/train probing approaches.
5.5 Conclusions
We have presented the adaptation of packet pair and packet train probing for wireless SDNs,
and have implemented a prototype using the Ryu controller platform. Our emulation-based
experiments show that packet train probing, with an adequate choice of train length T, de-
livers a very promising degree of link capacity estimation accuracy. We also investigated
the impact of cross traffic on our proposed method, which is a well-known and hard to solve
problem in traditional packet pair/train probing. We presented a solution for this problem that
utilises SDN specific features, and have shown its promising performance. Our proposed
method can form the basis of a link capacity estimation service in SDN, which is similar in
nature to the widely used topology discovery service. Such a link capacity estimation service forms an important building block for our SDN-based routing framework for WMNs, as
discussed later in this thesis.
Chapter 6
Testbed Evaluation
6.1 Introduction
Wireless experiments have largely been conducted via discrete event simulation tools, such
as ns-2 and ns-3, or via real wireless testbeds. Both approaches have their respective limi-
tations. Simulation typically requires dedicated implementations of applications and protocol
stacks, and does not allow running real code. Real testbeds are expensive and hard to
manage, have limited flexibility in terms of network topology and scale, and often suffer from
limited reproducibility of experimental results. A hybrid wireless testbed provides a great compromise: it combines the ability to run real application and network protocol code with the flexibility and low cost of emulated wireless links, and with the controlled network environment and reproducibility of results. Figure 6.1 illustrates these three experimental approaches, and highlights the compromise of hybrid wireless testbeds, which are often referred to as network emulators.¹
This chapter is divided into two parts. First, in Section 6.2 we provide the
evaluation of a new hybrid wireless testbed, which we refer to as "Mininet-ns3-WiFi", in terms
of result accuracy, fidelity and scalability. Second, in Section 6.3 we evaluate the suitability of
a real hardware wireless testbed, called "R2Lab" (based at INRIA, Sophia-Antipolis, France)
for multi-hop wireless network experiments. Finally, Section 6.4 concludes the chapter.
¹A more detailed discussion of the range of experimental platforms for wireless networks is presented in [80].
[Figure 6.1: Experimental Platforms for Wireless Networks; Simulation: ns-3 traffic model, ns-3 protocol stack, ns-3 channel; Link Emulation (Mininet-ns3-WiFi): real application code, Linux protocol stack, ns-3 channel; Test-Bed: real application code, Linux protocol stack, real wireless channel]
6.2 Mininet-ns3-WiFi Evaluation
Mininet [85] is a Linux-based network emulator with the ability to create a wide range of
network topologies with virtual hosts, switches and links. Mininet has built-in support for
OpenFlow switches and is therefore widely used for Software Defined Network (SDN) ex-
periments, providing accurate and reproducible results [60].
Virtual hosts and switches in Mininet run real application and network protocol code, which
gives the experiment result a high degree of credibility. It also allows easy migration of SDN
applications from Mininet to a real network. The only aspect in Mininet that is emulated are
the network links. This provides great flexibility and the easy creation of arbitrary network
topologies. Mininet currently supports wired links, with limited support for controlling link pa-
rameters such as packet loss and delay via the Linux Traffic Control (tc) tool. Unfortunately,
Mininet currently has no support for wireless links and networks.
We believe support for wireless networks in Mininet, in particular IEEE 802.11 (WiFi), opens
up a wide range of experimental opportunities, in particular for exploring Software Defined
Wireless Networks. An integration of WiFi support in Mininet has been provided in [78].
This approach uses the support of ns-3 [106] for integrating simulation components with
real-time environments such as testbeds or virtual machines. In particular, it uses the IEEE
802.11 physical and MAC layer module of ns-3 to provide wireless link emulation in Mininet.²
We refer to this approach as Mininet-ns3-WiFi.
While the code for the Mininet-ns3-WiFi integration is provided in [78], no evaluation of this
approach has been provided. Consequently, it has not been used for any significant experi-
ments with published results.
A critical and general problem for scenarios where simulation-based components are inte-
grated in a real-time environment is the potential divergence of simulation and wall-clock
time. This can happen when the processing of simulation events (e.g. ns-3 physical layer
packet propagation) cannot keep up with real-time events. The result is loss of accuracy and
fidelity of the experimental results. This section specifically focusses on this critical aspect,
and provides a mechanism that provides the experimenter with an indicator of the fidelity
and trustworthiness of the obtained results.
The rest of the section is organised as follows. In Section 6.2.1 we discuss key related works.
Section 6.2.2 explains how the integration of Mininet with the ns-3 WiFi module is achieved.
Section 6.2.3 discusses the methodology of our experiments. Sections 6.2.4, 6.2.5 and 6.2.6
present the results of our evaluations. Finally, Section 6.2.7 provides a summary.
6.2.1 Related Work
There is a wide body of work on different types of network emulators. The key works can
be divided into two basic categories, based on their virtualisation technology [26].
The first category uses full virtualisation to emulate end hosts. Depending on the VM size
and the complexity of the emulated network, the scalability is limited and the performance
fidelity is dependent on the hypervisor scheduling [60]. There are different approaches to
address this issue. For example, DieCast [56], which implements full-system virtualisation,
uses time dilation techniques to slow down the progression of real time, in order to allow the
emulation part to keep pace, and hence increase scalability and performance fidelity.
²For this, ns-3 uses a real-time scheduler to lock the simulation clock with the hardware clock. The role of the scheduler is to make sure that the simulation clock progresses synchronously with the external time base (wall clock).
The second network emulation category consists of lightweight emulators or container-
based emulators, which can achieve greater scalability due to their reduced resource de-
mands [133]. These emulators use OS-level virtualisation, such as FreeBSD jails, used by
vEmulab [61], or Linux containers, used in Mininet [85].
Mininet promises to support experiments with several hundred if not thousands of nodes.
However, Mininet cannot guarantee fidelity of results at these scales due to the high compu-
tational load and hence inability of the emulation component to keep up with the progression
of the real-time clock [60].
To address this problem, techniques such as resource isolation and monitoring mechanisms
have been proposed in Mininet-HiFi [60], and a virtual time system based on time dilation
has been proposed in VT-Mininet [141]. However, none of the above mentioned network
emulators provide support for wireless links.
There have been recent attempts to integrate support for wireless links in Mininet. In [49], the
mac802.11/SoftMac device driver is used as a basis for the integration, providing a high level
of fidelity at the MAC layer. The current shortcoming of this approach is the limited realism
in modelling the physical layer effects such as link interference, signal attenuation, etc. The
current approach uses the Linux Traffic Control (tc) tool to provide limited link emulation via
the setting of packet loss and delay parameters.
OpenNet [34] is another recent work, which aims to integrate Mininet and WiFi. Similar to
the approach in [78] that we use as a basis in this chapter, [34] uses ns-3 for WiFi link
emulation in Mininet. However, none of the above works have evaluated the experimental
accuracy, performance fidelity and scalability limits of the integration of Mininet with WiFi. In
this section, we aim to address this important gap.
6.2.2 Integration of ns-3 into Mininet
Ns-3 supports a "real-time/emulation" mode [4], which allows the integration of simulation
code with real-time devices, either real or virtual. This mode provides the synchronisation of
the emulation clock with the real-time (wall) clock.
[Figure 6.2: Integration of Mininet and ns-3 [78]; two Mininet nodes, each with its own name space, protocol stack and Tap device, are connected through TapBridges and ns-3 NetDevices to an emulated ns-3 channel inside the ns-3 process]
Figure 6.2 shows the architecture of the connection of two virtual Mininet nodes, via an em-
ulated ns-3 WiFi channel, using NetDevice and TapBridge interfaces, shown in the center of
the figure. Each Mininet node has its own Linux name space and separate network protocol stack, and is connected to the ns-3 channel via a Linux Tap device. The TapBridge allows the ns-3 channel to connect to the Tap device of the Mininet nodes. In fact, the Tap device interacts with the TapBridge to present itself as an ns-3 NetDevice to the ns-3 simulator. The NetDevice allows ns-3 to interact with an external, real-time network interface by simulating a layer-2 network interface.
This method allows us to drive a simulated network with the packets from ’real’ nodes, which
are virtual Mininet nodes in our case. The simulated network consists of individual WiFi links
in the case of Mininet-ns3-WiFi.
6.2.3 Experimental Tools
As previously mentioned, our aim is to evaluate the performance fidelity of Mininet integrated
with ns-3 based WiFi link emulation. In particular, we used IEEE 802.11a for our experiments. We used iperf [2] to measure link throughput as well as to create link load, e.g. for the purpose of creating interference, using 1500 byte UDP packets. Our experiment scenarios were defined as Python scripts, and all our experiments were run on a Linux PC with an Intel i7-2600K CPU running at 3.40GHz, with 8GB of RAM. As a reference, we also ran our experiment scenarios as pure ns-3 simulations, using ns version 3.25. Table 6.1 provides a summary of the key software tools and version numbers used in our experiments.

Table 6.1: Software used in Implementation and Experiments

Software         Function                Version
Mininet [85]     Network Emulator        2.1.0
ns-3 [106]       802.11 Link Emulation   ns-3.25
Iperf [2]        Throughput Measurement  2.05
Linux (Ubuntu)   Host Operating System   14.04
Python           Programming Language    2.7
6.2.4 Single Link Scenario
As a first experiment, we consider the scenario shown in Figure 6.3, which consists of a
single wireless link between a sender S1 and receiver R1, separated by distance d. We
measured the maximum achievable throughput for this link for different values of d, ranging
from 0m to 120m in increments of 1m. For each value of d, the throughput was measured
for 60 seconds. Figure 6.4 shows the results for the different fixed OFDM modulation rates
from 6Mbps up to 54Mbps. The results from Mininet-ns3-WiFi look as expected, with the throughput for small values of d close to the theoretical maximum throughput [145] and to experimental measurements on real testbeds [149]. Once d approaches the transmission range, which varies for different OFDM rates, the measured throughput converges to 0,
as expected. Figure 6.5 shows the corresponding results from ‘ns-3 only’ experiments. We
see a very good match of the achieved throughput between Mininet-ns3-WiFi and ns-3. For
example, the Root Mean Squared Error (RMSE) for 6Mbps is 0.18, for 12Mbps it is 0.38, and
for 54Mbps it is 2.98. We can see that the error is very small for low OFDM rates, but increases
for higher rates. This is due to the fact that ns-3 struggles to keep up with the high data
rates. We will discuss this problem in more detail later.
[Figure 6.3: Basic Scenario: Single Link; sender S1 and receiver R1 separated by distance d]
[Figure 6.4: Throughput vs. Distance in Mininet-ns3-WiFi; throughput (Mbps) vs. distance d (m) for fixed OFDM rates from 6Mbps to 54Mbps]
[Figure 6.5: Throughput vs. Distance in ns-3; throughput (Mbps) vs. distance d (m) for fixed OFDM rates from 6Mbps to 54Mbps]
6.2.5 Link Interference Scenarios
After having evaluated the accuracy of single-link scenarios in Mininet-ns3-WiFi, we now want
to consider more complex cases with multiple links. We are particularly interested in how
well Mininet-ns3-WiFi is able to handle interference between multiple links. We consider two
experiment scenarios, one with sender interference and one with receiver interference. By
sender interference, we mean the limited sending rate of a node due to carrier sensing in the
CSMA/CA protocol. Under receiver interference, we consider the destructive interference of
multiple frames colliding at a receiver node. We carefully construct our experiments in order
to isolate these two cases.
Sender Interference
The setup for our sender interference experiment is shown in Figure 6.6. We have two links
(S1, R1) and (S2, R2), with a constant distance between sender and receiver of 20m, which
guarantees maximum link throughput if there is no interference. We run iperf on both links
simultaneously with an offered load that guarantees link saturation. We measure the link
throughput on link (S1, R1) for all OFDM modulation rates, and for different values of the
distance d between the senders S1 and S2. The results of our measurements are shown in
Figure 6.7. In order to increase clarity, we only show the results for the OFDM rates of 6,
12, 18, 24 and 54Mbps.
The figure also includes the results from the corresponding experiment in ns-3 as a refer-
ence. While the graph is a bit dense, it clearly shows a number of key points. Firstly, we see
that for a short distance d, the achieved throughput is close to half of the maximum through-
put for all OFDM rates (with the exception of 54 Mbps, which we will discuss below). This
is due to the fair sharing mechanism of CSMA, and the ability of S1 and S2 to carrier sense
each other. Once d exceeds the carrier sense range of 222m, the throughput increases to
the maximum value, as established in our single link experiment.
The important point is that the results of Mininet-ns3-WiFi match very closely the ‘ns-3 only’
baseline, with the exception of 54Mbps. The RMSE values for the different OFDM rates are
as follows: 6Mbps: 0.07, 12Mbps: 0.14, 18Mbps: 0.16, 24Mbps: 0.16, and 54Mbps: 1.75.
[Figure 6.6: Sender Interference Scenario; two links (S1, R1) and (S2, R2) with a sender-receiver distance of x = 20 m each, and link separation d; traffic flows from each sender to its receiver]
[Figure 6.7: Sender Interference Throughput Measurements; throughput (Mbps) vs. distance d (m) for OFDM rates 6, 12, 18, 24 and 54Mbps, for both Mininet-ns3-WiFi and ns-3]
The increased error at 54Mbps is due to the high computational load of the ns-3 channel emulation, caused by the high packet arrival rate, and the resulting divergence of the simulation and real-time clocks.
As mentioned before, this is a critical problem for emulation-based experiments that combine real-time components with simulation-based components. We will address this problem in more detail in the following sections. With the exception of the 54Mbps case, as discussed,
we can say that Mininet-ns3-WiFi very accurately reflects the link interference behaviour as
observed in ‘ns-3 only’ simulation.
Table 6.2: Host Distance Mapping for each OFDM Rate
OFDM Rate (Mbps)   x (m)   d (m)
6                  90      132
9                  75      147
12                 75      147
Receiver Interference
In this experiment, we want to evaluate the accuracy of Mininet-ns3-WiFi in terms of receiver
interference. For this, we use the scenario shown in Figure 6.8, which is a slight modification
to our sender interference scenario. Again we have two wireless links, but this time the
direction of transmission on the link (S1, R1) is reversed.
Since we want to isolate the effect of receiver interference only, we make sure that S1 and
S2 are sufficiently far apart to not be able to carrier sense each other. While all OFDM
modulation rates have the same carrier sense range of 222m in ns-3, they have different
transmission ranges. We choose the distance x for both the (S1, R1) and (S2, R2) links so
that the two senders S1 and S2 cannot carrier sense each other, and that both S1 and R1, as
well as S2 and R2 are in transmission range of each other, for the relevant values of d, from
132m to 220m.
Since it was only possible to meet these constraints and isolate the effect of receiver interfer-
ence for the OFDM rates of 6, 9 and 12Mbps, we omitted the other rates for this experiment.
Table 6.2 shows the OFDM rates and the relevant values of x and d.
Figure 6.9 shows the throughput measurement on link (S1, R1) for both Mininet-ns3-WiFi as
well as ns-3. As expected, for small values of d, the signal from S2 interferes with the signal
from S1 at R1. Since both links are saturated, and the senders do not carrier sense each
other, we have continuous destructive interference at R1, reducing the throughput to 0. With
gradual increase of d, the interfering signal from S2 at R1 gets weaker, and the measured
throughput gradually increases, until it reaches the maximum value once d is greater than
the interference range.
[Figure 6.8: Receiver Interference Scenario; two links with sender-receiver distance x and separation d; the transmission direction on (S1, R1) is reversed, so both senders transmit towards the middle]

[Figure 6.9: Receiver Interference Throughput Measurements; throughput (Mbps) vs. distance d (m) for OFDM rates 6, 9 and 12Mbps, for both Mininet-ns3-WiFi and ns-3]

We can see that there is again very good consistency between the Mininet-ns3-WiFi results and the ns-3 benchmark. The RMSE values for the three rates are as follows: 6Mbps: 0.29, 9Mbps: 0.13, 12Mbps: 0.57. In summary, Mininet-ns3-WiFi very accurately reflects the
expected behaviour of wireless links, both for individual links, as well as interference among
multiple links. The observed inaccuracies, especially as seen in Figure 6.7, are due to clock divergence caused by the high computational load on the ns-3-based link emulation. We explore this problem in more detail in the following section.
6.2.6 Scalability
As discussed before, network emulation via integrating real-time components with simulation
components can lead to the problem of divergence of simulation time versus wall clock time, resulting in loss of accuracy and fidelity in experimental results.

[Figure 6.10: Scalability Scenario; n parallel links, each with sender-receiver distance x = 0 m, separated by d = 1000 m]

The problem occurs if the
required simulation-based processing cannot keep up with real-time events. As we saw in
the sender interference case in Figure 6.7, this tends to happen when a high load is put
on the simulation component via a high packet sending rate. In this section, we explore
this problem in more detail and investigate the scalability of Mininet-ns3-WiFi, i.e. how an increased packet processing load impacts the fidelity of results.
Figure 6.10 shows our experiment scenario, with n wireless links, (S1, R1) , (S2, R2), ...,
(Sn, Rn). The ’vertical’ distance between links is 1000m, which guarantees that there is no
interference negatively impacting the link capacity. The distance x between all senders and
receivers is set to 0m to ensure maximum link performance. On all n links we run iperf with
an offered load of 2Mbps. Using a fixed OFDM rate of 6Mbps and the topology shown in
Figure 6.10, each link should have no problem transmitting the offered 2Mbps.
We initially start with n = 1, i.e. with a single link, and measure its throughput. We then
repeat the measurement for different values of n, ranging from 1 to 15, each time measuring
the link throughput of (S1, R1).
Figure 6.11 shows the throughput results, as well as the CPU load for each scenario³. We
observe that for up to n = 10 links, Mininet-ns3-WiFi reports the expected throughput of 2
³We measure the maximum CPU load across all cores. Since ns-3 is single-threaded, it can only utilise a single core, so showing the average CPU load across all cores would not be meaningful in this context.
Figure 6.11: Throughput and CPU Load vs. Number of Links (n)
Figure 6.12: RTT and CPU Load vs. Number of Links (n)
Mbps. For 11 or more links, which corresponds to an aggregate load of 22 Mbps and higher,
the throughput starts to significantly decrease. This is caused by the discrepancy between
simulation and real-time. The CPU simply cannot keep up with the increasing total offered
packet rate and do the necessary link emulation processing. This is clearly shown by the
CPU load in the figure, which gradually increases for larger values of n, and reaches 100%
for n = 11, coinciding with the drop in measured throughput.
In this simple experiment, it is easy to see when the results start to become inaccurate, since
we have a reference value to compare them to. This is not the case for general experiment
scenarios. In the general case, the experimenter needs a reliable indicator that can tell if the
results can be trusted.
For this, we could add an extra indicator link, removed from all other links in order to avoid
any interference with the actual experiment, on which we run a 2 Mbps iperf session. A
deviation from 2 Mbps of achieved throughput would indicate a divergence of simulation and
real-time, and would tell us that the results of the experiment cannot be trusted.
However, this approach is relatively costly in terms of CPU usage. As a more lightweight
indicator, we propose to also use a dedicated indicator link separated from the actual network
topology, but instead of running iperf, we simply run ping across the link at a regular interval
to measure the Round Trip Time (RTT). The idea is that a divergence of simulation and
real-time would be reflected in an increased RTT.
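A minimal sketch of this indicator, assuming the RTT samples from the ping on the indicator link have already been collected; the baseline and the tolerated multiple are illustrative assumptions, not values prescribed by the platform:

```python
def fidelity_ok(rtt_samples_ms, baseline_ms=1.0, factor=4.0):
    """Return True if the indicator-link RTTs stay close to the baseline.

    rtt_samples_ms: RTTs from the periodic ping on the dedicated indicator link
    baseline_ms:    RTT under negligible load (about 1 ms in our setup)
    factor:         tolerated multiple of the baseline before the run is
                    flagged as untrustworthy (illustrative value)
    """
    avg = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return avg <= factor * baseline_ms

print(fidelity_ok([1.0, 1.1, 0.9]))      # low load: results trustworthy
print(fidelity_ok([3.8, 120.0, 450.0]))  # divergence: flag the run
```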
The results of the corresponding experiment are shown in Figure 6.12. As in Figure 6.11,
the x-axis shows the number of links on which we run 2 Mbps iperf sessions. The y-axis,
in log scale, shows the measured RTT values, averaged over 60 seconds, with one RTT
measurement per second. The reference value is the RTT measured for n = 1, which is
around 1 ms. As before, the figure also shows the corresponding CPU load.
We see that for up to 8 links, there is a gradual increase in the RTT value. For n = 9
and greater, we observe a sharp jump in RTT values. This shows that RTT measurement
is a more sensitive indicator for reduced performance fidelity for Mininet-ns3-WiFi than the
throughput measurement based indicator. The RTT values start to increase significantly well
before the CPU load reaches 100%.
Figure 6.12 is somewhat limited since it only shows average RTT values. To get a better in-
sight into the distribution of the measured RTT values, Figure 6.13 shows the corresponding
Complementary Cumulative Distribution Function (CCDF). The x-axis is the RTT in logarith-
mic scale in ms and the y-axis is the CCDF of the RTT. We can divide the graph roughly into
3 zones, related to the degree of RTT error.
We can see that in Zone 1 the RTT of packets is less than 4 ms, with a relatively modest
increase compared to the baseline; for most experiments, the results can be considered
trustworthy. In Zone 2, with RTT values ranging from 4 ms up to 80 ms, an experimenter would have
to be very careful in interpreting and trusting the results. We can see that for n = 10, most
RTT values are around 4ms, but there is a relatively long tail with around 20% of values with
significantly larger values. Simply looking at the average does not reveal this.
Finally, Zone 3 contains scenarios with RTT greater than 80ms and up to 10,000ms. Any
result in this region can clearly not be trusted and must be ignored.
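The zone classification and the empirical CCDF used here can be sketched as follows; the zone boundaries (4 ms and 80 ms) come from the discussion above, while the RTT samples are illustrative:

```python
def ccdf(samples):
    """Empirical CCDF: for each distinct value x, the fraction of samples > x."""
    n = len(samples)
    return [(x, sum(1 for s in samples if s > x) / n)
            for x in sorted(set(samples))]

def zone(rtt_ms):
    """Classify an RTT sample into the three trust zones."""
    if rtt_ms < 4:
        return 1   # modest increase: results trustworthy
    if rtt_ms <= 80:
        return 2   # interpret with great care
    return 3       # results cannot be trusted

rtts = [0.9, 1.2, 3.5, 4.1, 60.0, 500.0]
print([zone(r) for r in rtts])   # → [1, 1, 1, 2, 2, 3]
```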
In summary, we can say that the proposed RTT-based approach provides a reliable and
lightweight indicator of the fidelity of experimental results in Mininet-ns3-WiFi. The scala-
bility of Mininet-ns3-WiFi, i.e. the total size of the experimental scenario, e.g. in terms of
aggregate traffic load, depends on the CPU speed of the machine on which it is run. A faster
machine allows running larger experiments with higher traffic volumes, while still maintaining
accurate results. Unfortunately, ns-3 is currently single-threaded, and can therefore
only utilise a single CPU core. Making ns-3 multi-threaded would significantly improve the
scalability of Mininet-ns3-WiFi [3].
6.2.7 Summary
We have presented a first systematic evaluation of the integration of ns-3-based WiFi link
emulation with Mininet. Mininet-ns3-WiFi combines the benefit of real testbeds in terms of
running real application and network protocol code with the low cost, flexibility and result
reproducibility of discrete-event simulation. It presents great potential as a platform for a
wide range of wireless experiments, in particular for Software Defined Wireless Networks.
Our results show a high degree of accuracy and fidelity of the results achieved with Mininet-
ns3-WiFi, as long as the processing load on the ns-3-based channel emulation is sufficiently
low and ns-3's real-time scheduler can keep up with external real-time events. We presented a
Figure 6.13: CCDF of RTT Measurement (CCDF vs. RTT in ms, log scale, for 1 to 15 links, with Zones 1 to 3 marked)
simple, lightweight and sensitive fidelity indicator based on RTT measurements, which can
help the experimenter to decide if results are sufficiently accurate.
With respect to the achieved results, we consider this testbed suitable for wireless multi-hop
experiments of limited network size. For large wireless networks, however, we need to
consider alternative platforms. Therefore, in the next section we present the experimental
validation of a new wireless hardware testbed, called R2Lab [67].
6.3 R2Lab Testbed Evaluation
R2Lab is a wireless testbed platform located at INRIA⁴, Sophia-Antipolis, France. This
platform is part of the FIT federation, which provides an open, large-scale, high-performance
testing infrastructure for performing experiments on systems and applications for wireless
and sensor communications. R2Lab is an open testbed located in an anechoic chamber,
comprising 37 customisable wireless devices, along with USRP (Universal Software Radio
⁴Institute for Research in Computer Science and Automation.
Peripheral) [47] nodes and commercial LTE phones, enabling reproducible research
in WiFi and 4G/5G cellular networks.
R2Lab is equipped with a range of software tools that allow remote control of the wireless
nodes through an SSH gateway. Each user can reserve the whole testbed for their
experiment and take full control of all the wireless devices. The user can run their own
customised Operating System (OS) on each node in order to perform experiments on
customised systems. After loading the OS on a node, it is accessible via SSH with administrative
privileges and ready for the user to configure the available resources, such as nodes, USRPs
and phones.
The nodes are positioned in a grid layout, as illustrated in Figure 6.14. Each node is
equipped with 3 wired interfaces used for remote power and reset management, control and
a data channel dedicated to experimentation. This separation of control and data channels
in the testbed makes it a potentially suitable platform for SDN experiments, to be considered
in future work.
The launch of the R2Lab platform was very recent (November 2016 [66]), and consequently
there has been limited use and evaluation of the testbed. In particular, there have been no
wireless multi-hop or WMN experiments conducted on the platform, and our work provides
the first basic evaluation of the platform for this purpose. In this section we present the
results of our experiments, which consist of a comparison of two widely used WMN rout-
ing protocols, i.e. OLSR and BATMAN. Our work, which was conducted during a 5-week
research visit at INRIA, validates the R2Lab testbed as a suitable platform for wireless multi-
hop experiments, with great potential for SDN-based WMN experiments. Due to the very
recent launch of R2Lab, and the resulting lack of time, this remains future work.
The rest of this section on the R2Lab is organised as follows. In Section 6.3.1 we discuss
key related work on wireless testbeds. Section 6.3.2 explains our basic experiments to
evaluate the suitability of R2Lab for multi-hop wireless networks. Section 6.3.3 discusses
the more complex evaluation of R2Lab by considering Wireless Mesh Networks routing ex-
periments. Finally, Section 6.3.4 provides a summary and conclusions for this work.
Figure 6.14: Ground Plan Layout of Nodes
6.3.1 Related Work - Wireless Testbeds
There exists a wide range of work on the design and evaluation of wireless network testbeds
and testbed platforms. Here, we give a very brief summary of some of the key works, to
provide the context of our own work.
ORBIT (Open Access Research Testbed for Next-Generation Wireless Networks) [110, 123]
was founded in 2003 for conducting reproducible wireless experiments. The
architecture of ORBIT is a two-tier system consisting of a laboratory-based wireless net-
work emulator and a field trial network. This allows the experimenter to perform the basic
experiments on the emulator, which addresses the problem of reproducibility, while at the
same time providing the opportunity for the user to evaluate the performance of applications
and protocols in real-world networks. The ORBIT lab emulator consists of a large number of
static 802.11x wireless nodes laid out in a grid. This radio grid emulator provides facilities for
the user to reproduce wireless network experiments with a specified topology for quantitative
evaluation of different protocols and applications. The user can have full access and control
of the wireless nodes, such as installing their own OS and software packages, rebooting, etc.
The user can then move to the field trial network to validate the results obtained on the
emulator. In order to create a multi-hop network in ORBIT, it is suggested to use MAC
address filtering or noise generation. The former cannot eliminate the contention and
interference between multiple senders, and the latter is limited in the topologies that can be
achieved. ORBIT has strong experiment control and management capabilities. One of its
shortcomings is the lack of control of background noise due to the fact that wireless nodes
are not placed in an anechoic chamber.
Emulab [139] is another open access large scale platform for running experiments in com-
puter networking and distributed systems. Emulab has a variety of features, including sup-
port for arbitrary network topologies, full control of nodes with arbitrary OS and configuration,
and support for both WiFi as well as SDR experiments, using USRPs. Emulab also supports
integration with other testbeds such as PlanetLab. As in the case of ORBIT, the Emulab
wireless nodes are not isolated from other wireless networks, so the signal from other wire-
less networks can interfere with any Emulab experiments.
WHYNET (Wireless HYbrid NETwork) [147] is another large-scale hybrid platform supporting
heterogeneous wireless technologies such as WiFi, cellular networks, sensors, etc.
It is a hybrid wireless testbed combining hardware testbeds, simulation and emulation. This
combination provides the ability to take advantage of the benefits of the different experiment
types, i.e. the realism of physical testbeds with the scalability, flexibility and repeatability of
simulation and emulation experiments.
In contrast to the above-mentioned testbed platforms, R2Lab provides a wireless hardware
testbed that can avoid background noise and interference of other nearby networks, via the
use of an anechoic chamber, which provides RF isolation. This is shown in Figure 6.15.
We will now present our validation of the R2Lab for evaluation of WMN routing protocols.
6.3.2 R2Lab Experiments
For our experiments to evaluate the suitability of R2Lab for wireless multi-hop experiments,
we considered two WMN routing protocols, namely Optimised Link State Routing (OLSR)
Figure 6.15: R2Lab Located in an Anechoic Chamber
and Better Approach To Mobile Ad hoc Networking (BATMAN) as explained in Chapter 2. We
have evaluated these two WMN routing protocols in terms of Packet Delivery Ratio (PDR)
and end-to-end latency in two different network scenarios, one without and one with the
addition of interference.
As mentioned before, the R2Lab testbed consists of 37 nodes. Each node features a
state-of-the-art motherboard with an Intel Core i7-2600 processor, 4GB RAM, a 240 GB
SSD, and is equipped with 2 wireless interfaces, with Atheros 802.11 93xx a/b/g/n and/or
Intel 5300 chips, and with 3 antennas each. The WiFi mode used in this experiment was
802.11a. Figure 6.14 shows the location of the nodes, which are distributed in a roughly
90m² area. The distance between nodes is about 1m in each direction, with some exceptions
near the columns that are supporting the room. We installed Ubuntu Linux 16.04 on each of
the nodes.
The first challenge that we had to address was the creation of a multi-hop topology. Using the
default settings, each node can see every other node, which results in only single-hop paths.
By reducing the transmission power to 0 dBm and setting the transmission rate to a fixed
Figure 6.16: Our Customised Mesh Topology (nodes 1, 4, 5, 12, 15, 19, 27, 31, 33 and 37)
54 Mbps, we were able to reduce the transmission range. We then ran ping between every
node pair in the topology to find all the active (single-hop) links. We then chose a subset of
10 nodes, which maximises the multi-hop nature of the network. The resulting topology is
shown in Figure 6.16.
For example, for node 1 to reach node 37, packets need to traverse at least 3 hops, i.e. via
the following path: 1 –> 19 –> 27 –> 37. All our experiments have been conducted using
this topology.
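The topology-creation procedure described above can be sketched as follows. This is a hypothetical helper rather than the scripts we actually used: the interface name and the IP addressing scheme are assumptions, and the iw invocations are illustrative (iw takes the power in mBm, and the legacy-5 rate set applies to the 5 GHz band used by 802.11a).

```python
import subprocess

IFACE = "wlan0"                                 # assumed interface name
NODES = [1, 4, 5, 12, 15, 19, 27, 31, 33, 37]   # the chosen 10-node subset

def node_ip(n):
    """Assumed addressing scheme: one /24 with the node id as host part."""
    return f"10.0.0.{n}"

def configure_radio():
    """Shrink the transmission range: 0 dBm power, fixed 54 Mbps rate."""
    # iw expects the power in mBm (1/100 dBm), so 0 dBm is simply 0
    subprocess.run(["iw", "dev", IFACE, "set", "txpower", "fixed", "0"],
                   check=True)
    # fix the legacy OFDM rate to 54 Mbps on the 5 GHz (802.11a) band
    subprocess.run(["iw", "dev", IFACE, "set", "bitrates", "legacy-5", "54"],
                   check=True)

def reachable(dst_ip):
    """Single-hop reachability probe: a few pings with a 1 s timeout."""
    r = subprocess.run(["ping", "-c", "3", "-W", "1", dst_ip],
                       stdout=subprocess.DEVNULL)
    return r.returncode == 0

def scan_links(my_id):
    """Return the ids of directly reachable neighbours of this node."""
    return [n for n in NODES if n != my_id and reachable(node_ip(n))]
```

Running `scan_links` on every node yields the set of active single-hop links, from which the subset in Figure 6.16 was chosen.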
We also measured the achievable throughput for different source destination pairs using
iperf. We ran iperf with an offered load that guarantees link saturation. We measured the
throughput of the network paths between node 1 and every other node. The results of our
measurements are shown in Figure 6.17. We see that for short distances (i.e. one-hop), as
shown on the x-axis, the achieved throughput is between 18 and 20 Mbps, which is close to
the maximum achievable throughput for 54 Mbps WiFi OFDM. Once the distance between
the source and destination increases to two hops, the throughput decreases to roughly half
(i.e. around 9 Mbps), due to the fact that radio interfaces operate in half-duplex mode and
cannot receive and send data simultaneously. When the path length reaches three hops,
the throughput decreases further, to a value of about 3.15 Mbps. This result reflects the
expected behaviour of wireless multi-hop networks [43, 15] and provides a basic validation
of the R2Lab testbed for the use of wireless multi-hop experiments.
Figure 6.17: Throughput vs. Distance over Different Number of Hops (throughput in Mbit/s for sender-receiver pairs with node 1 as source, over 1-hop, 2-hop and 3-hop paths)
In the following we will consider more complex validation experiments, and for this we will
consider the WMN routing protocols BATMAN and OLSR, as mentioned earlier.
6.3.3 WMN Routing Experiments
In this section, we considered two basic experiment scenarios, a case with no interference
and one with interference. We evaluated BATMAN and OLSR in terms of the end-to-end
latency and packet delivery ratio for these two scenarios.
In our experiments, we used olsrd version 0.6.6.2 from olsr.org [108] with the default con-
figuration, with the Link Quality (LQ) extensions enabled. This means that the Expected
Transmission Count (ETX) was used as the routing metric [39].
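For reference, ETX [39] estimates the expected number of (re)transmissions needed to deliver a packet over a link, based on the forward and reverse delivery ratios measured via link probes; the metric of a path is the sum of its link values. A small sketch:

```python
def link_etx(d_f, d_r):
    """Expected Transmission Count of a link.

    d_f: forward delivery ratio (fraction of probes received by the neighbour)
    d_r: reverse delivery ratio (fraction of probes received back)
    """
    return 1.0 / (d_f * d_r)

def path_etx(links):
    """Path metric: sum of the per-link ETX values."""
    return sum(link_etx(d_f, d_r) for d_f, d_r in links)

# A lossless link costs 1 transmission; a 70%/90% link costs ~1.59.
print(link_etx(1.0, 1.0))            # → 1.0
print(round(link_etx(0.7, 0.9), 2))  # → 1.59
```

Minimising the summed ETX is why OLSR with LQ extensions prefers longer but more reliable paths under interference.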
For BATMAN, we used batmand version 0.3.2-17 on each node from open-mesh.org [104].
As mentioned before, BATMAN also has an awareness of the link/path quality and tries to
avoid the routes with lower packet delivery rate.
No Interference
As an initial experiment, we wanted to discover the paths that both BATMAN and OLSR
establish in our network topology. For this, we ran each of the two protocols on all nodes
(consecutively) and we used the traceroute command to establish all the routes for all source
destination node pairs.
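This route-discovery step can be sketched as below, run from each source node in turn; the traceroute flags and the address parsing are assumptions, and the actual R2Lab node addressing may differ.

```python
import re
import subprocess

IP_RE = re.compile(r"\d+\.\d+\.\d+\.\d+")

def parse_traceroute(output):
    """Extract the first IP address on each hop line (header line skipped)."""
    hops = []
    for line in output.splitlines()[1:]:
        m = IP_RE.search(line)
        if m:
            hops.append(m.group(0))
    return hops

def discover_route(dst_ip):
    """Run traceroute towards dst_ip and return the list of hop IPs."""
    out = subprocess.run(["traceroute", "-n", dst_ip],
                         capture_output=True, text=True).stdout
    return parse_traceroute(out)
```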
As a representative example, Figure 6.18 shows the established routes between node 1
as source and all other nodes as destinations, for OLSR (Figure 6.18(a)) and BATMAN
(Figure 6.18(b)).
One-hop paths are indicated with black dashed lines, blue lines indicate 2-hop paths and red
lines 3-hop paths respectively. Since some paths overlap, we added labels for each path,
which indicate the sequence of nodes, in order to increase clarity. For example, in Figure
6.18(a) if node 1 wants to reach node 37 using OLSR, packets will go through nodes 4 and
27. This has been shown with the label (1-4-27-37). In the case of BATMAN, as shown in
Figure 6.18(b), the corresponding path is (1-5-33-37). It is clear from our experiments that
OLSR and BATMAN can choose different routes in the exact same network, in some cases
with significant differences in path length. This is not surprising, since the two protocols
differ significantly in their approach to route establishment.
After having considered the basic route establishment of the two WMN routing protocols, we
considered their performance in terms of latency and packet delivery ratio (PDR). In order to
measure the end-to-end latency of the paths established by both OLSR and BATMAN, we
used ping (5000 measurements) between every source destination node pair.
As before, we used paths with node 1 as the source as a representative example, in partic-
ular, we considered the following source destination pairs, which represent a 1-hop, 2-hop
and 3-hop path respectively: (1,19), (1,27) and (1,37). Figure 6.19 shows a box and whisker
plot of measured RTT values for these 3 paths, both for OLSR and BATMAN. The plot shows
the median, the maximum and minimum, as well as the four quartiles.
In Figure 6.19, the y-axis shows the RTT value of the routes in ms and the x-axis shows the
Figure 6.18: Established Routes Between Node 1 as Source and Other Nodes as Destinations: (a) OLSR Routing Protocol, (b) BATMAN Routing Protocol
Figure 6.19: Distribution of RTT Values without Interference (RTT in ms for the (1,19), (1,27) and (1,37) paths, BATMAN and OLSR)
source-destination pairs and the corresponding routing protocol. As expected, the RTT value
increases with the path length. We see a roughly similar result for both OLSR and BATMAN
for the three source destination pairs, despite the fact they chose different paths. This is as
expected, since the path length is identical, and the link quality of the corresponding hops is
also similar.
We also measured the Packet Delivery Ratio (PDR) for all the network paths. However, due
to the absence of mobility and of significant interference, we did not observe any packet loss,
and all paths achieved a PDR value of 100%. To consider a more interesting scenario with
packet loss, we looked at a scenario where interference is artificially generated by a node in
the R2Lab testbed. We will explore this in the following section.
With Interference
As mentioned before, some nodes on the R2Lab platform are equipped with a USRP device.
We used one of these devices to generate noise in the network. In particular, we used
the USRP installed on node 11, and with the help of the "uhd_siggen" Linux command we
generated Gaussian random noise output with 70dB gain.
This noise generating node is located between nodes 1 and 19 and above node 12, as
shown in Figure 6.14. We therefore expected these three nodes, and the corresponding
links, to be affected by the interference.
First, we were interested in how the interference impacts the established routes
between our three selected source destination node pairs, i.e. (1,19), (1,27) and
(1,37). The results are shown in Figure 6.20. Figure 6.20(a) shows the result of OLSR and
the routes established by BATMAN are shown in Figure 6.20(b). The black dashed line
represents one-hop paths, blue represents two-hop paths, red represents 3-hop paths, green
represents 4-hop paths, and finally orange represents 5-hop paths. We can see that the
injection of interference results in the establishment of longer paths, since both OLSR (with
the ETX routing metric) and BATMAN avoid shorter but lower-quality paths.
If we consider the path between nodes 1 and 19 for example, we can see that the one-hop
path in the scenario without interference has now been replaced with a 5-hop path, going
via the following nodes: 1, 4, 5, 33, 27, 19, as shown in Figure 6.20.
We noticed that the routes chosen by BATMAN and OLSR can vary for different experiment
runs. Figure 6.20 shows one example scenario, and Figure 6.21 shows another. For exam-
ple, in Figure 6.20 we see that both OLSR and BATMAN choose an identical path between
node 1 and node 19, but in Figure 6.21 we see two significantly different paths for the node
pair. This explains the significant difference in the PDR values of OLSR and BATMAN for
this node pair.
We performed the same experiments to measure path RTT and PDR, as we have done in
the case without interference. Figure 6.22 shows the RTT results, considering the paths from
node 1 to nodes 19, 27 and 37 respectively, as in our previous scenario without interference.
We see a significant increase in RTT values overall, with some values well above 1000 ms.
(Note that the y-axis (RTT) is in logarithmic scale here.) This is due to two main reasons. The
first is the significant increase in path length, as discussed above. The second reason is the
lower quality of wireless links, which results in a higher number of packet retransmissions.
We observe that both OLSR and BATMAN are similarly impacted by the increase in RTT
due to interference. Again, this is as expected, since both protocols aim to avoid low quality
Figure 6.20: Established Routes Between Node 1 and Other Nodes After Applying Interference: (a) OLSR Routing Protocol, (b) BATMAN Routing Protocol
Figure 6.21: Established Routes Between Node 1 and Other Nodes After Applying Interference (second example run): (a) OLSR Routing Protocol, (b) BATMAN Routing Protocol
Figure 6.22: Distribution of RTT Values with Interference (RTT in ms, log scale, for the (1,19), (1,27) and (1,37) paths, BATMAN and OLSR)
Figure 6.23: PDR Values with Interference for Node 1 as Source and Other Nodes as Destinations (PDR in %, per source-destination pair, BATMAN and OLSR)
paths, and therefore establish significantly longer paths.
We also measured the PDR on these paths, as in our previous scenario without interference.
Figure 6.23 shows the results. The graph shows the average of 10 experiment runs, with the
95% confidence interval.
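The per-path averages and error bars can be derived from the per-run ping counts as sketched below; the run values are made up for illustration, and we use the normal-approximation 1.96 factor rather than the Student-t value for 10 runs.

```python
import math

def pdr(received, sent):
    """Packet Delivery Ratio in percent."""
    return 100.0 * received / sent

def mean_ci95(samples):
    """Mean and 95% confidence half-width (normal approximation)."""
    n = len(samples)
    mean = sum(samples) / n
    if n < 2:
        return mean, 0.0
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

# Ten made-up runs of PDR (%) for one path under interference
runs = [38.0, 41.5, 36.2, 40.1, 39.3, 37.8, 42.0, 35.9, 38.6, 40.4]
m, h = mean_ci95(runs)
print(f"{m:.1f}% +/- {h:.1f}")
```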
The x-axis shows the considered source-destination pair. The y-axis shows the PDR value in
percent. In contrast to the scenario without interference, we now observe
paths with less than 100% PDR. Overall, BATMAN and OLSR perform similarly, with the
exception of source destination pair (1,19), where the path chosen by OLSR achieves a
PDR of below 40%, while BATMAN achieves a PDR of greater than 90%. This can be
explained by the different approaches to considering link/path quality in the two protocols,
i.e. OLSR uses link state routing with the ETX metric, while BATMAN considers path
quality based on the best one-hop neighbour. Overall, the achieved results are as expected,
and hence confirm the R2Lab testbed as a suitable platform for wireless multi-hop experiments.
6.3.4 Summary
We have presented a first experimental validation of the recently launched R2Lab wireless
testbed platform with regard to wireless multi-hop experiments. In particular, we have per-
formed a basic evaluation of the OLSR and BATMAN WMN routing protocols, in terms of
latency and PDR. We considered a scenario without interference, and a scenario where we
injected interference, and we observed the impact on the route selection for both protocols.
Overall, the obtained results give us great confidence about the suitability of the R2Lab
testbed for experiments in Wireless Mesh Networking. Given the separation of the data
plane and control plane in the R2Lab architecture, we believe it offers great potential for
experiments on SDN-based WMNs, as considered in this thesis. Due to lack of time (R2Lab
was launched in November 2016), this remains to be explored in future work.
6.4 Conclusions
In this chapter, the evaluation of two key testbed platforms has been provided. First, we have
provided an extensive evaluation of Mininet-ns3-WiFi, a new hybrid testbed that combines
link and network emulation with running of real application and network protocol code, which
provides many benefits in the context of SDN and WMNs. Our first evaluation of this platform
has shown the potential as well as the limitations. The limitations are largely in terms of scal-
ability. When the network size and/or traffic reaches a certain threshold, the ns-3 real-time
scheduler can no longer keep up with real-time events, leading to inaccurate experiment
results. This is a general problem in emulation-based systems such as Mininet, but it is par-
ticularly critical in Mininet-ns3-WiFi. In order to deal with this problem, we have developed a
simple and low-cost method, which provides the experimenter with an indicator of the fidelity
and trustworthiness of the results. Given the scalability limitations of Mininet-ns3-WiFi, we
were only able to use the platform for small scale experiments. However, we see a great
potential for the platform for a wide range of wireless multi-hop experiments, if the current
limitations can be overcome. For example, adding multi-threading support in ns-3 should
improve the scalability. However, this was beyond the scope of this thesis.
Furthermore, we evaluated the R2Lab wireless testbed platform at INRIA Sophia-Antipolis,
France. This testbed has been very recently launched, and we presented the first basic
evaluation of the testbed for wireless multi-hop experiments, using traditional WMN routing
protocols, namely BATMAN and OLSR. Our results demonstrated the potential for SDN ex-
periments. Due to lack of time, more detailed studies of this remain to be done as future
work.
Chapter 7
SDN-based WMN Routing
7.1 Introduction
As mentioned before, SDN has gained a lot of momentum in computer networking due to its
potential to increase network performance, efficiency and programmability in wired networks,
especially in data centres and WANs [72]. One of the key goals of this thesis is to explore
the application of SDN to wireless networks, and in particular Wireless Mesh Networks. The
specific aim is to leverage the unique SDN features of centralised view of network state,
fine grained control of network traffic, combined with a higher level of abstraction, to achieve
simple and efficient routing and traffic engineering in WMNs. A key goal is to hide the
complexity of routing from the user by using a new, constraint programming-based SDN
northbound interface. This allows a user to express high level routing policy (the ’what’),
without having to worry about the low level details of the implementation and realisation (the
’how’).
As has been discussed in Section 3.1.4, a number of papers have addressed the potential
application of SDN concepts to Wireless Mesh Networks. However, these works have been
limited to relatively narrow use cases, and have not fully explored the potential of SDN for
routing in WMNs. In particular, none of the related works have truly embraced and leveraged
the core ideas of SDN, such as the use of higher levels of abstraction via a new northbound
interface. The ability to provide new higher levels of abstraction is considered the key ben-
efit of SDN, according to key proponents of the technology, such as Martin Casado, Scott
Shenker and others [132, 96, 131].
The concepts introduced in this chapter build on the work presented in previous chapters,
such as the efficient Topology Discovery mechanism presented in Chapter 4. Having an
efficient mechanism to provide an up-to-date view of the network topology is critical for im-
plementing an SDN-based routing solution. Furthermore, in this context, it is also important
to have information about the capacity of the wireless links. In contrast to wired networks,
where the link capacity can be assumed as known and largely static, the link capacity of
wireless networks can be highly dynamic. An SDN-based solution to estimate wireless link
capacity was presented in Chapter 5. Having a complete view of the network state at the
logically centralised SDN controller allows us to formulate routing in WMNs as a constrained
optimisation problem, which forms the core idea of this chapter. The experimental evalua-
tions of our new SDN-based WMN routing approach presented in this chapter are informed
by our investigations of wireless testbeds discussed in Chapter 6.
To implement our new SDN-based routing for WMNs, we leverage SCOR (Software-defined
Constrained Optimal Routing) [86], a new SDN northbound interface aimed at QoS routing
and traffic engineering. While the author of this thesis has contributed towards the devel-
opment of SCOR, the SCOR platform itself is not claimed as a contribution of the thesis.
The key contribution of this thesis, as presented in this chapter, is the application of SCOR
to implement efficient routing in Wireless Mesh Networks, with minimum complexity for the
user.
In summary, the key contributions of this chapter include a novel, constraint programming-based formulation of the WMN routing problem using SDN. We have demonstrated the fea-
sibility and simplicity of this approach using SCOR, a new SDN northbound interface which
provides high level primitives for routing. We have done this via a proof-of-concept im-
plementation of minimum delay routing and maximum residual capacity routing as our two
use cases. We have provided extensive experimental evaluation for these two example
scenarios, and have shown that significant performance improvements can be gained over
traditional WMN routing approaches.
The rest of this chapter is organised as follows. Section 7.2 gives a brief introduction of
SCOR, its design and implementation for SDN-based routing in WMNs. Section 7.3 presents
the first use case of WMN routing via SCOR, i.e. least cost path routing, and its experimental
evaluation. Section 7.4 presents our second use case, maximum residual capacity routing,
and its experimental evaluation. Finally, Section 7.5 concludes the chapter.
7.2 Software-defined Constrained Optimal Routing (SCOR)
In this section, we introduce SCOR, our new SDN northbound interface for QoS routing and
Traffic Engineering (TE) [86]. SCOR is based on Constraint Programming (CP) techniques
and is implemented in the MiniZinc modelling language [103] to provide Software-defined
Constrained Optimal Routing (SCOR).
The main idea behind using CP methods in SCOR is to provide a level of abstraction and
hide the complexity from the user. A powerful aspect of SCOR, which inherits from constraint
programming, is the separation of the problem formulation and its solution. SCOR’s layer of
abstraction hides the complexity of solving the problem from the user, and therefore greatly
simplifies the implementation of new routing and traffic engineering applications. The in-
creased level of abstraction and simplicity do not come at the cost of reduced efficiency.
Through the use of powerful generic constraint programming solvers, solutions to complex
routing problems can be found faster than through traditional procedural programming solu-
tions, in some cases by orders of magnitude, as shown in [86].
SCOR consists of different basic constraint programming building blocks (i.e. predicates)
built specifically for routing. A key feature of SCOR is that it is declarative, where only the
constraints and utility function of the routing problem need to be expressed. The complexity
of solving the problem is hidden from the user, and is handled by a powerful generic solver.
In this section, we first provide a brief background of constraint programming and its key
concepts. Then we introduce the SCOR framework and its key building blocks and predi-
cates.
7.2.1 Background: Constraint Programming
Constraint programming (CP) techniques were initially introduced in the 1960s and 1970s
in artificial intelligence and computer graphics [93]. They have found applications in many
fields such as operations research, programming languages and databases [125]. The main
idea behind CP is to separate the expression of a problem from its solution. Users are only
required to state the problem and the solution is found by the general purpose constraint
solvers, which are designed for this purpose.
This allows for very flexible modelling of problems and efficient solutions, for large, particu-
larly combinatorial problems [27]. In order to use CP to solve a real world problem, it must
be stated in the form of a CP model. A CP model includes at least three parts:
• Decision variables that represent tasks, metrics or resources of a real world problem.
• Variable domains that are a finite set of possible values for each decision variable.
• Constraints that state the relations (conditions, limitations, properties and bounds) be-
tween decision variables.
The constraints in fact restrict the values that all decision variables can have for a partic-
ular solution [27]. The solution of a CP model is the allocation of values for the decision
variables from their domains that simultaneously satisfy all the constraints. Accordingly, CP
problems are called Constraint Satisfaction Problems (CSPs). The solver can provide a single solution, i.e. the first of all possible solutions that satisfies the constraints, or the one that maximises or minimises a provided objective function. If such an objective function is defined, the problem is called a constrained Optimisation Problem (OP) [27]. The solvers
enumerate possible variable-value combinations intelligently and search the solution space
either systematically or through some forms of complete or incomplete search methods. The
performance of these search methods depends on the statement of the problem [125].
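The three components of a CP model, and the distinction between a satisfaction and an optimisation problem, can be illustrated with a deliberately tiny example. The following Python sketch (ours, purely for illustration) enumerates the search space by brute force; a real CP solver searches it far more intelligently.

```python
from itertools import product

# A toy Constraint Satisfaction Problem: three decision variables
# x, y, z, each with the finite domain {1, 2, 3}, and the constraints
# x < y and y < z.
domain = [1, 2, 3]

def satisfies(x, y, z):
    return x < y and y < z

# Enumerate all variable-value combinations and keep the satisfying ones.
solutions = [(x, y, z) for x, y, z in product(domain, repeat=3)
             if satisfies(x, y, z)]
print(solutions)            # only (1, 2, 3) satisfies both constraints

# Turning the CSP into a constrained optimisation problem: among all
# satisfying assignments, maximise the objective x + y + z.
best = max(solutions, key=lambda s: sum(s))
print(best)
```

Here the constraints prune the 27 candidate assignments down to a single solution; with an objective function added, the solver additionally ranks the satisfying assignments.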
7.2.2 SCOR Framework
Figure 7.1 illustrates the SDN-based WMN routing framework with the integration of SCOR
between the application and control layer. The SDN control layer, or Network Operating
System (NOS), communicates with forwarding elements at the infrastructure layer via Open-
Flow.
One of the SDN controller’s key roles is to gather network state information, typically the net-
work topology via the Topology Discovery module present in most SDN controller platforms.
In addition, we also have a Link Monitoring module, which in our case collects information
about the capacity of the wireless network links, as well as the current traffic load on the
various links. This information is critical in order to make optimal routing decisions. Both
topology and link state information are continuously gathered and made available to the
SCOR layer. An efficient topology discovery mechanism, suitable for wireless networks was
discussed in Chapter 4, and an SDN-based wireless link capacity estimation method was
presented in Chapter 5.
SCOR sits between the SDN control layer and the application layer, and consists of two
sub-layers, which represent two levels of programming abstractions. The bottom level is
the generic CP-based Programming Language, and the higher level is the set of predicates,
which form the routing application interface.
Finally, the top layer of the framework is the application layer, where network applications and
services such as routing and load balancing reside. Further details of SCOR, in particular its
key building blocks (predicates) that provide a high level interface for routing, are discussed
in the following.
7.2.3 SCOR Predicates
SCOR consists of a number of key building blocks, called predicates, which provide the crit-
ical abstractions required to model routing problems using constraint programming. Below,
we discuss the subset of SCOR predicates that we have used in our work on WMN routing.
[Figure 7.1 shows the layered framework: the Application Layer (network applications such as routing and load balancing), the SCOR layer acting as the northbound interface (the Network Path, Path Cost and Residual Capacity predicates on top of the CP-based Programming Language), the Control Layer (Network Operating System, with Topology Discovery and Wireless Link Monitoring modules), and the Infrastructure Layer (simple packet forwarding hardware), connected to the control layer via the southbound interface.]
Figure 7.1: SDN-based WMN Routing Framework
A more complete discussion of SCOR and its predicates is available in [86].
• Network Path Predicate
The most basic concept in routing that we need to model in SCOR is that of a network
path, i.e. a sequence of links, which connect two nodes. The network path predicate
defines a loop-free path from a source to its destination. All routing applications in
SCOR rely on this predicate for the network path definition.
Our implementation of the network path predicate is based on the flow conservation
rule that states the total traffic exiting a node is equal to traffic entering the node, unless
the node is either the source or the destination.
We assume a directed graph G(N ,L), representing a network topology. The set of
network nodes is represented by N , and L = {(u, v)|u, v ∈ N} represents the graph
arcs i.e. network links. Links are assumed multi-weighted with the weights being
scalars representing various link parameters such as capacity, delay and cost. In a
networking context, the flow f (u, v) of a link (u, v) represents the amount of traffic it is
carrying in bits per second, and is represented as a non-negative real number. Given
the directed nature of G, f (u, v) is not necessarily the same as f (v, u). With this, the
flow conservation rule can be stated as follows:
$$\sum_{\{v \,\mid\, (u,v)\in L\}} f(u,v) \;-\; \sum_{\{v \,\mid\, (v,u)\in L\}} f(v,u) \;=\; \begin{cases} 1 & \text{if } u = s,\\ -1 & \text{if } u = t,\\ 0 & \text{otherwise} \end{cases} \qquad (7.2.1)$$
Eq. 7.2.1 states that for a single unit flow, except for the source s and destination t,
the sum of the flows arriving at each node is equal to the sum of the flows leaving it. This
constraint defines a loop-free network path, i.e. a contiguous list of links connecting a
source and a destination node.¹
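As an illustration of Eq. 7.2.1, the following Python sketch (our own toy example, not part of SCOR) checks whether a candidate set of directed links satisfies the flow conservation rule for a unit flow:

```python
# Checking the flow conservation rule (Eq. 7.2.1) for a candidate set of
# directed links, for a unit flow from source s to destination t.
def conserves_flow(path_links, nodes, s, t):
    """Return True iff the selected links satisfy Eq. 7.2.1."""
    for u in nodes:
        out_flow = sum(1 for (a, b) in path_links if a == u)  # flow leaving u
        in_flow = sum(1 for (a, b) in path_links if b == u)   # flow arriving at u
        expected = 1 if u == s else -1 if u == t else 0
        if out_flow - in_flow != expected:
            return False
    return True

nodes = [1, 2, 3, 4]
print(conserves_flow([(1, 2), (2, 3), (3, 4)], nodes, 1, 4))  # True
print(conserves_flow([(1, 2), (3, 4)], nodes, 1, 4))          # False (not contiguous)
```

The first link set is a contiguous path from node 1 to node 4 and satisfies the rule at every node; the second violates it at the intermediate nodes.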
• Residual Capacity Predicate
The residual capacity (or available capacity) is an important concept in a lot of routing
algorithms, and is simply the difference between the link capacity and the amount of
traffic the link is currently carrying.
Placing flows on a link is only possible if there is enough (available/residual) capacity
on that link. This definition is represented in routing algorithms as a constraint called
capacity constraint, which limits the traffic to an upper bound as [30]:
f (u, v) ≤ c(u, v) ∀(u, v) ∈ L (7.2.2)
c(u, v) represents the capacity of link (u, v).
For a network with multiple concurrent flows, the residual capacity r(u, v) of a link (u, v)
is defined as follows:
r(u, v) = c(u, v)− f (u, v) ∀(u, v) ∈ L (7.2.3)
Here, f(u, v) represents the total, aggregate flow on the link, and is defined as the sum
of all the individual flows f_k x_k(u, v), with flow index k ranging from 1 to the total number
of flows K:

¹ The equation expresses the rule for a unit flow, i.e. for f(u, v) = 1, but this can easily be generalised for any flow value.
$$f(u,v) \;=\; \sum_{k=1}^{K} f_k \, x_k(u,v) \qquad \forall (u,v) \in L, \qquad x_k(u,v) \in \{0,1\} \qquad (7.2.4)$$
Here, x_k(u, v) is a binary variable that indicates if flow k is passing through link (u, v)
or not.
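To make Eqs. 7.2.3 and 7.2.4 concrete, the following Python sketch (ours, with illustrative capacities and flow rates only) computes the residual capacity of each link for two concurrent flows:

```python
# Residual capacity of each link (Eqs. 7.2.3 and 7.2.4): the capacity
# minus the aggregate flow currently routed across it.
capacity = {("u", "v"): 54.0, ("v", "w"): 24.0}   # link capacities in Mbit/s
flow_rates = [10.0, 5.0]                          # f_k for flows k = 1, 2
# x_k(u, v): 1 if flow k uses the link, 0 otherwise
membership = {("u", "v"): [1, 1], ("v", "w"): [0, 1]}

def residual(link):
    # Eq. 7.2.4: aggregate flow is the sum of f_k * x_k(u, v) over all k
    aggregate = sum(f * x for f, x in zip(flow_rates, membership[link]))
    # Eq. 7.2.3: residual capacity r(u, v) = c(u, v) - f(u, v)
    return capacity[link] - aggregate

print(residual(("u", "v")))   # 54 - (10 + 5) = 39.0
print(residual(("v", "w")))   # 24 - 5 = 19.0
```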
• Path Cost Predicate
The cost of a network path is an important concept utilised in many routing algorithms.
A wide range of network parameters, including additive parameters such as hop count
and delay, concave parameters such as bandwidth, or multiplicative parameters such
as packet loss ratio are applied in routing algorithms and protocols as cost metrics. The
path cost predicate assumes an additive cost metric, such as in the case of Routing
Information Protocol (RIP) [59], which uses hop count metric. As we will show, it can
also be used for other additive metrics such as delay.
This predicate defines the cost a(P_k) of a path P_k as follows:

$$a(P_k) \;=\; \sum_{(u,v)\in L} a(u,v)\, x_k(u,v) \qquad (7.2.5)$$

$$\text{where}\quad x_k(u,v) = \begin{cases} 1 & \text{if } (u,v) \in P_k\\ 0 & \text{otherwise} \end{cases} \qquad (7.2.6)$$
Here, a(u, v) is the link cost of link (u, v), and x_k(u, v) is a binary variable that indicates
if link (u, v) is part of path P_k.
In the following, we will discuss our implementation of these SCOR predicates in the MiniZinc
language [103], as well as the implementation of the SCOR framework in the POX SDN
controller platform.
7.2.4 SCOR Predicates Implementation
SCOR is implemented in MiniZinc [103], which is a declarative Constraint Programming (CP)
modelling language. It comes with a set of pre-packaged solvers, but it can be used with a
range of other solvers as well. The ease of implementation, simplicity, expressiveness and
compatibility with many solvers has made it a good choice for the basis of SCOR. MiniZinc
includes a rich library of global constraints, which model high-level CP abstractions [103].
A problem is stated in MiniZinc in two parts, the model and model data. The model uses pa-
rameters, decision variables and constraints to describe the structure of a CP problem. The
model data includes the values of static parameters that are determined when the problem
is defined, e.g. in a separate file. The values of decision variables are undecided and they
are determined by the solver. Different instances of model data can be used with a single
model to cover various problem scenarios [102].
MiniZinc includes predicates that are similar to functions or methods in procedural program-
ming languages for creating abstractions, modularity and code reuse. Predicates define
higher level constraints and can be included in a model using an include statement (e.g.
include "globals.mzn";). MiniZinc expressions, syntax and a wide range of examples are
explained in detail in [92].
We now discuss the MiniZinc implementation of the Network Path predicate, discussed in
the previous section, as an example. The Network Path predicate is the most complex one,
and is representative of the translation into MiniZinc code.
The MiniZinc code that implements the flow conservation rule, as defined in Eq. 7.2.1 and
hence implements the network path predicate, is shown in Predicate 1.
Here, Nnodes and Nlinks represent the number of nodes and links respectively, and s and t
are the source and destination nodes of the flow. Links is a 2-dimensional array, with each
row representing a link. In our implementation, a row (or link) k consists of the following four
elements [u_k, v_k, w_1, w_2], with u_k and v_k representing the source and destination node, and
w_1 and w_2 representing two link weights. The choice of two for the number of link weights is
arbitrary, and can easily be extended to any required number. LPM (Link Path Membership)
Predicate 1 network path

1: forall(i in 1..Nnodes)(
2:   node_flow_in[i] = sum(k in 1..Nlinks)
       (if Links[k, 2] = i then LPM[k] else 0 endif) ∧
3:   node_flow_out[i] = sum(k in 1..Nlinks)
       (if Links[k, 1] = i then LPM[k] else 0 endif) ∧
4:   node_flow_in[i] + (if i = s then 1 else 0 endif) =
       node_flow_out[i] + (if i = t then 1 else 0 endif) ∧
5:   node_flow_in[i] <= 1)
is an array of binary decision variables that indicates which links belong to a path.
LPM[i] = 1 means that link i belongs to the path, and LPM[i] = 0 means that it does not.
In lines 2-4 of Predicate 1, the flow conservation rule applies to all nodes. Line 2 defines the
total flow arriving at node i. This is done by summing up the link-path-memberships LPM
of the links in which node i is the sink node, i.e. Links[k, 2] = i. (The Boolean operator ∧
represents conjunction, i.e. logical AND.) Line 3 defines the total flow leaving node i in a
similar manner. Line 4 applies the equality constraint of the flow arriving and departing each
node, with the exception of the source and sink nodes.
The definition of a path via the flow conservation rule expressed in lines 1-4 does not prohibit
routing loops. The additional constraint in line 5, which says that a flow can arrive at a node
only once, guarantees paths are loop free. For clarity’s sake, we have discussed the network
path predicate for a single flow only. However, our implementation supports any number of
concurrent flows.
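To clarify the semantics of Predicate 1, the following Python sketch (ours, purely illustrative) brute-forces all LPM assignments on a toy topology and keeps those that satisfy the constraints of lines 2-5; a CP solver explores the same space far more efficiently.

```python
from itertools import product

# Brute-force rendering of Predicate 1's semantics on a toy topology:
# enumerate all binary LPM vectors and keep those satisfying flow
# conservation (line 4) and the loop-freedom constraint (line 5).
links = [(1, 2), (2, 3), (1, 3), (3, 2)]   # directed toy topology
nodes = [1, 2, 3]
s, t = 1, 3

def is_path(lpm):
    chosen = [l for l, x in zip(links, lpm) if x]
    for i in nodes:
        flow_in = sum(1 for (u, v) in chosen if v == i)   # line 2
        flow_out = sum(1 for (u, v) in chosen if u == i)  # line 3
        src = 1 if i == s else 0
        dst = 1 if i == t else 0
        if flow_in + src != flow_out + dst:   # line 4: flow conservation
            return False
        if flow_in > 1:                       # line 5: loop-freedom
            return False
    return True

paths = [[l for l, x in zip(links, lpm) if x]
         for lpm in product([0, 1], repeat=len(links)) if is_path(lpm)]
print(paths)   # [[(1, 3)], [(1, 2), (2, 3)]]
```

The enumeration returns exactly the two loop-free paths from node 1 to node 3; link (3, 2) never appears in a solution, since any assignment using it violates conservation at node 2 or node 3.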
7.2.5 SCOR Framework Implementation in POX SDN Controller
We have implemented the SCOR framework as a component on the POX SDN controller
platform [94]. However, the implementation does not rely on any POX specific feature, and
can easily be adapted to other SDN controller platforms [86].
The process and steps of flow calculation and installation by the controller via SCOR is
outlined in the following:
• When a packet arrives at an SDN switch that does not match an existing flow, it is sent
to the SDN controller encapsulated in an OpenFlow Packet-In message, which is the
default OpenFlow behaviour.
• The controller extracts the flow specifications (network protocol, source and destination
addresses, and transport layer source and destination ports) from the packet.
• The controller passes this information, together with network state information such as
network topology, link capacities, current traffic load etc., to the SCOR layer.
• SCOR converts this information to a MiniZinc data file. The MiniZinc model is created
from the SCOR specification of the type of routing we want to implement, e.g. shortest
path routing, minimum delay routing, etc. In the current implementation, this is pro-
vided as a MiniZinc file, using our defined SCOR routing predicates. In the future, this
information could be provided via a Graphical User Interface.
• SCOR calls the chosen CP solver (via a command line interface) and passes
the model and data files as arguments of the command, specifying the constrained
optimisation problem to be solved.
• The solver finds a solution to the problem, i.e. a flow, expressed as a sequence of
links, in the form of a Link Path Membership (LPM) array.
• SCOR then converts the sequence of links into a sequence of (switch node, switch port)
tuples, and then into the corresponding OpenFlow rules, which are then installed on
the corresponding SDN switches via OpenFlow FlowMod messages.
This process is repeated for each new flow request.
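The steps above can be condensed into a sketch. The MiniZinc solver invocation is stubbed out, and all helper names (build_data_file, solve_stub, lpm_to_rules) as well as the interface names are our own illustrative choices, not part of the POX or SCOR APIs.

```python
def build_data_file(links, s, t):
    """Step 4: serialise network state into MiniZinc data-file syntax."""
    rows = ", ".join(f"{u}, {v}" for (u, v) in links)
    return (f"Links = [| {rows} |];\n"
            f"s = {s};\n"
            f"t = {t};\n")

def solve_stub(data):
    """Stand-in for step 5: calling a CP solver on the model and data
    files. Returns an LPM array: which links belong to the found path."""
    return [1, 1, 0]   # pretend the solver selected the first two links

def lpm_to_rules(lpm, links, out_port):
    """Steps 6-7: turn the LPM array into per-switch forwarding rules,
    i.e. (switch, output port) pairs to be installed via FlowMod."""
    return [(u, out_port[(u, v)]) for (u, v), x in zip(links, lpm) if x]

links = [("s1", "s2"), ("s2", "s3"), ("s1", "s3")]
out_port = {("s1", "s2"): 2, ("s2", "s3"): 3, ("s1", "s3"): 4}
data = build_data_file(links, "s1", "s3")
rules = lpm_to_rules(solve_stub(data), links, out_port)
print(rules)   # [('s1', 2), ('s2', 3)]
```

In the real implementation, solve_stub corresponds to invoking the solver on the chosen SCOR routing model, and the resulting rules are pushed to the switches as OpenFlow FlowMod messages.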
In the following, we demonstrate the feasibility of SDN-based routing in Wireless Mesh Net-
works (WMNs) using our SCOR framework. This approach is very different from the tradi-
tional approach to routing in WMNs. Firstly, the network operating system (SDN controller)
abstracts from the complexity of dealing with a distributed system of forwarding elements,
and provides a centralised view of the network state, e.g. network topology, link capacity,
current traffic, etc. This allows the formulation of the routing problem as a global optimisation
problem.
Secondly, SDN-based routing provides a much finer level of granularity via its flow-based
routing. In traditional WMNs, traffic is forwarded hop-by-hop simply based on the IP desti-
nation address of packets. This is very coarse grained, and does not allow, for example, to
load balance different flows to the same destination across multiple paths.
Thirdly, and most importantly, we can greatly simplify the process of routing in WMNs via
the power of abstraction, which is one of the key benefits provided by SDN. By providing
high level abstractions for routing via our new northbound interface (SCOR), we can greatly
simplify the problem of routing. We separate the expression of the high level policy or re-
quirement from the problem of finding the actual solution. In the following sections, we will
demonstrate that this approach greatly reduces the complexity of routing, and that it can
achieve significantly improved performance compared to traditional approaches. We will do
this based on two WMN routing use cases, least cost path routing (minimum delay routing)
and maximum residual capacity routing.
7.3 SDN-based WMN Routing Use Case 1: Least Cost Path
Routing
As a first use case of WMN routing using SCOR, we consider least cost path routing [35].
Using different cost metrics allows this application to model several routing algo-
rithms such as minimum (transmission) delay routing, minimum loss routing [77] and shortest
path routing (using hop count as the metric). Shortest path routing is one of the most widely
used routing algorithms in networking, and is used in protocols such as OSPF [98], RIP and
IGRP [126]. Several algorithms, such as Dijkstra and Bellman-Ford, have been proposed to
efficiently solve this problem in traditional (procedural) programming. Model 1 illustrates the
implementation of least cost path routing in SCOR.
Model 1 Least Cost Path Routing in SCOR

% Include items
1: include "Predicate_network_path.mzn";
2: include "Predicate_path_cost.mzn";
% Parameters
3: array[int, int] of int: Links;
4: int: Nlinks = max(index_set_1of2(Links));
5: array[int] of int: Nodes;
6: array[int] of int: Flows;
7: int: Nflows = max(index_set(Flows));
8: array[1..Nflows] of int: s;
9: array[1..Nflows] of int: t;
% Decision Variables
10: array[1..Nflows] of var int: Cost;
11: array[1..Nlinks, 1..Nflows] of var 0..1: LPM;
% Constraint items
12: constraint network_path(LPM, Links, Nodes, s, t);
13: constraint path_cost(LPM, Links, Cost);
% Solve item
14: solve minimize Cost;
The first two lines are include items, which include the two required SCOR predicates, i.e.
network path and path cost (lines starting with a % are comments). Lines 3-9 declare the
parameters, which are required to model the least cost path problem. Lines 10-11 declare
the two decision variables Cost and LPM. The main body of the program includes the
Constraint items (lines 12-13) and the Solve item (line 14). The first constraint, expressed
via the network path predicate, defines the path from source to destination. The second
constraint, expressed via the path cost predicate, defines the cost associated with each
path, e.g. delay. For our use case, we consider delay as the path metric, which allows us
to implement minimum delay routing, as discussed below. By assigning the link delay as
the link cost in the path cost predicate, the least cost path routing model shown in Model 1
defines the minimum delay routing problem [86].
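To illustrate why minimum delay routing can outperform hop-count-based shortest path routing, the following sketch runs Dijkstra's algorithm over a small delay-weighted toy graph; the delay values are hypothetical, loosely following those used in our emulation, and are not a SCOR component.

```python
import heapq

# Delay-weighted toy graph: edge weights are link delays in ms.
graph = {
    "A": {"B": 22, "C": 1},
    "B": {"D": 22},
    "C": {"E": 1, "B": 1},
    "E": {"D": 6},
    "D": {},
}

def dijkstra(src, dst):
    """Return the minimum delay path and its total delay."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

print(dijkstra("A", "D"))   # (['A', 'C', 'E', 'D'], 8)
# The min-hop path A-B-D has only 2 hops but 44 ms of delay; the
# minimum delay path A-C-E-D takes 3 hops but only 1 + 1 + 6 = 8 ms.
```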
In the following, we discuss our experimental evaluation of this approach to minimum delay
routing in WMNs.
7.3.1 Experiment Scenario
For our experiments, we consider the 19 node WMN topology shown in Figure 7.2, which is
based on a real, commercial WMN deployment in the United States [13]. Each WMN node
(indicated as a square) is implemented as an OpenFlow switch (OVS) in Mininet. In order
to send and receive application traffic via each node, we attach a host to each of the 19
switches.
Unfortunately, given the scale of the network, we are not able to use Mininet-ns3-WiFi for our
experiments. Instead, we use a more basic emulation of wireless links via the Linux tc com-
mand, which allows the emulation of different link characteristics such as delay, bandwidth
and packet loss. We acknowledge that the use of tc for wireless link emulation in WMNs
has its limitations, in particular its inability to emulate inter-flow and intra-flow interference.
Despite these limitations, tc has been successfully used for this purpose in other works,
in particular [49], and is adequate to explore the basic concept of SCOR-based routing in
WMNs.
In our scenario, we randomly assigned static link delay values to all the inter-switch links,
chosen from the following set of values: 0 ms, 1 ms, 6 ms, 12 ms, 17 ms and 22 ms, as shown in Fig-
ure 7.2. Link delay in WMNs, and networks in general, is made up of four different compo-
nents, namely propagation delay, queuing delay, processing delay and transmission delay.
Due to the relatively short links, propagation delay is typically negligible, and other compo-
nents, such as queuing delay are more significant. Our emulated link delay values were
chosen in order to emulate links with different levels of congestion, and hence queuing de-
lay [32]. Since our focus is on the network path involving the WMN nodes (switches), we
assume a delay of 0ms for the host-switch links.
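For illustration, the following sketch constructs tc/netem command strings of the kind used to emulate such link characteristics; the interface names are hypothetical, and the commands are only assembled here, not executed.

```python
# Build the Linux tc commands used to emulate per-link characteristics.
# delay, rate and loss are standard tc/netem qdisc options; interface
# names like "s1-eth1" follow Mininet's naming convention but are
# hypothetical examples here.
def tc_delay_cmd(interface, delay_ms, rate_mbit=None, loss_pct=None):
    cmd = f"tc qdisc add dev {interface} root netem delay {delay_ms}ms"
    if rate_mbit is not None:
        cmd += f" rate {rate_mbit}mbit"
    if loss_pct is not None:
        cmd += f" loss {loss_pct}%"
    return cmd

print(tc_delay_cmd("s1-eth1", 17))
# tc qdisc add dev s1-eth1 root netem delay 17ms
print(tc_delay_cmd("s2-eth3", 6, rate_mbit=54, loss_pct=1))
# tc qdisc add dev s2-eth3 root netem delay 6ms rate 54mbit loss 1%
```

In a real deployment these strings would be run (e.g. via subprocess) on each emulated interface to impose the chosen delay, bandwidth and loss values.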
In a real system, the link delays would need to be continuously measured, as is the case for
the network topology as well as the link capacity, as discussed in Chapters 4 and 5 respec-
[Figure 7.2 shows the 19-node topology: switches S1-S32 (non-contiguous numbering), each with an attached host H1-H19 connected via a 0 ms link, and inter-switch links annotated with their emulated delays (0-22 ms).]
Figure 7.2: Wireless Real Topology with Delay
tively. Since our proposed link capacity estimation method is based on delay measurement,
we get the link delay information "for free". This could be achieved using the timing
measurements of when Packet_In and Packet_Out messages are sent and received at the
controller. Due to the delay variation of the Controller-Switch control channel, we expect this to
only provide a relatively rough estimate.
In order to measure the end-to-end delay between nodes, we use the ping utility. For every
source and destination pair in our network, we perform 30 Round Trip Time (RTT) measure-
ments. This allows us to measure the end-to-end delay of every path in the network, and
hence allows us to compare different routing approaches. In particular, we will compare min-
imum delay routing implemented via SCOR with traditional shortest-path routing, e.g. OLSR,
which uses hop count as its routing metric.
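The mean RTT and 95% confidence interval for each source-destination pair can be computed from the ping samples as sketched below; the sample values are made up for illustration, and we use the normal-approximation quantile 1.96 for simplicity.

```python
import math

# Mean and 95% confidence interval half-width from a set of RTT samples.
def mean_ci95(samples):
    n = len(samples)
    mean = sum(samples) / n
    # unbiased sample variance
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    # 1.96 is the normal quantile; for n = 30 the exact t-quantile
    # would be about 2.045
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width

rtts = [40.0, 40.2, 39.9, 40.1, 40.0, 40.3]   # ms, illustrative samples
mean, hw = mean_ci95(rtts)
print(f"{mean:.2f} ms +/- {hw:.2f} ms")
```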
All our experiments were run on a Linux PC with an Intel i7-2600K CPU running at 3.40GHz,
with 8GB of RAM.
7.3.2 Results
Figure 7.3 shows the end-to-end path delay measurements for two nodes, which we chose
as representative examples, i.e. node 3 (Figure 7.3(a)) and node 11 (Figure 7.3(b)). The
figure shows the measured RTT from node 3 to every other node, as well as from node 11
to every other node. The different source-destination pairs are shown on the x-axis, while
the y-axis shows the corresponding RTT values. Here, the mean of the 30 measurements is
shown. The graph also shows the corresponding 95% confidence intervals, but since they
are very small, they are barely visible. (In Figure 7.3(a), the maximum confidence interval
is 0.1 ms, and in Figure 7.3(b), it is 0.2 ms.) The RTT values are shown for both traditional
shortest path routing, achieved via OLSR, as well as SDN-based minimum delay routing
implemented via SCOR.
For example, in Figure 7.3(a), node 3 sends ICMP echo request packets to all other nodes,
from node 1 to node 19, as shown on the x-axis. The columns in the figure show the RTT results
obtained via shortest path and minimum delay routing.
Not surprisingly, we can clearly see the advantage of our minimum delay routing compared to
traditional shortest path routing. The global view of the network provided by SDN, combined
with the constraint optimisation approach of SCOR, allows us to find optimal end-to-end
paths with minimum delay. For some paths, such as H3-H11 in Figure 7.3(a), we see a
significant delay reduction with our routing approach compared to shortest path routing. For
this particular case, the RTT is reduced from 81 ms to 40 ms. However, for other cases, e.g.
H3-H9, there is no gain, since the shortest path happens to be the same as the minimum
delay path.
Figure 7.3(b) shows the same results for H11. Similar to Figure 7.3(a), there is no reduction
in delay for some paths, but for most paths, there is a significant improvement.
Figure 7.4 shows a box and whisker plot of the RTT reduction of our minimum delay routing
[Figure 7.3 shows, for each source-destination pair on the x-axis, the mean RTT in ms for SDN-based routing (Min-Delay) and shortest path routing (OLSR): (a) Node 3 as a Sender; (b) Node 11 as a Sender.]
Figure 7.3: Comparison of SDN-based Minimum Delay Routing and Shortest Path Routing
[Figure 7.4 shows a box and whisker plot of the RTT reduction ΔRTT in ms for each of Host1 to Host19.]
Figure 7.4: Distribution of RTT Reduction of SDN-based Routing and Shortest Path Routing per Each Node
approach versus shortest path routing, for all the 19 hosts in the network. For each host, the
graphs show the median, upper and lower quartile, as well as the maximum and minimum
RTT reduction (∆RTT), across all end-to-end paths originating from this host.
For example, the distribution of the RTT reduction values for H3 is shown in the third
column in the figure. The figure indicates that the RTT reduction (ΔRTT) achieved for
host H3 by our SDN-based minimum delay routing ranges from 0 ms to 55 ms.
As mentioned before, the 0ms reduction represents the cases where the shortest path is
also the minimum delay path.
Another way of representing these results is via the Cumulative Distribution Function (CDF)
of the RTT values for all end-to-end paths in the topology, shown in Figure 7.5. Again,
the figure clearly shows the benefit of SDN-based minimum delay routing over traditional
shortest path routing. The global mean of RTT values over all measurements for all end-to-
end paths is 40.1 ms for shortest path routing, and 23.7 ms for SDN-based minimum delay
routing, which represents an almost 41% overall reduction.
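As a quick sanity check, the quoted overall reduction follows directly from the two global means:

```python
# Overall RTT reduction from the two global mean RTTs reported above.
olsr_mean = 40.1   # ms, shortest path routing (OLSR)
sdn_mean = 23.7    # ms, SDN-based minimum delay routing

reduction_pct = (olsr_mean - sdn_mean) / olsr_mean * 100
print(f"{reduction_pct:.1f}%")   # 40.9%, i.e. almost 41%
```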
This simple example demonstrates the feasibility and simplicity of our SDN-based routing
[Figure 7.5 plots the CDF of the average RTT values (0-110 ms) for shortest path routing (OLSR) and SDN-based routing (Min-Delay).]
Figure 7.5: Cumulative Distribution Function of Average RTT Values
approach via constraint optimisation using SCOR. In the following section, we consider a
more complex routing problem, and demonstrate that this can be efficiently solved by simply
adding a very small number of lines of SCOR code.
7.4 SDN-based WMN Routing Use Case 2: Maximum Residual Capacity Routing
Maximum Residual Capacity routing [21] belongs to a group of routing problems that aim to
maximise the amount of concurrent traffic that can be routed in a given network.
The residual capacity of a link denotes the available bandwidth of that link, i.e. the
difference between the link capacity and the traffic flowing through it. Here, the
problem is to route a set of flows between nodes so that the minimum residual capacity
of the network is maximised. This is essentially a MAXIMIN optimisation problem, where
we try to find a solution with the maximum worst-case link capacity, in terms of minimum
residual capacity. This is a complex routing problem that requires hundreds of lines of code
in procedural programming [21]. In contrast, the implementation in SCOR, as shown in
Model 2, is relatively simple.
In this code, Nflows and Nlinks represent the number of flows and links respectively, and s
and t are the source and destination nodes of the traffic flows.
Links is a 2-dimensional array, with each row representing a link. LPM (Link Path Membership)
is a 2-dimensional array of binary decision variables that indicates which links belong to
which flow's path: LPM[i, j] = 1 means that link i belongs to the path of flow j, and
LPM[i, j] = 0 means that it does not.
Lines 1 and 2 are include items, which make the two predicates, network_path and
residual_capacity, available (lines starting with % are comments). Lines 3-9 declare the
parameters required to model the maximum residual capacity problem, and lines 10-11
declare the two decision variables, Residuals and LPM. The main body of the program
consists of the three constraint items (lines 12-14) and the solve item (line 15). The
network_path predicate defines the paths for all source-target pairs, and the residual_capacity
predicate applies the capacity constraint on each link. The solve item instructs the solver to
find the values of the decision variables that maximise the minimum of Residuals.
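To make the MAXIMIN objective concrete, the following sketch brute-forces the same optimisation on a hypothetical four-link instance; all names and numbers are illustrative, and SCOR itself hands this search to a constraint solver rather than enumerating paths:

```python
from itertools import product

# Toy instance: directed links with capacities; illustrative only,
# not part of the SCOR model itself.
LINKS = {("a", "b"): 10, ("b", "d"): 10, ("a", "c"): 10, ("c", "d"): 10}

def simple_paths(links, s, t, path=None):
    """Enumerate simple s-t paths over the given directed links."""
    path = path or [s]
    if s == t:
        yield path
        return
    for (u, v) in links:
        if u == s and v not in path:
            yield from simple_paths(links, v, t, path + [v])

def max_min_residual(links, flows):
    """Choose one path per flow so that the minimum residual capacity
    over all links is maximised (the MAXIMIN objective of Model 2)."""
    candidates = [list(simple_paths(links, s, t)) for (s, t, _) in flows]
    best, best_paths = None, None
    for choice in product(*candidates):
        residual = dict(links)
        for p, (_, _, demand) in zip(choice, flows):
            for u, v in zip(p, p[1:]):
                residual[(u, v)] -= demand
        worst = min(residual.values())
        if worst >= 0 and (best is None or worst > best):
            best, best_paths = worst, choice
    return best, best_paths

# Two concurrent 4 Mb/s flows from a to d: the optimum splits them over
# the two available paths, leaving a minimum residual of 6 on every link.
best, paths = max_min_residual(LINKS, [("a", "d", 4), ("a", "d", 4)])
```

Routing both flows along the same path would leave a worst-case residual of only 2, which is why the MAXIMIN solution spreads traffic across paths.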
The simple definition of a relatively complex routing problem in SCOR, such as maximum
residual capacity routing, shows the power of abstraction that SDN enables when combined with
the right northbound interface. While we focus on two use cases in this chapter, it is easy
to see that the modularity and abstraction of SCOR make it relatively easy to define a wide
range of routing problems, both for WMNs and for networks more generally.
In the following, we discuss our experiments, which aim to demonstrate the feasibility and
performance of this approach.
Model 2: Maximum Residual Capacity Routing in SCOR

% Include items
1 : include "Predicate network_path.mzn";
2 : include "Predicate residual_capacity.mzn";
% Parameters
3 : array[int, int] of int : Links;
4 : int : Nlinks = max(index_set_1of2(Links));
5 : array[int] of int : Nodes;
6 : array[int] of int : Flows;
7 : int : Nflows = max(index_set(Flows));
8 : array[1..Nflows] of int : s;
9 : array[1..Nflows] of int : t;
% Decision Variables
10 : array[1..Nlinks, 1..Nflows] of var 0..1 : LPM;
11 : array[1..Nlinks] of var int : Residuals;
% Constraint items
12 : constraint network_path(LPM, Links, Nodes, s, t);
13 : constraint residual_capacity(LPM, Links, Flows, Residuals);
14 : constraint forall(i in 1..Nlinks)(Residuals[i] >= 0);
% Solve item
15 : solve maximize min(Residuals);
7.4.1 Experiment Scenario
In this experiment, we aimed to compare our SCOR-based maximum residual capacity rout-
ing approach with traditional (shortest path) WMN routing protocols such as OLSR. The
metric for our comparison was the maximum aggregate throughput that could be achieved.
For our experiments, we considered three different network topologies: a mesh, a cube and
the topology of a real WMN deployment, as already used in our previous experiments (Figure 7.2).
These topologies provided us with different scenarios, in particular different
numbers of link-disjoint paths between node pairs.
While the main goal in maximum residual capacity routing is to achieve the maximum
throughput between a source and destination, it is important to note that finding maximum
residual capacity paths is not necessarily equivalent to finding disjoint paths. Figure 7.6
shows an example scenario to illustrate this. Here, we assume full-duplex links, which are
possible in multi-radio WMNs, and three concurrent flows from node 1 to node 7. Figure 7.6(a)
shows non-disjoint paths, and Figure 7.6(b) shows node-disjoint paths. Both represent a
[Figure: two views of an 8-switch (S1-S8) topology with 1 Mb/s links, highlighting the paths of Flows #1-#3]
Figure 7.6: Paths found for three concurrent flows from node 1 to node 7 using (a) non-disjoint and (b) disjoint paths
valid maximum residual capacity path solution.
As in our previous experiments, we used Mininet and the Linux tc tool to provide basic
wireless link emulation. In particular, we set the bandwidth of switch-switch links to 5.4
Mbps, which corresponds to the maximum achievable throughput of a 6 Mbps WiFi OFDM
link. We further assumed that host-switch links are wired, with a link capacity of 1 Gbps.
The goal of our experiment was to measure the maximum achievable aggregate throughput
between a source and destination node. Since SDN provides fine grained control over flow
routing and forwarding, our maximum residual capacity routing solution will be able to make
optimal use of all the available paths. This is in contrast to traditional routing protocols such
as OLSR, where all traffic to the same IP destination address is routed along the same
path. (This refers to a particular point in time, where routing table entries are assumed to be
constant.)
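The difference can be illustrated with two toy forwarding tables; the addresses, ports, and entries below are purely hypothetical:

```python
# Destination-based forwarding (traditional routing): one entry per
# destination, so all flows to the same host share the same next hop.
dest_table = {"10.0.0.2": "port-2"}

# Flow-based forwarding (OpenFlow-style match on destination and TCP
# port): concurrent flows to the same host can take different paths.
flow_table = {
    ("10.0.0.2", 5001): "port-2",
    ("10.0.0.2", 5002): "port-3",
}

def forward_by_destination(dst_ip, tcp_port):
    """Traditional lookup: the TCP port is ignored."""
    return dest_table[dst_ip]

def forward_by_flow(dst_ip, tcp_port):
    """SDN-style lookup: each (destination, port) flow has its own entry."""
    return flow_table[(dst_ip, tcp_port)]
```

Under destination-based forwarding, two iperf flows to the same host necessarily leave on the same port; under flow-based forwarding, the controller can steer them onto different link-disjoint paths.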
In this experiment, we used iperf to measure the throughput between source-destination
node pairs.
[Figure: 4x4 mesh of switches S1-S16, with host H1 attached to S1 and host H2 attached to S16]
Figure 7.7: Wireless Mesh Topology
7.4.2 Results
Mesh Topology
For our first experiment, we considered a mesh topology with 16 switches and two hosts as
shown in Figure 7.7. This figure also shows that switch S1 is connected to host H1, and
switch S16 is connected to host H2. As can be seen in the figure, there are only two link-disjoint
paths between nodes H1 and H2, for example:
• H1-S1-S5-S6-S10-S11-S15-S16-H2
• H1-S1-S2-S3-S4-S8-S12-S16-H2
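The number of link-disjoint paths between two nodes equals the maximum flow with unit link capacities (Menger's theorem). A small self-contained sketch, with a hypothetical grid builder mirroring the 4x4 mesh above; this code is illustrative and not part of our framework:

```python
from collections import defaultdict

def edge_disjoint_paths(edges, s, t):
    """Count link-disjoint s-t paths: max flow with every undirected
    link modelled as a capacity-1 arc in each direction."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)

    def augment(u, visited):
        # Depth-first search for an augmenting path in the residual graph.
        if u == t:
            return True
        visited.add(u)
        for v in adj[u]:
            if v not in visited and cap[(u, v)] > 0 and augment(v, visited):
                cap[(u, v)] -= 1
                cap[(v, u)] += 1
                return True
        return False

    count = 0
    while augment(s, set()):
        count += 1
    return count

# 4x4 grid of switches, numbered 1..16 row by row, as in Figure 7.7.
grid = [(r * 4 + c, r * 4 + c + 1) for r in range(4) for c in range(1, 4)]
grid += [(r * 4 + c, (r + 1) * 4 + c) for r in range(3) for c in range(1, 5)]
```

For the grid with S1 and S16 in opposite corners, the corner degree of 2 bounds the count, matching the two link-disjoint paths listed above.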
Figure 7.8 shows the result of our iperf experiments, which measured the achievable through-
put between node H1 and H2. We used multiple parallel iperf (TCP) sessions (or flows) in
order to measure the concurrent throughput across multiple paths. The graph shows the
average over 10 experiment runs, as well as the 95% confidence interval.
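The per-point statistics can be reproduced from raw runs as follows; this is a generic sketch using the normal approximation, and the sample values are illustrative only:

```python
import statistics

def mean_with_ci95(samples):
    """Sample mean and approximate 95% confidence half-width,
    using the normal critical value 1.96."""
    mean = statistics.mean(samples)
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half_width

# Illustrative throughput measurements (Mbps) over 10 runs.
runs = [5.4, 5.3, 5.5, 5.4, 5.2, 5.6, 5.4, 5.3, 5.5, 5.4]
mean, ci = mean_with_ci95(runs)
```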
The x-axis shows the number of concurrent iperf flows (offered load), and the y-axis shows
[Figure: aggregate throughput (Mbps) vs. number of flows for shortest-path (OLSR) and SDN-based routing (Max-Res)]
Figure 7.8: Aggregate Throughput vs. Number of Flows (Wireless Mesh Topology)
the achieved aggregate throughput in Mbps.
The graph shows the comparison between our SDN-based WMN routing (maximum residual
capacity) and a traditional shortest path routing, such as implemented in OLSR using the hop
count metric.
For a single iperf flow, both routing approaches achieve the same end-to-end throughput of
around 5.4 Mbps, which corresponds to the actual capacity of a 6 Mbps OFDM WiFi link.
However, when we ran two parallel iperf flows, our maximum residual capacity routing made
use of the second link-disjoint path and load balanced the traffic, achieving an aggregate
throughput of 10.5 Mbps, which is roughly double the single-flow throughput. In contrast,
traditional shortest path routing only achieves the same throughput as in single-flow case,
since it is unable to perform fine-grained flow-based routing as is possible in SDN, and
cannot load balance traffic across the two available link-disjoint paths.
When we further increased the number of iperf flows, i.e. to three, four, etc., maximum residual
capacity routing could not achieve any further improvement in aggregate throughput. This
was not due to the limitation of the routing method, but due to the limitation of the network
[Figure: cube topology with switches S1-S8, host H1 attached to S1 and host H2 attached to S7]
Figure 7.9: Wireless Cube Topology
topology, i.e. the fact that there are only two link-disjoint paths. In the following, we consider
different network topologies with a higher number of link disjoint paths, and hence a greater
potential for throughput increase.
Cube Topology
In a second experiment, we considered the cube topology shown in Figure 7.9, with 8 WMN
nodes (SDN switches) and two hosts, H1 and H2, attached to switches S1 and S7 respec-
tively. A key difference here is that we now have 3 link disjoint paths between H1 and H2,
for example:
• H1-S1-S5-S6-S7-H2
• H1-S1-S2-S3-S7-H2
• H1-S1-S4-S8-S7-H2
We conducted the same experiment as for the mesh topology. Figure 7.10 shows the results
for shortest path and SDN-based routing.
[Figure: aggregate throughput (Mbps) vs. number of flows for shortest-path (OLSR) and SDN-based routing (Max-Res)]
Figure 7.10: Aggregate Throughput vs. Number of Flows (Wireless Cube Topology)
As before, the x-axis represents the number of flows, and the y-axis shows the corresponding
aggregate throughput in Mbps.
We can see that our SDN-based routing can achieve an approximately linear increase in
throughput with an increasing number of flows, up to a value of 3, which is the number of
link-disjoint paths in the topology. The maximum throughput of almost 16 Mbps is achieved for
3 flows, which is close to the theoretical maximum of three times the achievable capacity of
a WiFi OFDM link with a 6 Mbps rate. We notice a slight decrease in aggregate throughput
as we further increase the number of iperf flows sent across the network. We believe this might
be due to the interaction of multiple flows routed along the same path, e.g. TCP congestion
control. As in the previous experiment, the throughput of traditional shortest path routing is
limited to the single-path throughput, as expected.
Realistic WMN Topology
For our final experiment, we consider the topology of a real deployed commercial Wireless
Mesh Network, i.e. the same topology that we considered in Section 7.3.1. For the benefit
[Figure: topology of the real WMN deployment with 19 switches (S1-S9, S13, S15, S18, S25-S27, S29-S32); host H1 attached to S1 and host H2 attached to S9]
Figure 7.11: Real WMN Topology
of the reader, we show this topology again in Figure 7.11, but this time without link delay
information and only showing the two hosts used in the experiment.
A key feature of this topology is that there are five link-disjoint paths, as shown below:
• H1-S1-S3-S9-H2
• H1-S1-S13-S6-S9-H2
• H1-S1-S4-S2-S15-S29-S9-H2
• H1-S1-S27-S18-S7-S9-H2
• H1-S1-S8-S25-S26-S31-S30-S9-H2
[Figure: aggregate throughput (Mbps) vs. number of flows for shortest-path (OLSR) and SDN-based routing (Max-Res)]
Figure 7.12: Aggregate Throughput vs. Number of Flows (Real WMN Topology)
Figure 7.12 shows the results of our experiment for this topology. As before, we can see
a linear increase in the aggregate throughput achieved by our SDN-based routing, if we
increase the number of offered iperf flows. The maximum throughput is reached once the
number of flows equals the number of link-disjoint paths that are available between the
source and destination node. In this scenario, SDN-based routing achieves a maximum of
over 25 Mbps in total throughput between nodes H1 and H2. This is close to the theoretical
maximum of 27 Mbps (five times the 5.4 Mbps achievable capacity of a 6 Mbps OFDM link), and
represents a roughly 400% increase over traditional routing. For larger and denser networks,
with a greater number of available link-disjoint paths, we can expect this gain to increase
further.
Our two use cases show the practical feasibility of SDN-based routing using SCOR. The key
feature of SCOR lies in the simplicity with which relatively complex routing problems can be
expressed and solved.
7.5 Conclusions
In this chapter, we have presented a novel SDN-based routing framework for WMNs. The
key contribution of our approach is that we leverage a higher level of abstraction (via the
SCOR northbound interface) to express relatively complex WMN routing problems in a very
compact and simple form, and we hide the complexity of finding a solution by using pow-
erful general constraint programming solvers. Abstraction is a powerful tool to enable and
facilitate innovation and increased programmability of the network, and by applying the SDN
paradigm to WMNs, we can leverage those benefits. Our routing framework uses the build-
ing blocks of topology discovery and link capacity estimation, discussed earlier in this thesis.
We have demonstrated the feasibility and simplicity of our SDN-based routing approach for
WMNs via two case studies, minimum delay (least cost) routing and maximum residual ca-
pacity routing. Both examples show the benefit of having a global network view as well as
treating routing as a constraint optimisation problem, using the simple SCOR interface. While
we believe this work demonstrates the potential of SDN-based routing in WMNs, more fu-
ture work is required. For example, it would be interesting to explore the addition of an
interference predicate to SCOR, which can model interference between wireless links in the
network. A number of interference models from the literature can be considered for this as
a starting point. The high degree of modularity of SCOR can accommodate such new
modules/predicates. Furthermore, a wider range of routing use cases can be considered and
evaluated, for different network scenarios. Our goal is to explore this in future work.
Chapter 8
Conclusion
This thesis investigates the adoption of the SDN paradigm for routing in Wireless Mesh
Networks. While the benefits of SDN have been clearly demonstrated for wired networks
such as wide area and data centre networks, this has not been fully explored in the context
of WMNs. Our aim was to leverage the key features of SDN, such as a global view of
the network state, fine grained flow-based routing, and most importantly, abstraction. Our
novel SDN-based routing framework for WMNs demonstrates that this approach is not only
feasible, but also simple and efficient.
As key building blocks for this framework, we have addressed the problems of efficient
topology discovery and link capacity estimation. Our improved approach to SDN topology
discovery achieves up to 80% lower overhead compared to OFDP, the current state-of-the-art
approach implemented by most SDN controller platforms. While this benefit also applies
to wired SDNs, it is particularly critical for wireless networks.
Another building block, which is critical for wireless SDNs, is our new link capacity estima-
tion approach. Here, we have adapted the idea of packet pair/train probing to wireless SDN.
Using SDN-specific features, we have also developed a new method that overcomes es-
timation inaccuracies due to cross traffic, which is a well-known shortcoming of traditional
packet pair/train probing approaches. We have demonstrated the feasibility and accuracy of
our approach, which is fully compatible with any OpenFlow-based switches and controllers.
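The core packet-pair relation behind this building block can be sketched in a few lines; the numbers below are illustrative, not measurements from the thesis:

```python
def packet_pair_capacity(packet_size_bits, dispersion_s):
    """Packet-pair principle: two back-to-back probe packets leave the
    bottleneck link separated by the packet transmission time, so the
    bottleneck capacity can be estimated as packet size / dispersion."""
    return packet_size_bits / dispersion_s

# A 1500-byte probe pair observed with 2 ms dispersion suggests a
# bottleneck capacity of 6 Mbit/s.
estimate = packet_pair_capacity(1500 * 8, 0.002)
```

Packet trains extend the same idea to multiple probes, averaging the dispersion to reduce noise.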
Experimental evaluation is an important methodology for research in Wireless Mesh Net-
works. Our evaluation of two new testbeds for their potential use as an experimental plat-
form for SDN-based WMN research represents an important contribution. We have identified
the potential and shortcomings of Mininet-ns3-WiFi, a new hybrid testbed. In addition, we
have provided a first experimental evaluation of the recently launched R2Lab testbed (INRIA,
France) for the specific use for WMN experiments.
Our new SDN-based routing framework provides a novel approach to routing and load bal-
ancing of traffic in WMNs. A key benefit is the improved performance, due to the global
view of the network, and the ability to efficiently compute routes as a solution to a constraint
optimisation problem. We have demonstrated the simplicity with which complex routing prob-
lems can be formulated and solved, using the power of abstraction provided by SCOR, our
new constraint-based programming northbound interface. This is a main contribution of
our work, and distinguishes it from key related works that have also considered the use of SDN
concepts in WMNs.
While we have demonstrated the basic building blocks and the feasibility of our approach,
this work is by no means complete. A critical component that is missing is the consideration
and modelling of interference, which is a key factor in wireless networks. This is to be
addressed in future work. Given the modularity and extensibility of SCOR, it is possible
to define new primitives (SCOR predicates) which model wireless interference. For this,
different interference models, e.g. hop-based or SINR-based, can be considered.
We believe our work represents significant steps towards the goal of SDN-based Wireless
Mesh Networks, and we hope it can build the basis for future work to further explore and
evaluate this promising concept.
Bibliography
[1] GENI Wiki. http://groups.geni.net/geni/wiki/OpenFlowDiscoveryProtocol.
[2] Iperf. https://iperf.fr.
[3] Multi-threaded simulation implementation for multicore. https://www.nsnam.org/wiki/Current_Development.
[4] ns-3 Emulation Overview. https://www.nsnam.org/docs/release/3.17/models/single-html/index.html.
[5] OFELIA - OpenFlow in Europe - Linking Infrastructure and Applications. http://www.fp7-ofelia.eu/assets/IslandsinventoryPhaseIOpenCall.pdf.
[6] OFELIA Tutorial. http://www.fp7-ofelia.eu/assets/Uploads/OFELIA-Tutorial.pdf.
[7] OpenFlow Standard. https://www.opennetworking.org/sdn-resources/openflow.
[8] OpenFlow Standard version 1.5. https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.5.0.noipr.pdf.
[9] Open Networking Foundation. https://www.opennetworking.org.
[10] Open vSwitch. http://openvswitch.org.
[11] OpenDaylight. http://www.opendaylight.org/project/technical-overview.
[12] psutil. https://github.com/giampaolo/psutil.
[13] Real WMN deployment in the United States. Firetide, Private Communication.
[14] IEEE Standard for Local and Metropolitan Area Networks - Station and Media Access
Control Connectivity Discovery, IEEE Std 802.1AB, 2009.
[15] M. Abolhasan, B. Hagelstein, and J.-P. Wang. Real-world performance of current
proactive multi-hop mesh protocols. In Communications, 2009. APCC 2009. 15th
Asia-Pacific Conference on, pages 44–47. IEEE, 2009.
[16] M. Abolhasan, T. Wysocki, and E. Dutkiewicz. A review of routing protocols for mobile
ad hoc networks. Ad hoc networks, 2(1):1–22, 2004.
[17] R. Ahlswede, N. Cai, S.-Y. Li, and R. W. Yeung. Network information flow. IEEE
Transactions on information theory, 46(4):1204–1216, 2000.
[18] I. F. Akyildiz, P. Wang, and S.-C. Lin. Softair: A software defined networking architec-
ture for 5g wireless systems. Computer Networks, 85:1–18, 2015.
[19] I. F. Akyildiz, X. Wang, and W. Wang. Wireless mesh networks: a survey. Computer
Networks, 47(4):445–487, 2005.
[20] A. Al-Shabibi, M. De Leenheer, M. Gerola, A. Koshibe, W. Snow, and G. Parulkar.
Openvirtex: A network hypervisor. Open Networking Summit, 2014.
[21] K. Walkowiak. Maximizing residual capacity in connection-oriented networks. Journal of
Applied Mathematics and Decision Sciences, 2006:18, 2006.
[22] E. Alotaibi and B. Mukherjee. Survey paper: A survey on routing algorithms for wire-
less ad-hoc and mesh networks. Computer Networks, 56(2):940–965, Feb. 2012.
[23] A. Amokrane, R. Langar, R. Boutabayz, and G. Pujolle. Online flow-based energy effi-
cient management in wireless mesh networks. In 2013 IEEE Global Communications
Conference (GLOBECOM), pages 329–335. IEEE, 2013.
[24] B. N. Astuto, M. Mendonça, X. N. Nguyen, K. Obraczka, and T. Turletti. A survey
of software-defined networking: Past, present, and future of programmable networks.
Communications Surveys and Tutorials, IEEE Communications Society, 16(3):1617 –
1634, 2014.
[25] M. Bansal, J. Mehlman, S. Katti, and P. Levis. Openradio: A programmable wireless
dataplane. In Proceedings of the First Workshop on Hot Topics in Software Defined
Networks, HotSDN ’12, pages 109–114. ACM, 2012.
[26] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt,
and A. Warfield. Xen and the art of virtualization. In ACM SIGOPS Operating Systems
Review, volume 37, pages 164–177. ACM, 2003.
[27] R. Bartak. Constraint programming: In pursuit of the holy grail. In In Proceedings
of the Week of Doctoral Students (WDS99 -invited lecture), volume Part IV, pages
555–564, Prague, Poland, 1999. MatFyzPress.
[28] S. M. Bellovin. A best-case network performance model. 1992.
[29] C. J. Bernardos, A. De La Oliva, P. Serrano, A. Banchs, L. M. Contreras, H. Jin, and
J. C. Zúñiga. An architecture for software defined wireless networking. IEEE Wireless
Communications, 21(3):52–61, 2014.
[30] D. P. Bertsekas. Network optimization: continuous and discrete models. Athena Sci-
entific, Belmont Massachusetts, USA, 1998.
[31] F. Bokhari and G. Záruba. Partially overlapping channel assignments in wireless mesh
networks. 2012.
[32] J.-C. Bolot. Characterizing end-to-end packet delay and loss in the internet. J. High
Speed Networks, 2(3):305–323, 1993.
[33] M. E. M. Campista, P. M. Esposito, I. M. Moraes, L. H. M. Costa, O. C. M. Duarte, D. G.
Passos, C. V. N. De Albuquerque, D. C. M. Saade, and M. G. Rubinstein. Routing
metrics and protocols for wireless mesh networks. IEEE network, 22(1), 2008.
[34] M. C. Chan, C. Chen, J. X. Huang, T. Kuo, L. H. Yen, and C. C. Tseng. Opennet:
A simulator for software-defined wireless local area network. In 2014 IEEE Wireless
Communications and Networking Conference (WCNC), pages 3332–3336, April 2014.
[35] S. Chen and K. Nahrstedt. An overview of quality-of-service routing for next-generation
high-speed networks: problems and solutions. Network, IEEE, 12(6):64–79, 1998.
[36] T. Clausen and P. Jacquet. Optimized link state routing protocol (olsr). Technical
report, 2003. https://tools.ietf.org/html/rfc3626.
[37] Collaborative project within the European Commission's FP7 ICT Work Programme.
OFELIA testbed. http://www.fp7-ofelia.eu/.
[38] S. Costanzo, L. Galluccio, G. Morabito, and S. Palazzo. Software Defined Wireless
Networks: Unbridling SDNs. In Software Defined Networking (EWSDN), 2012 Euro-
pean Workshop on, pages 1–6. IEEE, Oct. 2012.
[39] D. S. De Couto, D. Aguayo, J. Bicket, and R. Morris. A high-throughput path metric
for multi-hop wireless routing. Wireless Networks, 11(4):419–434, 2005.
[40] P. Dely. Architectures and Algorithms for Future Wireless Local Area Networks. PhD
thesis, Karlstad University, 2013.
[41] A. Detti, C. Pisa, S. Salsano, and N. Blefari-Melazzi. Wireless mesh software defined
networks (wmsdn). In 9th IEEE International Conference on Wireless and Mobile
Computing, Networking and Communications, WiMob 2013, Lyon, France, October
7-9, 2013, pages 89–95, 2013.
[42] C. Dovrolis, P. Ramanathan, and D. Moore. Packet-dispersion techniques and a
capacity-estimation methodology. IEEE/ACM Trans. Netw., 12(6):963–977, Dec.
2004.
[43] R. Draves, J. Padhye, and B. Zill. Comparison of routing metrics for static multi-hop
wireless networks. In ACM SIGCOMM Computer Communication Review, volume 34,
pages 133–144. ACM, 2004.
[44] R. Draves, J. Padhye, and B. Zill. Routing in multi-radio, multi-hop wireless mesh net-
works. In Proceedings of the 10th Annual International Conference on Mobile Com-
puting and Networking, MobiCom ’04, pages 114–128, New York, NY, USA, 2004.
ACM.
[45] D. Erickson. Floodlight SDN Controller. http://www.projectfloodlight.org/floodlight/.
[46] D. Erickson. The beacon openflow controller. In Proceedings of the second ACM SIG-
COMM workshop on Hot topics in software defined networking, pages 13–18. ACM,
2013.
[47] ETTUS. USRP. https://www.ettus.com/.
[48] N. Feamster, J. Rexford, and E. Zegura. The road to sdn. Queue, 11(12):20, 2013.
[49] R. Fontes, S. Afzal, S. Brito, M. Santos, and C. Rothenberg. Mininet-wifi: Emulating
software-defined wireless networks. In Network and Service Management (CNSM),
2015 11th International Conference on, pages 384–389, Nov 2015.
[50] L. Galluccio, S. Milardo, G. Morabito, and S. Palazzo. Sdn-wise: Design, prototyping
and experimentation of a stateful sdn solution for wireless sensor networks. In 2015
IEEE Conference on Computer Communications (INFOCOM), pages 513–521. IEEE,
2015.
[51] A. D. Gante, M. Aslan, and A. Matrawy. Smart wireless sensor network management
based on software-defined networking. In 2014 27th Biennial Symposium on Commu-
nications (QBSC), pages 71–75, June 2014.
[52] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker.
Nox: Towards an operating system for networks. SIGCOMM computer communication
review, 38(3):105–110, July 2008.
[53] A. Gudipati, D. Perry, L. E. Li, and S. Katti. Softran: Software defined radio access
network. In Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in
Software Defined Networking, HotSDN ’13, pages 25–30, New York, NY, USA, 2013.
ACM.
[54] J. Guerin, M. Portmann, K. Bialkowski, W. L. Tan, and S. Glass. Low-cost wireless
link capacity estimation. In Wireless Pervasive Computing (ISWPC), 2010 5th IEEE
International Symposium on, pages 343–348. IEEE, 2010.
[55] J. Guerin, M. Portmann, and A. Pirzada. Routing metrics for multi-radio wireless
mesh networks. In Telecommunication Networks and Applications Conference, 2007.
ATNAC 2007. Australasian, pages 343–348. IEEE, 2007.
[56] D. Gupta, K. V. Vishwanath, M. McNett, A. Vahdat, K. Yocum, A. Snoeren, and G. M.
Voelker. Diecast: Testing distributed systems with an accurate scale model. ACM
Transactions on Computer Systems (TOCS), 29(2):4, 2011.
[57] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown. Reproducible net-
work experiments using container-based emulation. In Proceedings of the 8th inter-
national conference on Emerging networking experiments and technologies, CoNEXT
’12, pages 253–264, New York, NY, USA, 2012. ACM.
[58] I. T. Haque and N. Abu-Ghazaleh. Wireless software defined networking: A survey
and taxonomy. IEEE Communications Surveys & Tutorials, 18(4):2713–2737.
[59] C. L. Hedrick. Routing information protocol. 1988.
[60] B. Heller. Reproducible Network Research with High-fidelity Emulation. PhD thesis,
Stanford University, 2013.
[61] M. Hibler, R. Ricci, L. Stoller, J. Duerig, S. Guruprasad, T. Stack, K. Webb, and J. Lep-
reau. Large-scale virtualization in the emulab network testbed.
[62] Y. T. Hou, Y. Shi, and H. D. Sherali. Optimal spectrum sharing for multi-hop software
defined radio networks. In IEEE INFOCOM 2007-26th IEEE International Conference
on Computer Communications, pages 1–9. IEEE, 2007.
[63] F. Hu, Q. Hao, and K. Bao. A survey on software-defined network and openflow: From
concept to implementation. IEEE Communications Surveys and Tutorials, 16(4):2181–
2206, 2014.
[64] N. Hu, S. Member, P. Steenkiste, and S. Member. Evaluation and characterization of
available bandwidth probing techniques. IEEE Journal on Selected Areas in Commu-
nications, 21:879–894, 2003.
[65] H. Huang, P. Li, S. Guo, and W. Zhuang. Software-defined wireless mesh networks:
architecture and traffic orchestration. IEEE Network, 29(4):24–30, July 2015.
[66] INRIA. Inaugural meeting of the R2Lab testbed. https://www.inria.fr/en/centre/sophia/calendar/r2lab-anechoic-chamber-a-heterogeneous-wireless-testbed.
[67] INRIA. R2Lab testbed. https://r2lab.inria.fr/.
[68] A. Iwata, C.-C. Chiang, G. Pei, M. Gerla, and T.-W. Chen. Scalable routing strategies
for ad hoc wireless networks. IEEE journal on selected areas in communications,
17(8):1369–1379, 1999.
[69] V. Jacobson. Congestion avoidance and control. In ACM SIGCOMM computer com-
munication review, volume 18, pages 314–329. ACM, 1988.
[70] V. Jacobson. Pathchar: A tool to infer characteristics of internet paths, 1997.
[71] M. Jain and C. Dovrolis. End-to-end available bandwidth: Measurement methodology,
dynamics, and relation with TCP throughput, volume 32. ACM, 2002.
[72] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer,
J. Zhou, M. Zhu, J. Zolla, U. Hölzle, S. Stuart, and A. Vahdat. B4: Experience with a
globally-deployed software defined wan. SIGCOMM computer communication review,
43(4):3–14, Aug. 2013.
[73] X. Jin, L. E. Li, L. Vanbever, and J. Rexford. Softcell: Scalable and flexible cellular
core network architecture. In Proceedings of the ninth ACM conference on Emerging
networking experiments and technologies, pages 163–174. ACM, 2013.
[74] X. Jin, L. E. Li, L. Vanbever, and J. Rexford. Softcell: Taking control of cellular core
networks. arXiv preprint arXiv:1305.3568, 2013.
[75] M. Joa-Ng and I.-T. Lu. A peer-to-peer zone-based two-level link state routing for mo-
bile ad hoc networks. IEEE Journal on selected areas in communications, 17(8):1415–
1425, 1999.
[76] D. B. Johnson and D. A. Maltz. Dynamic source routing in ad hoc wireless networks.
In Mobile computing, pages 153–181. Springer, 1996.
[77] W. A. M. Jr., E. Aguiar, A. Abelem, and M. Stanton. Using multiple metrics with the
optimized link state routing protocol for wireless mesh networks. Simpósio Brasileiro
de Redes de Computadores e Sistemas Distribuídos, 2008.
[78] P. Jurkiewicz. Link modeling using ns-3, 2013. https://github.com/mininet/mininet/wiki/Link-modeling-using-ns-3.
[79] H. Kim and N. Feamster. Improving network management with software defined net-
working. communications magazine. IEEE, 51(2):114–119, 2013.
[80] Y.-H. Kim, A. Quereilhac, M. A. Larabi, J. Tribino, T. Parmentelat, T. Turletti, and
W. Dabbous. Enabling iterative development and reproducible evaluation of network
protocols. Computer Networks, 63:238–250, 2014.
[81] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan,
Y. Iwata, H. Inoue, T. Hama, et al. Onix: A distributed control platform for large-scale
production networks. In OSDI, volume 10, pages 1–6, 2010.
[82] D. Kreutz, F. M. V. Ramos, P. Veríssimo, C. E. Rothenberg, S. Azodolmolky, and S. Uh-
lig. Software-defined networking: A comprehensive survey. CoRR, abs/1406.0440,
2014.
[83] M. Labraoui, M. M. Boc, and A. Fladenmuller. Software defined networking-assisted
routing in wireless mesh networks. In 2016 International Wireless Communications
and Mobile Computing Conference (IWCMC), pages 377–382, Sept 2016.
[84] M. Labraoui, C. Chatzinakis, M. M. Boc, and A. Fladenmuller. On addressing mobility
issues in wireless mesh networks using software-defined networking. In 2016 Eighth
International Conference on Ubiquitous and Future Networks (ICUFN), pages 903–
908, July 2016.
[85] B. Lantz, B. Heller, and N. McKeown. A network in a laptop: rapid prototyping for
software-defined networks. In Proceedings of the 9th ACM SIGCOMM Workshop on
Hot Topics in Networks, page 19. ACM, 2010.
[86] S. Layeghy, F. Pakzad, and M. Portmann. Scor: Software-defined constrained optimal
routing platform for sdn. arXiv preprint arXiv:1607.03243, 2016.
[87] W. J. Lee, J. W. Shin, H. Y. Lee, and M. Y. Chung. Testbed implementation for rout-
ing wlan traffic in software defined wireless mesh network. In 2016 Eighth Interna-
tional Conference on Ubiquitous and Future Networks (ICUFN), pages 1052–1055,
July 2016.
[88] C. E. Leiserson. Fat-trees: universal networks for hardware-efficient supercomputing.
Computers, IEEE Transactions on, 100(10):892–901, 1985.
[89] L. E. Li, Z. M. Mao, and J. Rexford. Toward software-defined cellular networks. In Pro-
ceedings of the 2012 European Workshop on Software Defined Networking, EWSDN
’12, pages 7–12, Washington, DC, USA, 2012. IEEE Computer Society.
[90] B. Liskov. The power of abstraction. In N. Lynch and A. Shvartsman, editors, Dis-
tributed Computing, volume 6343 of Lecture Notes in Computer Science, page 3.
Springer Berlin Heidelberg, 2010.
[91] T. Luo, H.-P. Tan, and T. Q. Quek. Sensor OpenFlow: Enabling software-defined wire-
less sensor networks. IEEE Communications Letters, 16(11):1896–1899, 2012.
[92] K. Marriott, P. J. Stuckey, L. D. Koninck, and H. Samulowitz. A MiniZinc tutorial, 2014.
[93] B. Mayoh, E. Tyugu, and J. Penjam. Constraint programming, volume 131. Springer
Science & Business Media, 2013.
[94] M. McCauley. POX SDN Controller. https://github.com/noxrepo/pox.
[95] N. McKeown. Software-defined networking. INFOCOM keynote talk, 2009.
[96] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford,
S. Shenker, and J. Turner. OpenFlow: Enabling innovation in campus networks. ACM
SIGCOMM Computer Communication Review, 38(2):69–74, 2008.
[97] B. Melander, M. Björkman, and P. Gunningberg. A new end-to-end probing and analy-
sis method for estimating bandwidth bottlenecks. In Global Telecommunications Con-
ference, 2000. GLOBECOM’00. IEEE, volume 1, pages 415–420. IEEE, 2000.
[98] J. Moy. OSPF version 2, 1997.
[99] S. Murthy and J. J. Garcia-Luna-Aceves. An efficient routing protocol for wireless
networks. Mobile Networks and Applications, 1(2):183–197, 1996.
[100] J. Naous, D. Erickson, G. A. Covington, G. Appenzeller, and N. McKeown. Implement-
ing an OpenFlow switch on the NetFPGA platform. In Proceedings of the 4th ACM/IEEE
Symposium on Architectures for Networking and Communications Systems, pages
1–9. ACM, 2008.
[101] V. Nascimento, M. Moraes, R. Gomes, B. Pinheiro, A. Abelem, V. C. M. Borges, K. V.
Cardoso, and E. Cerqueira. Filling the gap between software defined networking and
wireless mesh networks. In 10th International Conference on Network and Service
Management, pages 451 – 454. IEEE, 2014.
[102] N. Nethercote, K. Marriott, R. Rafeh, M. Wallace, and M. G. de la Banda. Specification
of MiniZinc, 2014.
[103] N. Nethercote, P. J. Stuckey, R. Becket, S. Brand, G. J. Duck, and G. Tack. MiniZinc:
Towards a Standard CP Modelling Language, volume 4741 of Lecture Notes in Com-
puter Science, book section 38, pages 529–543. Springer Berlin Heidelberg, 2007.
[104] A. Neumann, C. Aichele, M. Lindner, and S. Wunderlich. Better Approach To Mo-
bile Ad-hoc Networking (B.A.T.M.A.N.) draft-wunderlich-openmesh-manet-routing-00,
2008. https://tools.ietf.org/html/draft-wunderlich-openmesh-manet-routing-00.
[105] N. Nikaein, H. Labiod, and C. Bonnet. DDR: Distributed dynamic routing algorithm for
mobile ad hoc networks. In Proceedings of the 1st ACM International Symposium on
Mobile Ad Hoc Networking & Computing, pages 19–27. IEEE Press, 2000.
[106] NS-3 Consortium. ns-3. https://www.nsnam.org/.
[107] S. D. Odabasi and A. H. Zaim. A survey on wireless mesh networks, routing met-
rics and protocols. International Journal of Electronics, Mechanical and Mechatronics
Engineering (IJEMME), 2(1):92–104, 2010.
[108] OLSR. Optimized Link State Routing Protocol (OLSR).
http://www.olsr.org/mediawiki/index.php/Olsrd_releases.
[109] ONS. SDN: Transforming Networking to Accelerate Business Agility, 2014.
http://www.opennetsummit.org/archives/mar14/site/why-sdn.htm.
[110] ORBIT. ORBIT. http://www.orbit-lab.org/.
[111] F. Pakzad. OFDPv2-A.
https://github.com/Farzaneh1363/RouteFlow/blob/discovery-version-2A/pox/pox/openflow/discovery.py.
[112] F. Pakzad. OFDPv2-B.
https://github.com/Farzaneh1363/RouteFlow/blob/discovery-version-2B/pox/pox/openflow/discovery.py.
[113] F. Pakzad, M. Portmann, W. L. Tan, and J. Indulska. Efficient topology discovery in
software defined networks. In 8th International Conference on Signal Processing and
Communication Systems. IEEE, 2014.
[114] F. Pakzad, M. Portmann, W. L. Tan, and J. Indulska. Efficient topology discovery in
software defined networks. In Signal Processing and Communication Systems (IC-
SPCS), 2014 8th International Conference on, pages 1–8. IEEE, 2014.
[115] V. D. Park and M. S. Corson. A highly adaptive distributed routing algorithm for mo-
bile wireless networks. In INFOCOM '97, Sixteenth Annual Joint Conference of the
IEEE Computer and Communications Societies, volume 3, pages 1405–1413. IEEE,
1997.
[116] P. Patil, A. Hakiri, Y. Barve, and A. Gokhale. Enabling software-defined networking for
wireless mesh networks in smart environments. In 2016 IEEE 15th International Sym-
posium on Network Computing and Applications (NCA), pages 153–157, Oct 2016.
[117] C. Perkins, E. Royer, and S. Das. RFC 3561 Ad hoc On-Demand Distance Vector
(AODV) Routing. Technical report, 2003.
[118] C. E. Perkins and P. Bhagwat. Highly dynamic destination-sequenced distance-vector
routing (DSDV) for mobile computers. In ACM SIGCOMM Computer Communication
Review, volume 24, pages 234–244. ACM, 1994.
[119] R. Prasad, C. Dovrolis, M. Murray, and K. Claffy. Bandwidth estimation: metrics,
measurement techniques, and tools. IEEE Network, 17(6):27–35, Nov 2003.
[120] Ryu project team. Ryu SDN Controller. http://osrg.github.io/ryu/.
[121] Trema project team. Trema SDN Controller. http://trema.github.io/trema/.
[122] M. K. Rafsanjani, S. Asadinia, and F. Pakzad. A Hybrid Routing Algorithm Based on
Ant Colony and ZHLS Routing Protocol for MANET, pages 112–122. Springer Berlin
Heidelberg, Berlin, Heidelberg, 2010.
[123] D. Raychaudhuri, I. Seskar, M. Ott, S. Ganu, K. Ramachandran, H. Kremo, R. Sira-
cusa, H. Liu, and M. Singh. Overview of the orbit radio grid testbed for evaluation
of next-generation wireless network protocols. In Wireless Communications and Net-
working Conference, 2005 IEEE, volume 3, pages 1664–1669. IEEE, 2005.
[124] R. Riggio, C. Sengul, L. Suresh, J. Schulz-Zander, and A. Feldmann. Thor: Energy
programmable WiFi networks. In Computer Communications Workshops (INFOCOM
WKSHPS), 2013 IEEE Conference on, pages 21–22. IEEE, 2013.
[125] F. Rossi, P. van Beek, and T. Walsh. Handbook of constraint programming, volume 1.
Elsevier, UK, 2006.
[126] C. L. Hedrick. An introduction to IGRP. Rutgers, The State University of New Jersey,
Center for Computers and Information Services, Laboratory for Computer Science
Research, 1991.
[127] S. Salsano, G. Siracusano, A. Detti, C. Pisa, P. L. Ventre, and N. Blefari-Melazzi.
Controller selection in a wireless mesh SDN under network partitioning and merging
scenarios. CoRR, abs/1406.2470, 2014.
[128] J. Schulz-Zander, C. Mayer, B. Ciobotaru, S. Schmid, and A. Feldmann. OpenSDWN:
Programmatic control over home and enterprise WiFi. In Proceedings of the 1st ACM
SIGCOMM Symposium on Software Defined Networking Research, page 16. ACM,
2015.
[129] J. Schulz-Zander, N. Sarrar, and S. Schmid. AeroFlux: A near-sighted controller ar-
chitecture for software-defined wireless networks. In Presented as part of the Open
Networking Summit 2014 (ONS 2014), 2014.
[130] M. Seyedzadegan, M. Othman, B. M. Ali, and S. Subramaniam. Wireless mesh net-
works: WMN overview, WMN architecture. In International Conference on Communi-
cation Engineering and Networks (IPCSIT), volume 19, pages 12–18, 2011.
[131] S. Shenker. The Future of Networking, the Past of Protocols. ONS, 2011.
https://www.youtube.com/watch?v=YHeyuD89n1Y.
[132] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and
G. Parulkar. FlowVisor: A network virtualization layer. OpenFlow Switch Consortium,
Tech. Rep, 2009.
[133] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson. Container-based
operating system virtualization: a scalable, high-performance alternative to hypervi-
sors. In ACM SIGOPS Operating Systems Review, volume 41, pages 275–287. ACM,
2007.
[134] J. Strauss, D. Katabi, and F. Kaashoek. A measurement study of available bandwidth
estimation tools. In Proceedings of the 3rd ACM SIGCOMM Conference on Internet
Measurement, IMC ’03, pages 39–44, New York, NY, USA, 2003. ACM.
[135] M. Suñé, L. Bergesio, H. Woesner, T. Rothe, A. Köpsel, D. Colle, B. Puype, D. Simeonidou,
R. Nejabati, M. Channegowda, et al. Design and implementation of the OFELIA
FP7 facility: The European OpenFlow testbed. Computer Networks, 61:132–150, 2014.
[136] L. Suresh, J. Schulz-Zander, R. Merz, A. Feldmann, and T. Vazao. Towards pro-
grammable enterprise WLANs with Odin. In Proceedings of the First Workshop on Hot
Topics in Software Defined Networks, HotSDN '12, pages 115–120, New York, NY,
USA, 2012. ACM.
[137] A. Tootoonchian, S. Gorbunov, Y. Ganjali, M. Casado, and R. Sherwood. On controller
performance in software-defined networks. In USENIX Workshop on Hot Topics in
Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE),
volume 54, 2012.
[138] A. Vishnoi, R. Poddar, V. Mann, and S. Bhattacharya. Effective switch memory man-
agement in openflow networks. Proceedings of the 8th ACM International Conference
on Distributed Event-Based Systems, pages 177–188, 2014.
[139] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler,
C. Barb, and A. Joglekar. An integrated experimental environment for distributed sys-
tems and networks. ACM SIGOPS Operating Systems Review, 36(SI):255–270, 2002.
[140] Wireshark. Wireshark. https://www.wireshark.org/.
[141] J. Yan and D. Jin. VT-Mininet: Virtual-time-enabled Mininet for scalable and accu-
rate software-defined network emulation. In Proceedings of the 1st ACM SIGCOMM
Symposium on Software Defined Networking Research, page 27. ACM, 2015.
[142] F. Yang, V. Gondi, J. O. Hallstrom, K.-C. Wang, and G. Eidson. OpenFlow-based load
balancing for wireless mesh infrastructure. In 2014 IEEE 11th Consumer Communi-
cations and Networking Conference (CCNC), pages 444–449. IEEE, 2014.
[143] M. Yang, Y. Li, D. Jin, L. Su, S. Ma, and L. Zeng. OpenRAN: A software-defined RAN
architecture via virtualization. In ACM SIGCOMM Computer Communication Review,
volume 43, pages 549–550. ACM, 2013.
[144] K.-K. Yap, M. Kobayashi, R. Sherwood, T.-Y. Huang, M. Chan, N. Handigol, and
N. McKeown. OpenRoads: Empowering research in mobile networks. ACM SIGCOMM
Computer Communication Review, 40(1):125–126, Jan. 2010.
[145] W. Yin, P. Hu, J. Indulska, M. Portmann, and J. Guerin. Robust MAC-layer rate control
mechanism for 802.11 wireless networks. In Local Computer Networks (LCN), 2012
IEEE 37th Conference on, pages 419–427. IEEE, 2012.
[146] D. Zeng, P. Li, S. Guo, T. Miyazaki, J. Hu, and Y. Xiang. Energy minimization
in multi-task software-defined sensor networks. IEEE Transactions on Computers,
64(11):3128–3139, Nov 2015.
[147] J. Zhou, Z. Ji, M. Varshney, Z. Xu, Y. Yang, M. Marina, and R. Bagrodia. WHYNET: A
hybrid testbed for large-scale, heterogeneous and adaptive wireless networks. In Pro-
ceedings of the 1st International Workshop on Wireless Network Testbeds, Experimental
Evaluation & Characterization, pages 111–112. ACM, 2006.
[148] D. Zhu, X. Yang, P. Zhao, and W. Yu. Towards effective intra-flow network coding in
software defined wireless mesh networks. In 2015 24th International Conference on
Computer Communication and Networks (ICCCN), pages 1–8. IEEE, 2015.
[149] R. Zurawski. Industrial communication technology handbook. CRC Press, 2005.