
Virtualizing a Wireless Network: The Time-Division Approach

Suman Banerjee, Anmol Chaturvedi, Greg Smith, Arunesh Mishra
Contact email: suman@cs.wisc.edu
http://www.cs.wisc.edu/~suman

Department of Computer Sciences, University of Wisconsin-Madison
Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory

Virtualizing a wireless network

• Virtualize resources of a node
• Virtualize the medium
  – Particularly critical in wireless environments
• Approaches: Time, Frequency, Space, Code

(Image courtesy: ORBIT)


[Diagram: experiments Expt-1, Expt-2, Expt-3 multiplexed along the time axis, versus along space, frequency, or code]

TDM-based virtualization

• Need synchronous behavior between node interfaces (see the sketch after the diagram below)
  – Between transmitter and receiver
  – Between all interferers and the receiver

[Diagram: node pairs A-B and C-D time-sharing between Expt-1 and Expt-2, switching experiments in lockstep across time slices]
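One simple way to approximate this lockstep behavior is deadline-based switching. The Ruby sketch below assumes node clocks are kept close by NTP and is illustrative only, not the actual ORBIT protocol.

    # Deadline-based switching sketch: the controller announces a start time
    # slightly in the future and every node sleeps until then, so all
    # interfaces switch experiments at (approximately) the same instant.
    def switch_at(deadline)
      delay = deadline - Time.now
      sleep(delay) if delay > 0
      yield
    end

    deadline = Time.now + 2     # announced by the controller to all nodes
    switch_at(deadline) { puts "switching to Expt-2 at #{Time.now}" }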

Problem statement

To create a TDM-based virtualized wireless environment as an intrinsic capability in GENI

• This work is in the context of TDM-virtualization of ORBIT

Current ORBIT schematic

[Diagram: a single Controller running the nodeHandler and UI drives the grid Nodes, each running a nodeAgent]

• Manual scheduling
• Single experiment on grid

Our TDM-ORBIT schematic

[Diagram: the UI and a Master Overseer drive multiple nodeHandlers on the Controller; each Node runs a Node Overseer managing several VM nodeAgents]

• Virtualization: abstraction + accounting
• Fine-grained scheduling for multiple experiments on the grid
• Asynchronous submission
• VM = User-Mode Linux

[Diagram: users submit experiments through the UI into a queue on the Controller; the Master Overseer's scheduler multicasts commands to per-experiment handlers, and each Node Overseer monitors its node and reports feedback]

Master overseer: policy-maker that governs the grid

Node overseer:
– Add/remove experiment VMs
– Swap experiment VMs
– Monitor node health and experiment status
– Mostly mechanism, no policy
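As a concrete illustration of this split, the following Ruby sketch shows a Master-Overseer-style scheduling loop. The queue, the time-slice length, and the "swap-in"/"swap-out" command strings are assumptions for illustration; only the multicast group and port are taken from the latency trace shown later.

    require 'socket'

    # Illustrative Master Overseer loop (assumed names and protocol, not the
    # actual ORBIT code): experiments are taken from a FIFO queue, each gets a
    # fixed time slice, and swap commands are multicast to the Node Overseers.
    TIME_SLICE = 30                  # seconds per experiment slice (assumed)
    MCAST_ADDR = '224.4.0.1'         # command channel, as in the latency trace
    MCAST_PORT = 9006

    queue = Queue.new                # experiments submitted asynchronously by the UI
    sock  = UDPSocket.new

    Thread.new { queue << 'expt-1'; queue << 'expt-2' }   # stand-in for the UI

    loop do
      expt = queue.pop                                        # blocks until an experiment is queued
      sock.send("swap-in #{expt}", 0, MCAST_ADDR, MCAST_PORT) # Node Overseers start the VM
      sleep(TIME_SLICE)                                       # let the experiment run its slice
      sock.send("swap-out #{expt}", 0, MCAST_ADDR, MCAST_PORT) # Node Overseers pause the VM
    end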

Virtualization

• Why not process-level virtualization?
  – No isolation: processes must share FS, address space, network stack, etc.
  – No cohesive "schedulable entity"
• What other alternatives are there?
  – Other virtualization platforms (VMware, Xen, etc.)

TDM: Virtualization

• Virtualization
  – Experiment runs inside a User-Mode Linux VM
• Wireless configuration
  – Guest has no way to read or set wifi config!
  – Wireless extensions in the virtual driver relay ioctls to the host kernel

[Diagram: iwconfig in the Guest VM issues an ioctl() against the virt_net driver in the UML kernel, which tunnels the ioctl() to net_80211 in the Node's host kernel]
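The tunnel itself lives in the virtual driver and host kernel. Purely as an illustration of the relay idea, the following user-space Ruby sketch shows a host-side daemon that accepts a serialized wireless-configuration request and applies it with iwconfig; the socket path, message format, and interface name are assumptions, not the actual ORBIT mechanism.

    require 'socket'
    require 'json'
    require 'shellwords'

    # Host-side relay sketch (illustrative only): the guest's virtual driver
    # would forward wireless-extension requests here; we apply them to the
    # real interface on the host.
    HOST_IF   = 'wifi0'                       # assumed name of the physical card
    SOCK_PATH = '/var/run/wifi-relay.sock'    # assumed tunnel endpoint

    server = UNIXServer.new(SOCK_PATH)
    loop do
      client = server.accept
      req = JSON.parse(client.read)           # e.g. {"essid": "expA", "channel": 6}
      args = []
      args << "essid #{Shellwords.escape(req['essid'])}" if req['essid']
      args << "channel #{Integer(req['channel'])}"       if req['channel']
      system("iwconfig #{HOST_IF} #{args.join(' ')}") unless args.empty?
      client.close
    end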

TDM: Routing ingress

[Diagram: nodeHandler commands arrive as multicast on the experiment channel over eth (192.169.x.y); iptables DNAT rewrites 192.169 -> 192.168, and mrouted plus the routing table forward the packets to all VMs in the multicast group (192.168.x.y / 10.10.x.y), while wifi carries the experiment traffic]
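As a rough illustration, the ingress rewriting could be set up with a short script along these lines. The interface name and the use of the iptables NETMAP target for the 1:1 prefix rewrite are assumptions; the exact ORBIT rules may differ.

    # Illustrative ingress setup (assumed interface name and rule set).
    CONTROL_IF  = 'eth0'              # interface the nodeHandler commands arrive on
    HANDLER_NET = '192.169.0.0/16'    # destination prefix used by the nodeHandler
    VM_NET      = '192.168.0.0/16'    # prefix the experiment VMs actually use

    def sh(cmd)
      puts "+ #{cmd}"
      system(cmd) or raise "command failed: #{cmd}"
    end

    # 1:1 rewrite of the destination prefix: 192.169.x.y -> 192.168.x.y
    sh "iptables -t nat -A PREROUTING -i #{CONTROL_IF} -d #{HANDLER_NET} " \
       "-j NETMAP --to #{VM_NET}"

    # start a multicast router to replicate rewritten packets to the VM interfaces
    sh 'mrouted'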

Synchronization challenges

• Without tight synchronization, experiment packets might be dropped or misdirected
• Host: VMs should start/stop at exactly the same time
  – Time spent restoring wifi config varies
  – The operating system is not an RTOS
  – Ruby is interpreted and garbage-collected
• Network latency for overseer commands
  – Mean: 3.9 ms, Median: 2.7 ms, Std-dev: 6 ms
• Swap time between experiments

Synchronization: Swap time I

• Variables involved in swap time
  – Largest contributor: wifi configuration time
    • More differences in wifi configuration = longer config time
  – Network latency for master commands
  – Ruby latency in executing commands

Synchronization: Swap Time II

• We can eliminate wifi config latency and reduce the effects of network and Ruby latencies
• "Swap gaps"
  – A configuration timing buffer
  – VMs are not running, but incoming packets are still received and routed to the right place
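The following Ruby sketch illustrates the swap-gap idea. The VM control calls, the helper that applies the wifi configuration, and the 2-second gap length are all assumptions for illustration.

    SWAP_GAP = 2.0    # seconds reserved for reconfiguration (assumed value)

    # Hypothetical stand-ins for the UML control interface and the wifi tunnel.
    VM = Struct.new(:name) do
      def pause;  puts "pausing #{name}";  end
      def resume; puts "resuming #{name}"; end
    end

    def apply_wifi_config(config)
      system("iwconfig wifi0 essid #{config[:essid]} channel #{config[:channel]}")
    end

    def swap(outgoing, incoming, wifi_config)
      start = Time.now
      outgoing.pause                        # VM stops running, but its packets
                                            # are still received and routed
      apply_wifi_config(wifi_config)        # largest, most variable cost
      slack = SWAP_GAP - (Time.now - start)
      sleep(slack) if slack > 0             # pad to a fixed-length gap
      incoming.resume
    end

    swap(VM.new('expt-1'), VM.new('expt-2'), { essid: 'expB', channel: 11 })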

Ruby Network Latency

• Inside a VM, Ruby shows anomalous network latency
  – Example below: tcpdump output interleaved with a simple Ruby recv loop
  – No delays with C
  – Cause yet unknown

00.000 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
00.035 received 30 bytes
01.037 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 30
01.065 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 56
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 40
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 44
11.018 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
12.071 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
23.195 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
24.273 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
26.192 received 30 bytes
34.282 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
35.332 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
40.431 received 56 bytes
40.435 received 40 bytes
40.438 received 45 bytes
40.450 received 44 bytes
40.458 received 30 bytes
40.462 received 45 bytes
40.470 received 30 bytes
40.476 received 45 bytes
40.480 received 30 bytes
40.484 received 45 bytes

(delays of 24+ seconds between packet arrival in tcpdump and receipt in the Ruby loop)
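For reference, a minimal Ruby receive loop of the kind used in this measurement might look as follows; the multicast group and port are taken from the trace above, and everything else is an assumption rather than the original test script.

    require 'socket'
    require 'ipaddr'

    # Minimal multicast receive loop: join the group and print a timestamp
    # for every datagram received, mirroring the "received N bytes" lines.
    GROUP = '224.4.0.1'
    PORT  = 9006

    sock = UDPSocket.new
    sock.bind('0.0.0.0', PORT)
    membership = IPAddr.new(GROUP).hton + IPAddr.new('0.0.0.0').hton
    sock.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, membership)

    start = Time.now
    loop do
      data, _addr = sock.recvfrom(1500)
      printf("%06.3f received %d bytes\n", Time.now - start, data.bytesize)
    end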

UI screen shots

[Screenshots: the UI as seen during time slice 1 and time slice 2]

Performance: Runtime Breakdown

• Booting a VM is fast
• Each phase is slightly longer in the new system
  – Ruby network delay causes significant variance in the data set
  – The handler must approximate sleep times

Performance: Overall Duration

• Advantages
  – Boot duration
• Disadvantages
  – Swap gaps

Future work: short term

• Improving synchrony between nodes
  – More robust protocol
  – Porting Ruby code to C, where appropriate
• Dual interfaces
  – Nodes equipped with two cards
  – Switch between them during swaps, so that interface configuration can be preloaded at zero cost (sketched after the next slide)

Dual interfaces

[Diagram: wifi0 configured with ESSID "expA", mode B, channel 6; wifi1 with ESSID "expB", mode G, channel 11. Routing logic directs each VM nodeAgent's traffic to the current card, which the Node Overseer announces via its config ("current card is ...")]
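The following sketch illustrates the dual-interface idea, using the interface names, ESSIDs, and channels from the diagram above; the routing switch is a stand-in for the actual mechanism.

    # While the active card carries the current experiment, the idle card is
    # preconfigured for the next one, so the swap itself costs no wifi
    # configuration time; only the routing has to change.
    def configure(card, essid, channel)
      system("iwconfig #{card} essid #{essid} channel #{channel}")
    end

    active, idle = 'wifi0', 'wifi1'

    configure(active, 'expA', 6)      # current experiment runs on wifi0
    configure(idle,   'expB', 11)     # next experiment preloaded on wifi1

    # at swap time, only the routing logic changes: flip which card the VMs use
    active, idle = idle, active
    puts "current card is #{active}"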

Future work: long term

• Greater scalability
  – Allow each experiment to use, say, 100s of nodes to emulate 1000s of nodes
  – Intra-experiment TDM virtualization
  – Initial evaluation is quite promising

Intra-experiment TDM

Any communication topology can be modeled as a graph

Intra-experiment TDM

We can emulate all communication on the topology accurately, as long as we can emulate the reception behavior of the node with the highest degree

Intra-experiment TDM

Time-share of different logical nodes to physical facility nodes

[Diagram sequence: a testbed of 8 physical nodes hosts different subsets of the logical topology in time units 1, 2, and 3]

Some challenges

• How to perform the scheduling?
  – A mapping problem (see the sketch below)
• How to achieve the right degree of synchronization?
  – Use of a fast backbone and real-time approaches
• What are the implications of slowdown?
  – Bounded by the number of partitions
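To make the mapping problem concrete, here is a minimal Ruby sketch of the simplest possible intra-experiment TDM schedule; the node names and round-robin policy are assumptions, and a real scheduler would also account for the communication graph (for example, the reception behavior of the highest-degree node).

    # Logical nodes are partitioned into groups no larger than the physical
    # testbed; the groups run round-robin, one per time unit, so the slowdown
    # is bounded by the number of partitions.
    PHYSICAL_NODES = 8

    def schedule(logical_nodes, physical = PHYSICAL_NODES)
      partitions = logical_nodes.each_slice(physical).to_a
      partitions.each_with_index.map do |group, t|
        mapping = group.each_with_index.to_h { |ln, i| [ln, "node#{i + 1}"] }
        { time_unit: t + 1, mapping: mapping }
      end
    end

    # 20 logical nodes on an 8-node testbed -> 3 time units (slowdown <= 3x)
    schedule((1..20).map { |i| "logical#{i}" }).each do |slot|
      puts "time unit #{slot[:time_unit]}: maps #{slot[:mapping].size} logical nodes"
    end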

Conclusions

• Increased utilization through sharing

• More careful tuning needed for smaller time slices
  – Chipset vendor support needed for very small time slices
• Non-real-time apps, or apps with coarse real-time needs, are best suited to this virtualization approach
