Supporting VCR Functions in P2P VoD Services
Using Ring-Assisted Overlays
Bin Cheng, Hai Jin, Xiaofei Liao
Cluster and Grid Computing Lab, Services Computing Technology and System Lab
Huazhong University of Science and Technology, Wuhan, China
Motivation
• VoD is attractive, but costly – YouTube
  • about 20 million views a day
  • 200 TB per day
  • $1~2 million per month paid for the required bandwidth
• P2P is successful in file sharing and live streaming
  – BitTorrent
  – AnySee, CoolStreaming
• Can we provide VoD services based on similar P2P technologies?
  – Yes, but it is challenging!
Challenges
• Characteristics of VoD
  – Frequent joining/leaving
    • users can watch any video at any time
    • many views have short durations: over 40% last less than 10 minutes
  – Random seeks
    • not uncommon in Internet VoD, particularly for long videos
    • 60% of sessions have VCR operations
    • the average number of seeks per session is about 5
All of these make the P2P VoD overlay harder to organize.
[Figures: CDF (%) of the duration for each view (seconds); CDF (%) of the number of seeks]
Related work
• Tree-based overlay – P2Cast, P2VoD
  • load imbalance: the root node has to process more queries
• Mesh-based overlay – CoolStreaming
  • randomly organized
  • inefficient query for seeks
• Hybrid overlay [Tree + Mesh] – Tree-Assistant Gossip
  • complicated algorithms to maintain the overlay
  • consistency is not easily obtained
• What we do? RINDY: a ring-assisted overlay for P2P VoD
Outline
• Basic Architecture
• RINDY overlay
• Overlay maintenance
  – how to construct?
  – how to maintain?
  – how to support random seeks?
• Evaluation
  – Overhead
  – Comparison with P2VoD
• Conclusions
Basic Architecture
• Tracker
  – a well-known rendezvous point
  – a normal peer with the fixed position zero
  – bootstraps newly joined peers
• Source servers
  – store all videos, which are divided into a series of timeslots, each holding one second of content
  – provide media data for peers that cannot find it from other peers
• Peers
  – each peer maintains a sliding buffer window to manage the most recently received timeslots
  – two peers can share data only if their buffer windows overlap (see the sketch below)
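A minimal sketch of the sliding-buffer idea above (Python). The centering of the window on the playing position and the example positions are illustrative assumptions, not taken from the talk:

```python
# Sketch only: a peer's sliding buffer window over one-second timeslots,
# and the overlap test that decides whether two peers can exchange data.
# Assumption: the window is centered on the current playing position.

class Peer:
    def __init__(self, playing_position, window_size=300):
        self.playing_position = playing_position    # seconds into the movie
        self.window_size = window_size              # w = 300 s in the simulation

    def buffer_window(self):
        """Timeslot range [start, end) currently kept in the sliding window."""
        start = max(0, self.playing_position - self.window_size // 2)
        return (start, start + self.window_size)

def can_share(p1, p2):
    """Two peers can share data only if their buffer windows overlap."""
    s1, e1 = p1.buffer_window()
    s2, e2 = p2.buffer_window()
    return s1 < e2 and s2 < e1

# Example: peers playing at 15:30 and 16:20 can share; 45:00 is too far away.
print(can_share(Peer(930), Peer(980)))    # True
print(can_share(Peer(930), Peer(2700)))   # False
```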
RINDY Overlay (1)
• Structure of rings in RINDY
  – Each peer keeps a set of concentric, non-overlapping logical rings with power-law radii to organize its neighborhood (a small sketch follows)
    • based on the relative playing position d_j = cur_j - cur_i
    • the radius of the i-th ring is a*2^i, where a is the size of the buffer window
  – gossip-ring: the innermost ring, keeping some near-neighbors with relatively small distances
    • to improve sharing between peers with close playing positions and overlapping buffer windows
  – skip-rings: the outer rings, sampling some far-neighbors at different distances
    • to accelerate lookup operations and reduce the load on the tracker server by sampling some far-neighbors
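A small sketch of the ring placement rule. The boundary choice (ring 0 covers distances up to a, ring i covers (a*2^(i-1), a*2^i]) is our reading of the slide, and the positions are illustrative:

```python
import math

# Sketch: map a neighbor's relative playing distance d = cur_j - cur_i
# onto one of B concentric rings whose radii grow as a * 2^i,
# where a is the buffer window size in seconds.

def ring_index(d, a=300, num_rings=5):
    """Assumed boundaries: ring 0 covers [0, a]; ring i covers (a*2^(i-1), a*2^i]."""
    dist = abs(d)
    if dist <= a:
        return 0                          # gossip-ring: near-neighbors
    i = math.ceil(math.log2(dist / a))
    return min(i, num_rings - 1)          # clamp very far peers to the outermost ring

cur_i = 930                               # local peer plays at 15:30
for cur_j in (980, 700, 1700, 2700):      # neighbors' playing positions
    print(cur_j, "-> ring", ring_index(cur_j - cur_i))
```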
Overlay Structure (2)
• An example
  – buffer window size = 5 minutes
  – time length of the current movie = 45 minutes
[Figure: ring layout for an example peer, with neighbors at playing positions 0, 11:40, 15:30, 16:20, and 18:20 (movie length t = 45:00); close peers sit on the gossip-ring, farther peers on the skip-rings]
Overlay Maintenance (1)
• How to construct?
[Figure: a newly joining peer contacts the tracker to bootstrap and obtain an initial peer list]
Overlay Maintenance (2)
• How to maintain?
  – scoped gossip over rings to discover new close peers
  – Method: each peer announces its position to its near-neighbors periodically; a gossip message is discarded in three cases: 1) out of scope; 2) TTL = 0; 3) loop back (sketched below)
[Figure: scoped gossip over the gossip-ring; a message stops when there is no next hop, the next hop is out of scope, or TTL = 0, so gossip messages are only disseminated between close peers]
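The sketch mentioned above: the drop-or-forward rule for a gossip announcement. The message fields and peer names are hypothetical, and using the buffer window size as the gossip scope is an assumption:

```python
from dataclasses import dataclass, field

# Sketch: scoped gossip over the gossip-ring. An announcement is discarded
# when it is out of scope, its TTL is exhausted, or it loops back to a peer
# that has already seen it; otherwise it is relayed to other near-neighbors.

@dataclass
class Announce:
    origin_id: str
    origin_position: int                 # announcer's playing position (seconds)
    ttl: int = 5                         # maximum hop number for gossip messages
    visited: set = field(default_factory=set)

def handle_announce(local_id, local_position, msg, scope=300):
    """Return 'forward' or the reason the message is dropped."""
    if abs(msg.origin_position - local_position) > scope:
        return "drop: out of scope"
    if msg.ttl <= 0:
        return "drop: TTL = 0"
    if local_id in msg.visited:
        return "drop: loop back"
    msg.visited.add(local_id)
    msg.ttl -= 1
    return "forward"

m = Announce("peer-A", 930, ttl=2)
print(handle_announce("peer-B", 980, m))    # forward
print(handle_announce("peer-B", 980, m))    # drop: loop back
print(handle_announce("peer-C", 2000, m))   # drop: out of scope
```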
Overlay Maintenance (2)
• Multiple layers
  – Promote
    • Member -> Neighbor (gossip over rings) -> Partner (sharing data)
    • to promote better candidates to the upper layer
    • to accommodate the dynamics of the overlay topology
    • to improve the quality of streaming
  – Check
    • a partner is dropped if no data are shared for a pre-defined time
    • a neighbor is deleted if it is not updated for a pre-defined time
[Figure: three-layer membership. The tracker provides an initial peer list to the mCache (first layer); selected members form the Neighbor List of the ring-assisted control overlay (second layer), from which partners are selected into the Partner List of the data overlay (third layer); entries are excluded, dropped, and updated as the overlay changes (a sketch of this flow follows)]
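A rough sketch of the promote/check flow above. Reusing Tp and Tm from the parameter table as the idle thresholds is an assumption, and the class and method names are ours:

```python
import time

# Sketch: a peer's three membership layers. Candidates move upward
# (mCache member -> neighbor -> partner), and periodic checks drop
# partners that share no data and neighbors that stop updating.

class Membership:
    def __init__(self, partner_timeout=180, neighbor_timeout=300):
        self.mcache = {}                   # layer 1: members from the tracker / gossip
        self.neighbors = {}                # layer 2: ring-assisted control overlay
        self.partners = {}                 # layer 3: data overlay (actively sharing)
        self.partner_timeout = partner_timeout     # Tp = 3 minutes
        self.neighbor_timeout = neighbor_timeout   # Tm = 5 minutes

    def add_member(self, peer_id, position):
        self.mcache[peer_id] = position

    def promote_to_neighbor(self, peer_id):
        """Gossip found this member close to our playing position."""
        self.neighbors[peer_id] = time.time()

    def promote_to_partner(self, peer_id):
        """Data is actually flowing to/from this neighbor."""
        self.partners[peer_id] = time.time()

    def check(self):
        """Drop idle partners and stale neighbors."""
        now = time.time()
        self.partners = {p: t for p, t in self.partners.items()
                         if now - t < self.partner_timeout}
        self.neighbors = {p: t for p, t in self.neighbors.items()
                          if now - t < self.neighbor_timeout}
```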
Overlay Maintenance (3)
• How to look up new partners?
  – Step 1: send a query to look up some peers close to the destination position d
  – Step 2: at each hop, forward the query to the neighbor closest to d, until |P - d| < W (the buffer window size); a greedy-forwarding sketch follows
  – Step 3: the reply includes new near-neighbors and far-neighbors
[Figure: a seek query is forwarded ring by ring until it reaches the new neighborhood around the target position t]
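The greedy-forwarding sketch referenced in Step 2: each hop forwards the query to the known neighbor whose playing position is closest to the destination d, stopping once the current peer is within one buffer window W of d. The toy topology is illustrative, not from the talk:

```python
# Sketch: forwarding a seek query toward destination position d (seconds).

def lookup(start, d, positions, neighbors, w=300, max_hops=32):
    """positions: peer_id -> playing position; neighbors: peer_id -> list of
    peer_ids (the peer's rings, flattened). Returns (peer_id, hop count)."""
    current, hops = start, 0
    while hops < max_hops:
        if abs(positions[current] - d) < w:
            return current, hops              # reached the new neighborhood
        nxt = min(neighbors[current], key=lambda p: abs(positions[p] - d))
        if abs(positions[nxt] - d) >= abs(positions[current] - d):
            return current, hops              # no closer neighbor is known
        current, hops = nxt, hops + 1
    return current, hops

positions = {"A": 100, "B": 700, "C": 1500, "D": 2600}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(lookup("A", 2500, positions, neighbors))   # ('D', 3)
```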
Simulation Setup (1)
• Topology generated by GT-ITM
  – 1000 peer nodes based on the transit-stub model
  – 3 transit domains, each with 5 transit nodes
  – each transit node is connected to 6 stub domains, each with 12 stub nodes
• Bandwidth
  – between two transit nodes: 100 Mbps
  – between a transit node and a stub node: 10 Mbps
  – heterogeneous upload bandwidths of stub nodes: 4 Mbps, 1 Mbps, and 512 Kbps
• Movie
  – one movie with a 60 KB/s streaming rate and 60 minutes of content
Simulation Setup (2)
• Parameter list
Parameter | Value and description
w | 300 seconds, buffer window size
B | 5, maximum ring number
a | 300, radius coefficient of rings
TTL | 5, maximum hop number for gossip messages
t | 30 seconds, gossip period
m | 15, number of near-neighbors in each gossip-ring
k | 1, number of front far-neighbors in each skip-ring
l | 1, number of back far-neighbors in each skip-ring
n | 10, number of partners
Tp | 3 minutes, checking period of partners in the partner list
Tm | 5 minutes, checking period of members in the mCache
T | 3600 seconds, length of a sample movie
Simulation Results
• Control overhead (1)
  – the overhead increases very slowly after the total number of peers reaches 400
  – each peer only needs to process about 5 messages per second
[Figure: message number vs. node number for ANNOUNCE, FORWARD, and EXCHANGE messages; labeled totals: 42, 103, 119, 133, 137, 146]
Simulation Results
• Control overhead (2)
  – 10 percent of peers join, leave the network, or perform VCR operations randomly
  – once the total number of peers reaches about 400, the average control overhead needed to accommodate overlay changes stays at a constant level
The control overhead caused by peer dynamics is independent of the network size.
Simulation Results
• Control overhead (3)
  – normal gossip
  – scoped gossip over rings
[Figure: message number vs. node number for normal gossip and ring gossip]
Our gossip protocol obtains a 15-66% reduction in control overhead.
Simulation Results
• Server stress (1)
  – to examine the stress of the source server under different arrival rates
    • a lower arrival rate leads to higher server stress
    • for arrival rates between 0.1 and 10, the server stress remains unchanged
    • beyond 10, it rises noticeably as the arrival rate increases
[Figure: server stress (stream number) vs. arrival rate]
Simulation Results
• Server stress (2)
  – to examine the stress of the source server under different buffer window sizes
    • for a low arrival rate, increasing the buffer window size significantly improves performance
    • for popular channels, a larger buffer window cannot bring more benefits
[Figure: server stress (stream number) vs. buffer window size, for arrival rate = 1 and arrival rate = 0.01]
RINDY vs. P2VoD (1)
• Server stress (3)
  – for P2VoD, the server stress increases as the number of nodes increases, due to inefficient utilization of upload bandwidth
RINDY vs. P2VoD (2)
• Quality of streaming
  – evaluated by the average timeslot missing rate (TMR)
  – randomly stop some joined peers at a rate of 2 peers per minute
RINDY achieves better reliability by using a gossip protocol over rings and retrieving data packets from multiple partners.
RINDY vs. P2VoD (3)
• Latency
  – in P2VoD, a peer takes more time to join the overlay, and the joining latency increases with the number of nodes
  – in contrast to P2VoD, RINDY significantly decreases the latency of random seeks
[Figures: average hops vs. node number, RINDY vs. P2VoD]
RINDY vs. P2VoD (4)
• Load balance
  – RINDY obtains better load balance among peers during VCR operations
  – main reasons behind this result:
    • for the tree-based overlay network, all lookup operations start from the root node
    • for RINDY, all lookup operations start from the local rings
[Figure: lookup message number per node ID, P2VoD vs. RINDY]
Conclusions
• A novel ring-assisted overlay, namely RINDY, has the potential to provide large-scale P2P VoD services.
• RINDY can efficiently support VCR functions, particularly random seeks.
• By keeping all neighbors in concentric rings with power-law radii, each peer can quickly locate new partners.
• A scoped gossip protocol is used to discover new close neighbors and helps RINDY decrease its control overhead.
• The weak consistency that comes with gossip makes RINDY more reliable than tree-based overlays.
• Simulation results show that RINDY outperforms P2VoD in terms of load balance and join and seek latency.
http://www.gridcast.cn
Thank you! Q&A
The End