TRANSCRIPT
Dynamic Adaptive Streaming over HTTP (DASH) Application Protocol: Modeling and Analysis
Dr. Jim Martin
Associate Professor
School of Computing
Clemson University
http://www.cs.clemson.edu/~jmarty
Talk Overview

• Introduction
• Background
• Related work and problem formulation
• Methodology
• Results and analysis
• Conclusions
• Future work
Introduction

• We've all heard about Netflix.
• We know Netflix consumes a significant portion of downstream bandwidth.
• We know that a standards-based approach for adaptive HTTP-based streaming is likely to be widely deployed.
Background
[Figure: throughput (Mbps) vs. time (seconds) for a single TCP connection (TCP Cx 1), plotted at 1.0-second and 10-second sample intervals. Buffering state: 37.02 Mbps between 10 and 30 seconds; steady state: 4.39 Mbps from 30 to 80 seconds.]
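The contrast between the 1.0-second and 10-second curves above comes from simple windowed averaging of the capture. The following sketch (illustrative only, not the analysis tool used in the study) shows how the same packet trace yields very different pictures at the two timescales:

```python
def windowed_throughput(samples, window):
    """Average throughput per window.

    samples: list of (timestamp_seconds, bytes) records, e.g. extracted
    from a packet capture. Returns a list of (window_start, mbps) tuples.
    Illustrative sketch only; not the analysis code from the talk.
    """
    if not samples:
        return []
    end = max(t for t, _ in samples)
    result = []
    start = 0.0
    while start < end:
        total_bytes = sum(b for t, b in samples if start <= t < start + window)
        result.append((start, total_bytes * 8 / window / 1e6))  # bytes -> Mbps
        start += window
    return result
```

A short burst (the buffering state) dominates a 1-second window but is smoothed away in a 10-second window, which is why the two curves in the figure diverge so sharply early in the session.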
Background

• Simulation Model
[Figure: simulation model diagram]
Background

• Playback buffer capacity: configured in units of seconds. The client issues requests to keep the playback buffer within an operating range defined by two internal parameters, Highwatermark and Lowwatermark. The default playback buffer capacity is 90 seconds.
• Number of outstanding client requests: the maximum number of requests that can be outstanding at any given time. The default is 2 segments.
• Segment size: the granularity of the data exchanges between the DASH server and the client. The default is 2 seconds.
• Adaptation threshold: tunes the client's sensitivity to changes in observed throughput.
• Discrete bitrate encoder options: the range of possible bitrate encoder values, in bps: 768000, 1500000, 2200000, 2600000, 3200000, 3800000, 4200000, 4800000
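These parameters can be tied together in a minimal sketch of client-side rate selection and request pacing. The bitrate ladder, the 90-second capacity, and the limit of 2 outstanding requests come from the slide; the specific high-watermark value and the interpretation of the adaptation threshold as a multiplicative safety factor are assumptions for illustration:

```python
# Bitrate ladder from the slide, in bps.
BITRATES = [768000, 1500000, 2200000, 2600000,
            3200000, 3800000, 4200000, 4800000]

def select_bitrate(estimated_bps, adaptation_threshold=0.8):
    """Pick the highest encoding not exceeding a fraction of the estimate.

    The 0.8 factor stands in for the slide's 'adaptation threshold'; its
    actual semantics inside the Netflix client are not specified here.
    """
    budget = estimated_bps * adaptation_threshold
    feasible = [r for r in BITRATES if r <= budget]
    return feasible[-1] if feasible else BITRATES[0]

def should_request(buffer_seconds, outstanding, capacity=90.0,
                   high_watermark=80.0, max_outstanding=2):
    """Issue a new segment request while below the high watermark.

    capacity and max_outstanding match the slide's defaults (90 s, 2);
    high_watermark is an assumed value inside the operating range.
    """
    return buffer_seconds < min(high_watermark, capacity) and \
           outstanding < max_outstanding
```

For example, with a 5 Mbps throughput estimate the budget is 4 Mbps, so the client would settle on the 3.8 Mbps encoding rather than the 4.2 or 4.8 Mbps options.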
Related Work

• Standards [ISO11, 3GPP12] and basic overviews [STO11, SOD11]
• Basic assessments: [ABD11, AN11, LMT12, RLB11, KKH11]
• Improvements: [ACH12, CMP11, EKG11, LBG11, SSH11]
• Metrics: [OS12, DAJ11]
• Smartphone specifics: [NEE11A, NEE11B, XST09]
• The most recent work provides insight into how multiple CDNs are used: "Unreeling Netflix: Understanding and Improving Multi-CDN Movie Delivery"
Problem Formulation

• We wanted to characterize the bandwidth consumption and behaviors of an early DASH implementation.
  • Are there differences across a range of client devices?
• Open questions:
  • How does application-level control coexist with TCP control?
    • Does it give up too much bandwidth?
    • Is it well behaved and stable during periods of volatile network conditions?
  • What does an 'optimal design' mean?
  • What are the main design parameters at the client that impact an 'optimal design'?
Methodology

• Measurement study: conduct controlled experiments using live Netflix sessions.
  • Analyze tcpdump packet captures.
• Simulation study: developed a simulation model of a DASH application in ns-2.
  • Used simulation to better understand the empirical results.
  • Used simulation to better understand the impact of the algorithms and the design of the player.
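A first step in the tcpdump analysis is reducing the capture to (timestamp, bytes) records per flow. The sketch below assumes text output in roughly the form produced by `tcpdump -tt -n -q`; the exact line format varies with tcpdump version and flags, so the regular expression here is an illustrative assumption, not the study's actual tooling:

```python
import re

# Matches lines like:
#   1334167000.123456 IP 10.0.0.2.443 > 10.0.0.9.50122: tcp 1448
# (roughly `tcpdump -tt -n -q -r capture.pcap` output; treat as a sketch).
TCP_LINE = re.compile(r'^(\d+\.\d+) IP (\S+) > (\S+): tcp (\d+)')

def parse_capture(lines):
    """Yield (timestamp, src, dst, payload_bytes) for matching TCP lines."""
    for line in lines:
        m = TCP_LINE.match(line)
        if m:
            yield float(m.group(1)), m.group(2), m.group(3), int(m.group(4))

def bytes_per_flow(lines):
    """Total payload bytes per (src, dst) pair."""
    totals = {}
    for _, src, dst, nbytes in parse_capture(lines):
        totals[(src, dst)] = totals.get((src, dst), 0) + nbytes
    return totals
```

From these per-flow records, the bandwidth-consumption curves shown later in the talk follow by windowed averaging over 2-second and 50-second intervals.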
Measurement Testbed

[Figure: measurement testbed diagram]
Measurement Scenarios

Scenario 1: Ideal with 3% artificial loss (5 minutes no impairment, 5 minutes 3% loss, 5 minutes no impairment)
Scenario 2: Ideal with 40% artificial loss (5 minutes no impairment, 5 minutes 40% loss, 5 minutes no impairment)
Scenario 3: Stepped loss (200 seconds at 0% loss, 100 seconds at 5% loss, 100 seconds at 10% loss, 100 seconds at 0% loss)
Scenario 4: Chaotic loss (300 seconds at 0% loss, 300 seconds at variable loss, 300 seconds at 0% loss)
Scenario 5: Competing flows (5 minutes no competing TCP, 5 minutes with a competing flow, last 5 minutes with no competing TCP)
  5-1: Xbox wireless device, downstream competing TCP
  5-2: Roku wireless device, downstream competing TCP
  5-3: Xbox wireless device, upstream competing TCP
  5-4: Roku wireless device, upstream competing TCP
Simulation Network Diagram

[Figure: client nodes 1 through n connect to a gateway (GW) router over 1000 Mbps links with 0.5 ms propagation delay (1 Gbps upstream and downstream data rates); the GW router connects to a second router over a 1 Gbps or 10 Mbps link with 6.5 ms propagation delay, behind which sit the Netflix servers and the competing traffic generators/sinks.]
Simulation Scenarios

Scenario 1: Ideal with 3% artificial loss (5 minutes no impairment, 5 minutes 3% loss, 5 minutes no impairment)
Scenario 2: Ideal with 40% artificial loss (5 minutes no impairment, 5 minutes 40% loss, 5 minutes no impairment)
Scenario 3: Stepped loss (200 seconds at 0% loss, 100 seconds at 5% loss, 100 seconds at 10% loss, 100 seconds at 0% loss)
Scenario 4: Chaotic loss (300 seconds at 0% loss, 300 seconds at variable loss, 300 seconds at 0% loss)
Scenario 5: A mix of Netflix sessions and competing TCP flows over a wired network (no cable)
Scenario 6: A mix of Netflix sessions and competing TCP flows over a cable network
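The loss schedules for scenarios 1 through 3 are fully specified above and can be written directly as a function of elapsed time, which is convenient when driving an impairment tool or an ns-2 error model. This is a sketch of the schedules as stated on the slides; scenario 4's variable-loss profile is not specified, so it is left unimplemented:

```python
def loss_rate(scenario, t):
    """Packet loss rate at time t (seconds) for the loss scenarios above.

    Scenarios 1-3 follow the schedules given on the slide; scenario 4's
    'chaotic' variable-loss profile is unspecified and not modeled here.
    """
    if scenario == 1:   # 5 min clean, 5 min at 3% loss, 5 min clean
        return 0.03 if 300 <= t < 600 else 0.0
    if scenario == 2:   # 5 min clean, 5 min at 40% loss, 5 min clean
        return 0.40 if 300 <= t < 600 else 0.0
    if scenario == 3:   # 200 s clean, 100 s at 5%, 100 s at 10%, 100 s clean
        if t < 200:
            return 0.0
        if t < 300:
            return 0.05
        if t < 400:
            return 0.10
        return 0.0
    raise ValueError("loss profile not specified for scenario %r" % scenario)
```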
Measurement Results

[Figure: bandwidth consumption (Mbps) vs. time (0-900 seconds) for Scenario 1, each trace plotted at 2.0-second and 50-second sample intervals:
a. Trace 1-1 (Xbox wired)
b. Trace 1-2 (Windows wired)
c. Trace 1-3 (Xbox WiFi)
d. Trace 1-4 (Roku wireless)
e. Trace 1-5 (Android wireless)]

Steady-state intervals:
1: 50-300 seconds
2: 400-550 seconds
3: 750-900 seconds

Results, (mean bandwidth, standard deviation) in Mbps:
(4.32, 0.33) (2.95, 0.19) (2.91, 0.71)
(4.44, 0.18) (2.08, 0.17) (3.50, 0.44)
(4.32, 0.39) (2.71, 0.12) (3.11, 0.78)
(4.27, 0.52) (3.35, 0.10) (3.59, 0.67)
(4.25, 0.45) (2.65, 0.21) (3.55, 0.42)
Measurement Results

[Figure: bandwidth consumption (Mbps) vs. time (0-900 seconds) for Scenario 2, each trace plotted at 2.0-second and 50-second sample intervals:
a. Trace 2-1 (Xbox wired)
b. Trace 2-2 (Windows wired)
c. Trace 2-3 (Xbox WiFi)
d. Trace 2-4 (Roku wireless)
e. Trace 2-5 (Android wireless)]
Measurement Results

[Figure: bandwidth consumption (Mbps) vs. time (0-500 seconds) for Scenario 3, each trace plotted at 2.0-second and 50-second sample intervals:
a. Trace 3-1 (Xbox wired)
b. Trace 3-2 (Windows wired)
c. Trace 3-3 (Xbox WiFi)
d. Trace 3-4 (Roku wireless)]
Measurement Results

[Figure: bandwidth consumption (Mbps) vs. time (0-900 seconds) for Scenario 4, each trace plotted at 2.0-second and 50-second sample intervals:
a. Trace 4-1 (Xbox wired)
b. Trace 4-2 (Windows wired)
c. Trace 4-3 (Xbox WiFi)
d. Trace 4-4 (Roku wireless)]
Measurement Results

[Figure: throughput (Mbps) vs. time (seconds) of the individual TCP connections (TCP Cx 1-3) in Scenario 5, plotted at 2.0-second and 30-second sample intervals:
a. Trace 5-1 (Xbox wireless, downstream TCP)
b. Trace 5-3 (Xbox wireless, upstream TCP)
c. Trace 5-2 (Roku wireless, downstream TCP)
d. Trace 5-4 (Roku wireless, upstream TCP)]
Measurement Results: Conclusions

• The basic response to network congestion is similar across the devices studied. However, the steady-state bandwidth consumed by different devices during periods of sustained congestion varied, as did the details of how each client consumed available bandwidth once congestion subsided.
• Playback buffer sizes ranged from 30 seconds (on the Android WiFi device) to 4 minutes (on the Windows wired device). The size of the playback buffer is crucial in masking the effects of network congestion on perceived quality. The average client request size ranged from 1.8 MB to 3.57 MB (results not shown in the paper).
• The Netflix adaptation appears to defer to TCP control during periods of heavy, sustained network congestion. However, the application-level algorithm is clearly intertwined with TCP control during periods of volatile network conditions.
Simulation Results

• Measurement vs. Simulation
[Figures: measurement versus simulation comparisons]
Simulation Results: Conclusions

• We see similar behaviors between the measurement and simulation experiments. Some issues we are still looking into:
  • The maximum Netflix rate in simulation is 4.8 Mbps, while the measurement results show less than 4.4 Mbps.
  • TCP throughput in the simulation model (without Netflix) is lower than what we observe on Linux systems.
• We confirmed fairness and expected behaviors when multiple flows competed for bandwidth.
Conclusions and Next Steps

• We saw similar basic behaviors across devices, but differences in the details.
  • Further work is needed to confirm whether these are due to stack differences or Netflix implementation differences.
• Netflix is very well behaved: it aggressively drops its bandwidth to below 'TCP-fair' levels.
  • It is perhaps too conservative in how it uses bandwidth that becomes available.
• It is very difficult to judge whether the adaptation is 'doing the right thing' without taking perceived quality into account.
• Next steps:
  • A user study
  • Focus on fairness issues
  • Focus on predicting future bandwidth
  • Focus on enhancements to TCP that provide incentives for very well behaved applications