AWS re:Invent – MED305: Achieving Consistently High Throughput for Very Large Data Transfers with...


TRANSCRIPT

PRESENTERS

Michelle Munson – Co-founder and CEO, Aspera – [email protected]
Jay Migliaccio – Director of Cloud Technologies, Aspera – [email protected]
Stéphane Houet – Product Manager, EVS Broadcast Equipment – [email protected]

AGENDA

•  Quick Intro to Aspera
•  Technology Challenges
•  Aspera Direct-to-Cloud Solution
•  Demos
•  FIFA Live Streaming Use Case
•  Q&A

[Diagram: cloud object storage with Direct-to-Cloud – key/value access over HTTP; H(URL) resolves to replicas R1, R2, R3 on data nodes; a master database maps object IDs to file data replicas and stores metadata]
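To make the key-to-replica idea in the diagram concrete, here is a generic consistent-hashing sketch of one common way such a mapping can be done. The node names, replica count, and hashing scheme are assumptions for illustration; the system shown actually uses a master database for the mapping and may differ.

```python
import hashlib
from bisect import bisect_right

# Hypothetical data nodes holding object replicas (names are made up).
NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3  # R1, R2, R3 as in the diagram


def node_hash(name: str) -> str:
    return hashlib.sha1(name.encode()).hexdigest()


# Arrange the nodes around a hash ring once, sorted by their hash.
RING = sorted(NODES, key=node_hash)
RING_HASHES = [node_hash(n) for n in RING]


def replicas_for(url: str) -> list[str]:
    """H(URL) picks a position on the ring; the next REPLICAS nodes hold the copies."""
    start = bisect_right(RING_HASHES, node_hash(url))
    return [RING[(start + i) % len(RING)] for i in range(REPLICAS)]


print(replicas_for("bucket/videos/match-01.mp2ts"))  # three replica locations for one key
```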

TRANSFER DATA TO CLOUD OVER WAN – EFFECTIVE THROUGHPUT

•  Multi-part HTTP: under typical internet conditions (50–250 ms latency, 0.1–3% packet loss), 15 parallel HTTP streams achieve <10 to 100 Mbps, depending on distance.

•  Aspera FASP: FASP transfer over the WAN to the cloud reaches up to 1 Gbps* – 10 TB transferred per 24 hours.

* Per EC2 Extra Large instance, independent of distance
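Most of the gap between those two rows comes from TCP's sensitivity to round-trip time and packet loss. The sketch below uses the Mathis et al. approximation (throughput ≈ MSS / (RTT · √loss)) to estimate single-stream and 15-stream HTTP throughput under the conditions listed above; the MSS value is an assumption and the output is an estimate, not a measurement.

```python
from math import sqrt

MSS_BITS = 1460 * 8  # typical TCP maximum segment size, in bits


def tcp_throughput_mbps(rtt_s: float, loss: float) -> float:
    """Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(loss))."""
    return MSS_BITS / (rtt_s * sqrt(loss)) / 1e6


# The "typical internet conditions" from the slide: 50-250 ms RTT, 0.1-3% loss.
for rtt_ms in (50, 100, 250):
    for loss in (0.001, 0.01, 0.03):
        single = tcp_throughput_mbps(rtt_ms / 1000, loss)
        print(f"RTT {rtt_ms:3d} ms, loss {loss:.1%}: "
              f"1 stream ~{single:6.1f} Mbps, 15 streams ~{15 * single:7.1f} Mbps")
```

The estimates land roughly in the "<10 to 100 Mbps" range quoted on the slide for parallel HTTP.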

LOCATION AND AVAILABLE BANDWIDTH | AWS ENHANCED UPLOADER | ASPERA FASP
Montreal to AWS East (100 Mbps shared internet connection) | 30 minutes (7–10 Mbps) | 3.7 minutes (80 Mbps) – 9x speed-up
Rackspace in Dallas to AWS East (600 Mbps shared internet connection) | 7.5 minutes (38 Mbps) | 0.5 minutes (600 Mbps) – 15x speed-up

Other pains: the "Enhanced Bucket Uploader" requires a Java applet, very large transfers time out, there is no reliable resume for interrupted transfers, and no downloads.
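For reference, the "AWS HTTP multipart" baseline in the tables that follow corresponds to what the AWS SDK does when it splits a large object into parallel part uploads. A minimal boto3 sketch; the bucket, key, file path, and tuning values are placeholders, and the 15 concurrent parts simply mirror the "15 parallel HTTP streams" scenario above.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder values -- substitute your own bucket, key, and local file.
BUCKET, KEY, LOCAL_FILE = "example-bucket", "uploads/big-file.bin", "/tmp/big-file.bin"

# Multipart upload: 16 MB parts, up to 15 part uploads in flight at once.
config = TransferConfig(multipart_threshold=16 * 1024 * 1024,
                        multipart_chunksize=16 * 1024 * 1024,
                        max_concurrency=15)

s3 = boto3.client("s3")
s3.upload_file(LOCAL_FILE, BUCKET, KEY, Config=config)
```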

EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 4.4 GB / 15,691 FILES (AVERAGE SIZE 300 KB)

LOCATION AND AVAILABLE BANDWIDTH | AWS HTTP MULTIPART | ASPERA ASCP
New York to AWS East Coast (1 Gbps shared connection) | 334 seconds (113 Mbps) | 107 seconds (353 Mbps) – 3.3x speed-up
New York to AWS West Coast (1 Gbps shared connection) | 1,032 seconds (36 Mbps) | 110 seconds (353 Mbps) – 9.4x speed-up

EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 8.7 GB / 18,995 FILES (AVERAGE SIZE 9.6 MB)

LOCATION AND AVAILABLE BANDWIDTH | AWS HTTP MULTIPART | ASPERA ASCP
New York to AWS East Coast (1 Gbps shared connection) | 477 seconds (156 Mbps) | 178 seconds (420 Mbps) – 2.7x speed-up
New York to AWS West Coast (1 Gbps shared connection) | 967 seconds (77 Mbps) | 177 seconds (420 Mbps) – 5.4x speed-up
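The effective-throughput and speed-up figures in these tables follow directly from payload size and elapsed time. A small check against the 4.4 GB rows above; the small differences from the table are consistent with GB versus GiB rounding of the payload size.

```python
def effective_mbps(gigabytes: float, seconds: float) -> float:
    """Effective throughput = payload size in bits / elapsed transfer time."""
    return gigabytes * 8 * 1000 / seconds  # GB -> megabits, then per second


# 4.4 GB payload, New York to AWS East Coast (values from the first table above).
http_mbps = effective_mbps(4.4, 334)   # ~105 Mbps (table: 113 Mbps)
ascp_mbps = effective_mbps(4.4, 107)   # ~329 Mbps (table: 353 Mbps)
print(f"{http_mbps:.0f} Mbps vs {ascp_mbps:.0f} Mbps "
      f"-> speed-up ~{ascp_mbps / http_mbps:.1f}x")  # ~3.1x (table rounds to 3.3x)
```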

– Maximum-speed single-stream transfer

– Support for large file and directory sizes in a single transfer

– Network and disk congestion control provides automatic adaptation of transmission speed to avoid network congestion and disk overdrive

– Automatic retry and checkpoint resume of any transfer from the point of interruption (sketched below)

– Built-in over-the-wire encryption and encryption at rest (AES-128)

– Support for authenticated Aspera docroots using private cloud credentials and platform-specific role-based access control, including Amazon IAM

– Seamless fallback to HTTP(S) in restricted network environments

– Concurrent transfer support scaling up to ~50 concurrent transfers per VM instance
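To illustrate the checkpoint-resume idea from the list above, here is a generic sketch, not Aspera's implementation; the file names and the "bytes already on disk" checkpoint are assumptions. A production implementation would also verify the already-written data (for example with checksums) before resuming.

```python
import os


def resumable_copy(src: str, dst: str, chunk: int = 4 * 1024 * 1024) -> None:
    """Copy src to dst, restarting from the bytes already written on a retry."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0  # checkpoint = bytes on disk
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)                      # resume from the point of interruption
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)


# On an interrupted run, calling resumable_copy(...) again picks up where it left off.
```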

[Diagram: auto-scaling transfer cluster with a management and reporting tier and a scale DB persistence layer – new clients connect to the "available" pool, existing client transfers continue, and nodes whose utilization rises above the high watermark leave the pool]

Console
•  Collect / aggregate transfer data
•  Transfer activity / reporting (UI, API)

Shares
•  User management
•  Storage access control

KEY COMPONENTS

•  Cluster Manager for auto-scale and scaled DB
•  Console management UI + reporting API
•  Enhanced client for Shares
•  Authorizations – unified access to files/directories (browser, GUI, command line, SDK)

Scaling Parameters
•  Min/max number of t/s
•  Utilization low/high watermark
•  Min number of t/s in "available" pool
•  Min number of idle t/s in "available" pool

Cluster Manager
•  Monitor cluster nodes
•  Determine eligibility for transfer scale up / down
•  Create / remove DB with replicas
•  Add / remove node
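A minimal sketch of how the scaling parameters above could drive scale-up/scale-down decisions. The thresholds, field names, and the reading of "t/s" as transfer servers are assumptions, not the actual Cluster Manager logic.

```python
from dataclasses import dataclass


@dataclass
class ScalingParams:
    min_nodes: int = 2           # min/max number of transfer servers
    max_nodes: int = 20
    util_low: float = 0.30       # utilization low/high watermark
    util_high: float = 0.80
    min_available: int = 2       # min number of nodes in the "available" pool
    min_idle_available: int = 1  # min number of idle nodes in that pool


def scale_decision(p: ScalingParams, nodes: int, utilization: float,
                   available: int, idle_available: int) -> int:
    """Return +1 to add a node, -1 to remove one, 0 to hold."""
    if nodes < p.max_nodes and (utilization > p.util_high
                                or available < p.min_available
                                or idle_available < p.min_idle_available):
        return +1
    if nodes > p.min_nodes and utilization < p.util_low:
        return -1
    return 0


print(scale_decision(ScalingParams(), nodes=5, utilization=0.9,
                     available=1, idle_available=0))  # -> +1 (scale up)
```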

[Diagram: live workflow – .mp2ts contribution over FASPStream; HLS adaptive bitrate delivery]

•  Near-live experiences have highly bursty processing and distribution requirements

•  Transcoding alone is expected to generate hundreds of varieties of bitrates and formats for a multitude of target devices

•  Audiences peak at millions of concurrent streams and die off shortly after the event

•  Near "zero delay" in the video experience is expected

•  "Second screen" depends on near-instant access / instant replay, which requires reducing end-to-end delay

•  Linear transcoding approaches simply cannot meet demand (and are too expensive for short-term use!)

•  Parallel, "cloud" architectures are essential

•  Investing in on-premises bandwidth for distribution is also impractical

•  Millions of streams equals terabits per second


•  Scale-out high-speed transfer by Aspera (FASP)
•  Scale-out transcoding by Elemental On Demand
•  Multi-screen capture and distribution by EVS

EVS Broadcast Equipment
•  Belgian company
•  90%+ market share of sports OB trucks
•  21 offices
•  500+ employees (50%+ in R&D)

With the kind permission of HBS

Live Streaming: REAL-TIME CONSTRAINT!
•  6 feeds @ 10 Mbps = 60 Mbps
•  × 2 games at the same time
•  × 2 for safety
•  = 240 Mbps

VOD multicam near-live replays:
•  Up to 24 clips @ 10 Mbps = 240 Mbps
•  × 2 games at the same time
•  = 480 Mbps

WE NEED A SOLUTION!

Meanwhile, a single TCP stream over the WAN is limited to:
Maximum Throughput (bps) = TCP Window Size (bits) / Latency (s)
(65,535 bytes × 8) / 0.2 s = 2,621,400 bps ≈ 2.62 Mbps
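The same numbers, spelled out; the values come from the slide, and the 65,535-byte window is the classic TCP receive window without window scaling.

```python
# Required ingest bandwidth, from the slide (all values in Mbps).
live_hls = 6 * 10 * 2 * 2   # 6 feeds @ 10 Mbps, x2 games, x2 safety -> 240
vod_clips = 24 * 10 * 2     # up to 24 clips @ 10 Mbps, x2 games     -> 480

# Ceiling of one TCP stream with a 65,535-byte window at 200 ms round-trip time.
single_tcp_mbps = 65535 * 8 / 0.2 / 1e6   # ~2.62 Mbps
print(live_hls, vod_clips, round(single_tcp_mbps, 2))
```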

•  6 live streams – HLS streaming of 6 HD streams to tablets & mobiles per match

•  20+ replay cameras – on-demand replays of selected events from up to 20+ cameras on the field

•  4,000+ VoD elements – exclusive on-demand multimedia edits


•  Scale-out high-speed transfer by Aspera (FASP)
•  Scale-out transcoding by Elemental On Demand
•  Multi-screen capture and distribution by EVS

27+ TB of video data

KEY METRICS | TOTAL OVER 62 GAMES | AVERAGE PER GAME
Transfer time (hours) | 13,857 | 216
GB transferred | 27,237 | 426
Number of transfers | 14,073 | 220
Number of files transferred | 2,706,922 | 42,296

•  < 14,000 hrs video transferred
•  200 ms of latency over WAN
•  10% packet loss over WAN

Live streams: 660,000 minutes
Transcoded output (× 4.3): 2.8 million minutes
Delivered streams (× 321): 15 million hours
Unique viewers: 35 million
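The multipliers above compose as follows, using the slide's figures.

```python
live_minutes = 660_000
transcoded_minutes = live_minutes * 4.3          # ~2.84 million minutes of output
delivered_hours = transcoded_minutes * 321 / 60  # ~15.2 million hours delivered
print(f"{transcoded_minutes / 1e6:.1f}M minutes transcoded, "
      f"{delivered_hours / 1e6:.1f}M hours delivered")
```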