Uber Mobility - High Performance Networking


Page 1: Uber mobility - High Performance Networking

Uber Networking: Challenges and Opportunities
Ganesh Srinivasan & Minh Pham

Page 2: Uber mobility - High Performance Networking
Page 3: Uber mobility - High Performance Networking

Where do users need Uber the most?

Rider Application

Partner Application

On The Road and On The Go

Page 4: Uber mobility - High Performance Networking

Last-Mile Latency
Latency, latency everywhere

Control Plane Latency

User Plane Latency

Core Network Latency

Internet Routing Latency

Page 5: Uber mobility - High Performance Networking

Last-Mile Latency (cont.)
Control and User-Plane Latency

                 3G              4G
Control Plane    200 - 2500 ms   50 - 100 ms
User Plane       50 ms           5 - 10 ms

Page 6: Uber mobility - High Performance Networking

Last-Mile Latency (cont.)
Core Network Latency

LTE      40 - 50 ms
HSPA+    100 - 200 ms
HSPA     150 - 400 ms
EDGE     600 - 750 ms
GPRS     600 - 750 ms

Data from AT&T for deployed 2G - 4G networks

Page 7: Uber mobility - High Performance Networking

Handovers
Handovers are seamless, or not?

Handovers between cell towers

Handovers between different networks

On the AT&T network, it takes 6.5 s to switch from LTE to HSPA+.

Page 8: Uber mobility - High Performance Networking

Dead Zones
Where’s your coverage?

Loss of connectivity is not the exception but the rule.

More chances for the network to become unavailable or for transient failures to happen.

Page 9: Uber mobility - High Performance Networking

Real-time Interactions
What makes Uber run?

There are a lot of real-time interactions between a rider and a driver.

Most of these interactions have to be real-time to matter.

Page 10: Uber mobility - High Performance Networking
Page 11: Uber mobility - High Performance Networking

Celestial
Global network heatmap

Location

Time

Carrier

Device

Signal Strength

Latency
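As a hedged illustration of the dimensions listed above, a single Celestial sample might look roughly like the following Java sketch; the class and field names are hypothetical, not Uber's actual schema.

// Hypothetical sketch of one Celestial data point; names are illustrative only.
public final class NetworkSample {
    public final double latitude;           // Location
    public final double longitude;
    public final long timestampMillis;      // Time
    public final String carrier;            // Carrier
    public final String deviceModel;        // Device
    public final int signalStrengthDbm;     // Signal Strength
    public final long latencyMillis;        // Latency observed for a request

    public NetworkSample(double latitude, double longitude, long timestampMillis,
                         String carrier, String deviceModel,
                         int signalStrengthDbm, long latencyMillis) {
        this.latitude = latitude;
        this.longitude = longitude;
        this.timestampMillis = timestampMillis;
        this.carrier = carrier;
        this.deviceModel = deviceModel;
        this.signalStrengthDbm = signalStrengthDbm;
        this.latencyMillis = latencyMillis;
    }
}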

Page 12: Uber mobility - High Performance Networking
Page 13: Uber mobility - High Performance Networking
Page 14: Uber mobility - High Performance Networking

Dynamic Network Client
Adapt to any network conditions

Rule-based system

● City, Carrier, Device
● Fine location, Time

Configure different parameters

● Timeout
● Retry
● Protocol
● Number of connections
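A minimal Java sketch of the rule-based idea above, assuming hypothetical rule and parameter types (illustrative only, not Uber's actual client): the first rule whose context matches supplies the timeout, retry count, protocol, and connection-pool size.

// Hypothetical sketch: pick request parameters from the first matching rule.
import java.util.List;

final class NetworkContext {
    final String city;
    final String carrier;
    final String device;
    NetworkContext(String city, String carrier, String device) {
        this.city = city; this.carrier = carrier; this.device = device;
    }
}

final class NetworkParams {
    final int timeoutMillis;
    final int maxRetries;
    final String protocol;       // e.g. "http/1.1" or "h2"
    final int maxConnections;
    NetworkParams(int timeoutMillis, int maxRetries, String protocol, int maxConnections) {
        this.timeoutMillis = timeoutMillis; this.maxRetries = maxRetries;
        this.protocol = protocol; this.maxConnections = maxConnections;
    }
}

interface Rule {
    boolean matches(NetworkContext ctx);
    NetworkParams params();
}

final class DynamicNetworkClient {
    private final List<Rule> rules;
    private final NetworkParams defaults;
    DynamicNetworkClient(List<Rule> rules, NetworkParams defaults) {
        this.rules = rules; this.defaults = defaults;
    }
    NetworkParams paramsFor(NetworkContext ctx) {
        for (Rule rule : rules) {
            if (rule.matches(ctx)) return rule.params();
        }
        return defaults;
    }
}

A rule might, for example, match a city and carrier known for congested 2G networks and return longer timeouts with fewer parallel connections.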

Page 15: Uber mobility - High Performance Networking

uTimeout
Context is king

Suggest timeout based on context: location, carrier, time, etc.

Examples

● Dispatch Timeout
● Push TTL

Page 16: Uber mobility - High Performance Networking

Suggested Pickup Points
No more dead zones

Guiding riders and drivers to avoid dead zones.

Integrated with suggested pickup points to create a smoother overall user experience.

Page 17: Uber mobility - High Performance Networking

Prediction and Planning
Future-time is the new real-time

Advanced route planning

● Connectivity
● Handovers
● Dead zones

Page 18: Uber mobility - High Performance Networking

Thank you

Proprietary and confidential © 2016 Uber Technologies, Inc. All rights reserved. No part of this document may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval systems, without permission in writing from Uber. This document is intended only for the use of the individual or entity to whom it is addressed and contains information that is privileged, confidential or otherwise exempt from disclosure under applicable law. All recipients of this document are notified that the information contained herein includes proprietary and confidential information of Uber, and recipient may not make use of, disseminate, or in any way disclose this document or any of the enclosed information to any person other than employees of addressee to the extent necessary for consultations with authorized personnel of Uber.

Ganesh Srinivasan & Minh Pham
Mobile Platform, Uber

Page 19: Uber mobility - High Performance Networking

(Later day) Evolution of High Performance Networking in Chromium:
Speculation + SPDY → QUIC

Jim Roskind, jar @ chromium.org
Opinions expressed are mine.
Presented to Amazon on 5/12/2016

Page 20: Uber mobility - High Performance Networking

Use of High Performance Client-side Instrumentation in Chromium
(without explaining how Histograms work in Chrome)
Opinions expressed are still mine

Page 21: Uber mobility - High Performance Networking

Who is Jim Roskind

● 7+ years of Chromium development work at Google
  ○ Making Chromium faster… often in/around networking
  ○ Driving and/or implementing instrumentation design/development

● Many years at Netscape, working in/around Navigator
  ○ e.g., Java Security Architect, later VP/Chief Scientist
  ○ Helped to “free the source” of Mozilla

● InfoSeek co-founder
  ○ Implemented Python’s Profiler (used for 20 years!!!)

● Sleight of hand card magician

Page 22: Uber mobility - High Performance Networking

Overview

1. Example of Client Side Instrumentation: Histograms
2. Review of SPDY pros/cons and QUIC
3. Instrumentation of Experiments leading to QUIC Protocol Design

a. Include forward-looking QUIC elements (not yet in QUIC!)

Page 23: Uber mobility - High Performance Networking

Example: How long does TCP Connecting take?

● Monitor duration from connection request until availability for data transfer
  ○ To see actual instrumentation code, search for TCP_CONNECTION_LATENCY on cs.chromium.org to find src/net/socket/transport_client_socket_pool.cc

● In Chromium, for your browsing results, visit:
  ○ about:histograms/Net.TCP_Connection_Latency
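The instrumentation referenced above is Chromium C++; purely as an illustration of the same pattern, here is a hedged Java sketch that times a TCP connect and records the duration into exponentially sized buckets (the bucket scheme and names are assumptions, not Chrome's histogram code).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLongArray;

final class ConnectLatencyHistogram {
    // Exponential buckets: [0,1ms), [1,2ms), [2,4ms), ... similar in spirit to timing histograms.
    private static final int BUCKETS = 32;
    private final AtomicLongArray counts = new AtomicLongArray(BUCKETS);

    void record(long millis) {
        int bucket = (millis <= 0) ? 0
            : Math.min(BUCKETS - 1, 64 - Long.numberOfLeadingZeros(millis));
        counts.incrementAndGet(bucket);
    }

    long timedConnect(String host, int port) throws IOException {
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            // Duration from connection request until the socket is usable for data transfer.
            socket.connect(new InetSocketAddress(host, port), 10_000);
        }
        long millis = (System.nanoTime() - start) / 1_000_000;
        record(millis);
        return millis;
    }
}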

Page 24: Uber mobility - High Performance Networking

Histogram: Net.TCP_Connection_Latency recorded 3481 samples, average = 301.3 ms
[about:histograms ASCII bucket chart, buckets spanning roughly 0 ms to 39 s; most samples fall between 12 ms and 130 ms, with a small secondary cluster (~1.2%) near 3 s]

TCP over Comcast from my home

Mode: 14 ms
Median: 37 ms
Mean: 301 ms
97% under 271 ms

1.2% around 3 seconds!

...perhaps because Windows retransmits SYN at 3 seconds!

Page 25: Uber mobility - High Performance Networking

Sample of Global TCP Connection Latency on Windows

● Over 9 billion samples in graph
● Includes 20% under 15 ms
  ○ Probably preconnections
● Mode around 70 ms
● Median around 60 ms
  ○ Excluding preconnects, median around 80 ms
● 90% under 300 ms
● 1% around 3 seconds!?!

Note: change from 11 to 12 ms is a graphical artifact

Page 26: Uber mobility - High Performance Networking

Network Stack Evolution
Sample Features Driven By Measurements

● Static page analysis, and DNS Pre-resolution
● Speculative race of second TCP connection
  ○ Most critical on Windows machines

● SDCH (Shared Dictionary Compression over HTTP)
  ○ Historically used and evaluated for Google search

● Simplistic Personalized Machine Learning: Sub-resource Speculation
  ○ Visit about:DNS to see what *your* Chromium has learned about *your* sites!
  ○ DNS pre-resolution of speculated sub-resources
  ○ TCP pre-connection of speculated sub-resources

● MD5 Retirement
  ○ ...only after use became globally infrequent
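As a rough sketch of the sub-resource speculation bullets above (in Java rather than Chromium's C++, with illustrative names): remember which hosts a page needed on earlier visits, and pre-resolve their DNS entries in the background when that page is requested again.

import java.net.InetAddress;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class SubresourceSpeculator {
    // page host -> sub-resource hosts learned from earlier visits
    private final Map<String, Set<String>> learned = new ConcurrentHashMap<>();
    private final ExecutorService resolver = Executors.newFixedThreadPool(4);

    // Called while parsing a page: remember that pageHost referenced subHost.
    void observe(String pageHost, String subHost) {
        learned.computeIfAbsent(pageHost, k -> new CopyOnWriteArraySet<>()).add(subHost);
    }

    // Called when navigation to pageHost starts: pre-resolve DNS for speculated hosts.
    void speculate(String pageHost) {
        for (String subHost : learned.getOrDefault(pageHost, Set.of())) {
            resolver.submit(() -> {
                try {
                    InetAddress.getAllByName(subHost);  // warms the OS/JVM DNS cache
                } catch (Exception ignored) {
                    // Speculation failures are harmless.
                }
            });
        }
    }
}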

Page 27: Uber mobility - High Performance Networking

SPDY (HTTP/2): Benefits

● Multiplex a multitude of HTTP requests
  ○ Removed HTTP/1 restriction(?) of 6 pending requests

● Multiplexed (prioritized) responses
  ○ Send responses asap (rather than in HTTP pipelining's required order)
  ○ Server push can send results before being requested!

● Shared congestion control pipeline
  ○ Reduced variance (separate HTTP responses don’t fight)

● Always encrypted (via TLS)

Page 28: Uber mobility - High Performance Networking

SPDY (HTTP/2): Issues

● TCP is slow to connect (SYN… SYN-ACK round trip)
  ○ TCP Fastopen worked to help

● TLS is slow to connect (CHLO SHLO handshakes)
  ○ Snap-start worked to help
  ○ Large certificate chains result in losses and delays

● TCP and TLS have head-of-line (HOL) blocking
  ○ OS requires in-order TCP delivery
  ○ TLS uses still larger encrypted blocks (often with block chaining)

● Congestion Avoidance Algorithms evolve slowly
  ○ 5-15 year trial/deployment cycle

Page 29: Uber mobility - High Performance Networking

QUIC: Improving upon SPDY

● Focus on Latency: 0-RTT Connection with Encryption
  ○ Speculative algorithms collapse together all HELLO messages
  ○ Compressed certificate chains reduce impact of packet loss during connections

● Remove HOL blocking
  ○ Each IP packet can be separately deciphered, and data can be delivered

● Congestion Control Algorithms free to Rapidly Evolve
  ○ Move from OS to application space
  ○ Precise packet loss info via rebundling (improvement over TCP retransmission)
  ○ Algorithms can cater to application, mobile environment, etc. etc.

● More details: QUIC: Design Document and Specification Rationale

Page 30: Uber mobility - High Performance Networking

Reachability Question: Can UDP be used by Chrome users?

● Can UDP packets consistently reach Google??
  ○ Gamers use UDP… but are they “the lucky few” with fancy connections?
  ○ How often is it blocked?

● What size packets should be used?
  ○ Don’t trust “common wisdom”

Page 31: Uber mobility - High Performance Networking

Recording results of experiments:
Research for QUIC development

● PMTU (Path Max Transmission Unit) won’t work for UDP
  ○ UDP streams are sessionless, and there is no API to “get” an ICMP response!?
  ○ ...so we needed a good initial estimate of packet sizes for QUIC

● Stand up UDP echo servers around the world
  ○ Test a variety of UDP packet sizes (learn about the “real” world!)
  ○ Use two histograms, recording data for random packet sizes:
    ■ For each size, number of UDP packets sent by client
    ■ For each size, number of successful ACK responses

● About 5-7% of Chrome users couldn’t reach Google via UDP
  ○ QUIC has to fall back gracefully to TCP (and often SPDY)
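A hedged Java sketch of the two-histogram experiment described above (the real experiment ran inside Chrome against Google echo servers; the sizes, timeout, and names here are placeholders): one counter array records probes sent per packet size, the other records echoes received, and their ratio estimates UDP reachability at that size.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLongArray;

final class UdpReachabilityProbe {
    private static final int[] SIZES = {100, 500, 1200, 1350, 1500};  // bytes; illustrative
    private final AtomicLongArray sent = new AtomicLongArray(SIZES.length);   // histogram 1
    private final AtomicLongArray acked = new AtomicLongArray(SIZES.length);  // histogram 2
    private final Random random = new Random();

    void probeOnce(String echoHost, int echoPort) throws Exception {
        int idx = random.nextInt(SIZES.length);
        byte[] payload = new byte[SIZES[idx]];
        sent.incrementAndGet(idx);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2_000);
            InetAddress addr = InetAddress.getByName(echoHost);
            socket.send(new DatagramPacket(payload, payload.length, addr, echoPort));
            byte[] reply = new byte[SIZES[idx]];
            socket.receive(new DatagramPacket(reply, reply.length));  // throws on timeout
            acked.incrementAndGet(idx);
        } catch (SocketTimeoutException lostOrBlocked) {
            // No echo: either this packet size is being dropped or UDP is blocked entirely.
        }
    }

    double reachability(int sizeIndex) {
        long s = sent.get(sizeIndex);
        return s == 0 ? 0.0 : (double) acked.get(sizeIndex) / s;
    }
}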

Page 32: Uber mobility - High Performance Networking

QUIC/UDP Connectivity

User based: One vote per user per size

Usage based: One vote per user per 30 minutes of usage

Page 33: Uber mobility - High Performance Networking

Future QUIC MTU gains

● QUIC uses (static / conservative) 1350 MTU size for (IPv4) UDP packets
  ○ Download payload size currently around 1331 bytes of data (per QUIC packet) max
    ■ 19 bytes QUIC overhead + UDP overhead (28 for IPv4; 48 for IPv6)
    ■ Currently max is around 96.6% efficient for IPv4 (1331 / 1378)

● Instead of relying on PMTU, integrate exploration of MTU into QUIC
  ○ Periodically transmit larger packets, such as padded ACK packets
    ■ Monitor results, without assuming congestive loss

● Efficiency is important to large data transfers (YouTube? Netflix?)
● P2P may allow extreme efficiency, with potential for Jumbo packets

Page 34: Uber mobility - High Performance Networking

How quickly will NAT (Network Address Translation) drop its bindings?

● NAT boxes (e.g., home routers) “understand” TCP, and will warn (reset connection?) when they drop a binding

● NAT boxes don’t “understand” UDP connections
  ○ They can’t notify anything when they drop a NAT binding

● Use an echo server that accepts a delay parameter
  ○ Echo server can “wait” before sending its ACK response
    ■ See if the NATing router still properly routes response (i.e., has intact binding)
  ○ Evaluate “probability” of success for each delay
    ■ Use two histograms, with buckets based on delay
    ■ One counts attempts. One counts successes.

Page 35: Uber mobility - High Performance Networking
Page 36: Uber mobility - High Performance Networking

QUIC can control NAT In The Future

● Port Control Protocol (RFC 6887)
  ○ Not deployed today… but QUIC can evolve to use it as it becomes available

Page 37: Uber mobility - High Performance Networking

Creative use of Histogram: Packet loss statistics

● Make 21 requests to a UDP Echo server
  ○ Request that echo server ACK each numbered packet
  ○ Histogram with 21 buckets records arrival of each possible packet number

● Look at impact of pacing UDP packets
  ○ Either “blast” or send at “reasonable pacing rate”

■ “Reasonable pacing” is based on an initial blast to estimate bandwidth

Page 38: Uber mobility - High Performance Networking

Packet 2, in unpaced initial transfer, is almost twice as likely to be lost as packets 1 or 3!?!?! The problem “goes away” after initial transfer.

Without pacing, buffer-full(?) losses commonly appear after 12 or 16 packets are sent.

Pacing improves survival rate for later packets

Page 39: Uber mobility - High Performance Networking

Packet loss statistics: How much does packet size matter?

● Make 21 requests to a UDP Echo server
  ○ Request that echo server ACK each numbered packet
  ○ Histogram with 21 buckets to record arrival of each possible packet number

● Look at impact of packet sizes:
  ○ 100 vs 500 vs 1200 bytes

Page 40: Uber mobility - High Performance Networking

Smaller 100 byte packets are lost more often initially, and packet 2 is especially vulnerable!

Loss “cliff” at 16 unpaced-packets is independent of packet sizes!

Page 41: Uber mobility - High Performance Networking

Future QUIC Gains around 0-RTT

● 2nd packet is critical to effective 0-RTT connection
  ○ 2.5%+ “extra” probability of losing packet number 2, above and beyond 1-2%
  ○ Redundantly transmit packet 2 contents proactively!

● 1st packet contains critical CHLO (crypto handshake)
  ○ 1-2% probability of that packet being lost (critical path for packet number 2!!!)

● Proactive redundancy in 0-RTT handshake/request gains 5+% reliability
  ○ Uplink channel is underutilized, so redundancy is “cost free”
  ○ RTO of at least 200ms ⇒ Average savings of at least 10ms (roughly 5% of connections avoiding a 200 ms timeout works out to about 10 ms on average)

● See “Quicker QUIC Connections” for more details

Page 42: Uber mobility - High Performance Networking

Estimate Potential of FEC for UDP packets

● Sent 21 numbered packets to an ACKing echo server
  ○ Create 21 distinct histograms, one histogram for each prefix of the first k packets
    ■ There are (effectively) about 21 distinct histograms! (one per prefix)
  ○ Increment the nth bucket if n out of k packets were ACKed

● Example: When sending the first 17 packets, find the probability of getting 17 vs 16 vs 15 vs … acks, by recording in a single histogram
  ○ If we get 16 or more acks, then a simple XOR FEC would recover (without retransmission)
  ○ If we get 15 or more acks, then 2-packet-correcting FEC would recover.
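To make the "simple XOR FEC" case concrete, here is a small illustrative Java sketch (not QUIC's actual FEC code): the sender adds one parity packet that XORs all data packets of a group, and if exactly one data packet is lost the receiver rebuilds it from the parity packet and the survivors.

final class XorFec {
    // Build a parity packet covering equal-length data packets.
    static byte[] parity(byte[][] packets) {
        byte[] p = new byte[packets[0].length];
        for (byte[] packet : packets) {
            for (int i = 0; i < p.length; i++) {
                p[i] ^= packet[i];
            }
        }
        return p;
    }

    // Recover the single missing packet (null slot) from the parity packet and the survivors.
    static byte[] recover(byte[][] received, byte[] parity) {
        byte[] missing = parity.clone();
        for (byte[] packet : received) {
            if (packet == null) continue;  // the lost one
            for (int i = 0; i < missing.length; i++) {
                missing[i] ^= packet[i];
            }
        }
        return missing;
    }
}

This is why, for a 17-packet group, 16 or more ACKs means no retransmission is needed; recovering two losses would require the stronger 2-packet-correcting code mentioned above.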

Page 43: Uber mobility - High Performance Networking

Pacing significantly helps after about 12 packets are sent. (blue vs green line)

1-FEC reduces retransmits much more than 2-FEC would help

Page 44: Uber mobility - High Performance Networking

FEC Caveats: They are not good for everything!

● NACK-based retransmits are more efficient
  ○ Don’t waste bandwidth on FEC when BDP is much smaller than total payload
  ○ It is better to observe a loss, and *only* then retransmit

● Largest potential gains are for stream creation (client side)
  ○ Client upload bandwidth is usually underutilized
  ○ Payload is tiny (compressed HTTP GET?), and it is all on the critical path for a response

● Smaller (but possible) gain potentials for tail loss probe via FEC packet
  ○ Don’t use if tail latency is not critical, or bandwidth is at a premium

Page 45: Uber mobility - High Performance Networking

Summary: Client-side histograms are very useful!!

● Creative application provides tremendous utility
● Simple developer API provides widespread use
  ○ Developers will actually measure, before and after deploying!!!
  ○ There are 2100 *active* histograms in a recent Chrome release!!!

● Mozilla and Chromium now have supporting code
  ○ Open source is the source ;-)

● Features, such as Networking protocols, can greatly benefit from detailed instrumentation and analysis

Page 46: Uber mobility - High Performance Networking

Acknowledgements: Topics described were massive team efforts

● Thanks to the many members of the Google Chrome team for facilitating this work, and producing a Great Product to build upon!

● Special thanks to the QUIC Team!
● Extra special shout-out for their support on several discussed topics to:
  ○ Mike Belshe, Roberto Peon: SPDY and pre-QUIC discussions
  ○ Jeff Bailey: UDP echo test server rollout
  ○ Raman Tenneti: UDP echo servers; QUIC team member
  ○ Thanks to scores of Googlers for reviews and contributions to QUIC Design/Rationale!

● Thanks to Google, for providing a place to change the Internet world!
  ○ Linus Upson: Thanks for providing Google Management Cover

Page 47: Uber mobility - High Performance Networking

gRPC: Universal RPC
Makarand Dharmapurikar, Eric Anderson

Page 48: Uber mobility - High Performance Networking

History

Page 49: Uber mobility - High Performance Networking

Google has had 4 generations of internal RPC systems, called Stubby

● Used in all production applications and systems
● Over 10^10 RPCs per second, fleet-wide
● Separate IDL; APIs for C++, Java, Python, Go
● Tightly coupled with infrastructure (infeasible to use externally)

Very happy with Stubby

● Services available from any language
● One integration point for load balancing, auth, logging, tracing, accounting, billing, quota

gRPC History

Page 50: Uber mobility - High Performance Networking

Need solution for more connected world

● Cloud needs same high performance
● Use same APIs from Mobile/Browser

gRPC is the next generation of Stubby. Goal: Usable everywhere

● Servers to Mobile to microcontrollers (IoT)
● Awesome networks to horrible networks
● Lots more languages/platforms
● Must support pluggability
● Open Source; developed in the open

gRPC History

Page 51: Uber mobility - High Performance Networking

Overview

Page 52: Uber mobility - High Performance Networking

● Android, iOS; 10+ languages
  ○ Idiomatic, language-specific APIs

● Payload agnostic. We’ve implemented Protobuf
● HTTP/2
  ○ Binary, multiplexing

● QUIC support in process of open-sourcing (via Cronet)
  ○ No head-of-line blocking; 0 RTT

● Layered and pluggable
  ○ Use-specific hooks, e.g., naming, LB
  ○ Metadata, e.g., tracing, auth

● Streaming with flow control. No need for long polling!
● Timeout and cancellation

gRPC Features

Page 53: Uber mobility - High Performance Networking

Key insights. Mobile is not that different

● Google already translating 1:1 REST, with Protobuf, to RPCs
● Very high-performance services care about memory and CPU
● Microcontrollers make mobile look beefy
● High latency cross-continent. Home networks aren’t great. Black holes happen
● Many features convenient everywhere, like tracing and streaming

Universal RPC - Mobile and cloud

● Mobile depends on Cloud
● Developers should expect same great experience
● Some unique needs, but not overly burdensome

○ Power optimization, platform-specific network integration (for resiliency)

gRPC and Mobile

Page 54: Uber mobility - High Performance Networking

Compatibility with ecosystem (current or planned)

● Supports generic HTTP/2 reverse proxies
  ○ Nghttp2, HAProxy, Apache (untested), Nginx (in progress), GCLB (in progress)

● grpc-gateway
  ○ A combined gRPC + REST server endpoint

● Name resolver, client-side load balancer
  ○ etcd (Go only)

● Monitoring/Tracing
  ○ Zipkin, Open Tracing (in progress)

gRPC: Universal RPC

Page 55: Uber mobility - High Performance Networking

Example
Hello, world!

Page 56: Uber mobility - High Performance Networking

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

Example (IDL)

Page 57: Uber mobility - High Performance Networking

// Create shareable virtual connection (may have 0-to-many actual connections; auto-reconnects)
ManagedChannel channel = ManagedChannelBuilder.forAddress(host, port).build();
GreeterBlockingStub blockingStub = GreeterGrpc.newBlockingStub(channel);

HelloRequest request = HelloRequest.newBuilder().setName("world").build();
HelloReply response = blockingStub.sayHello(request);

// To release resources, as necessary
channel.shutdown();

Example (Client)
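As a follow-on to the client example above (not part of the original slide), the timeout and cancellation support mentioned on the features slide looks roughly like this in grpc-java; exact status handling may differ across versions.

import java.util.concurrent.TimeUnit;
import io.grpc.Status;
import io.grpc.StatusRuntimeException;

// Per-call deadline: the RPC fails with DEADLINE_EXCEEDED instead of hanging on a bad network.
GreeterBlockingStub boundedStub =
    GreeterGrpc.newBlockingStub(channel).withDeadlineAfter(500, TimeUnit.MILLISECONDS);
try {
    HelloReply reply = boundedStub.sayHello(request);
} catch (StatusRuntimeException e) {
    if (e.getStatus().getCode() == Status.Code.DEADLINE_EXCEEDED) {
        // Retry, fall back, or surface the timeout to the caller.
    }
}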

Page 58: Uber mobility - High Performance Networking

Server server = ServerBuilder.forPort(port)
    .addService(new GreeterImpl())
    .build()
    .start();
server.awaitTermination();

class GreeterImpl extends GreeterGrpc.AbstractGreeter {
  @Override
  public void sayHello(HelloRequest req, StreamObserver<HelloReply> responseObserver) {
    HelloReply reply = HelloReply.newBuilder().setMessage("Hello, " + req.getName()).build();
    responseObserver.onNext(reply);
    responseObserver.onCompleted();
  }
}

Example (Server)

Page 59: Uber mobility - High Performance Networking

Some of the adopters

Page 60: Uber mobility - High Performance Networking

Site: grpc.io
Mailing List: [email protected]
Twitter Handle: @grpcio

Page 61: Uber mobility - High Performance Networking

Amazing mobile data pipelines
Karthik Ramgopal

Page 62: Uber mobility - High Performance Networking

About us

▪ World’s largest professional social network.

▪ 433M members worldwide.

▪ > 50% of members access LinkedIn on mobile.

▪ Huge growth in India and China.

Page 63: Uber mobility - High Performance Networking

About me

▪ Mobile Infrastructure Engineer

▪ Android platform and Sitespeed lead

Page 64: Uber mobility - High Performance Networking

LinkedIn app portfolio

▪ LinkedIn Flagship
▪ Lookup
▪ Pulse
▪ Job Seeker
▪ Elevate
▪ Groups
▪ Sales Navigator
▪ Recruiter
▪ Student Job Seeker
▪ Lynda.com

Page 65: Uber mobility - High Performance Networking

The leaky pipe

▪ Mobile Networks are flaky

▪ Speeds range from 80Kbps (GPRS/India) to over 10 Mbps (LTE/US)

▪ Last mile latency

▪ Routing/peering issues

▪ Frequent disconnects and degradation are common

Page 66: Uber mobility - High Performance Networking

Diversity in devices

▪ Fragmented Android ecosystem. Older iPhones prevalent in emerging markets.

▪ Lowest-end devices have 256 MB of RAM and single-core CPUs.

Page 67: Uber mobility - High Performance Networking

How do we optimize?

▪ Network connect

▪ Server time

▪ Response download/upload

▪ Parsing and caching

▪ Robust client side infrastructure

▪ Measure, measure and measure

Page 68: Uber mobility - High Performance Networking

Network connect

▪ Sprinkle PoPs and CDNs close to members

▪ Early initialization

▪ Custom DNS cache

▪ SSL session cache

▪ Retries and timeouts tuned by network type
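A hedged, Android-flavored sketch of the last bullet (illustrative only, not LinkedIn's actual stack; it assumes OkHttp and the classic ConnectivityManager APIs): choose connect/read timeouts from the current network subtype, with looser values on GPRS/EDGE.

import java.util.concurrent.TimeUnit;
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.telephony.TelephonyManager;
import okhttp3.OkHttpClient;

final class TunedClientFactory {
    static OkHttpClient create(Context context) {
        ConnectivityManager cm =
            (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();

        // Defaults for Wi-Fi / LTE; loosen for slower radio technologies.
        long connectTimeoutMs = 5_000;
        long readTimeoutMs = 10_000;
        if (info != null && info.getType() == ConnectivityManager.TYPE_MOBILE) {
            switch (info.getSubtype()) {
                case TelephonyManager.NETWORK_TYPE_GPRS:
                case TelephonyManager.NETWORK_TYPE_EDGE:
                    connectTimeoutMs = 20_000;
                    readTimeoutMs = 40_000;
                    break;
                case TelephonyManager.NETWORK_TYPE_UMTS:
                case TelephonyManager.NETWORK_TYPE_HSPA:
                    connectTimeoutMs = 10_000;
                    readTimeoutMs = 20_000;
                    break;
                default:
                    break;
            }
        }
        return new OkHttpClient.Builder()
            .connectTimeout(connectTimeoutMs, TimeUnit.MILLISECONDS)
            .readTimeout(readTimeoutMs, TimeUnit.MILLISECONDS)
            .retryOnConnectionFailure(true)
            .build();
    }
}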

Page 69: Uber mobility - High Performance Networking

Response download/upload

▪ Native multiplexing using SPDY.

▪ Custom dispatcher/response processor

▪ Content resumption

▪ Rest.li multiplexer

▪ Progressive JPEG for images

Page 70: Uber mobility - High Performance Networking

Payload size reduction

▪ Delta sync

▪ Brotli compression

▪ SDCH

Page 71: Uber mobility - High Performance Networking

Parsing

▪ Stream parse and decode

▪ Schema aware JSON parser

▪ Custom image decoder

Page 72: Uber mobility - High Performance Networking

Caching

▪ Traditional request/response caches are passé.

▪ Fission: Decompose and cache

▪ Memory mapped disk cache

▪ No memory cache

Page 73: Uber mobility - High Performance Networking

Thank You!
Questions?