
Page 1:

Measuring Congestion in High-Performance Datacenter Interconnects

Saurabh Jha1, Archit Patke1, Jim Brandt2, Ann Gentile2, Benjamin Lim1, Mike Showerman1,3, Greg Bauer1,3, Larry Kaplan4, Zbigniew T. Kalbarczyk1, William T. Kramer1,3, and Ravishankar K. Iyer1,3

1University of Illinois at Urbana-Champaign

2Sandia National Laboratories

3National Center for Supercomputing Applications

4Cray Inc.

Page 2:


High-Performance Computing (HPC)

HPC solves critical science, finance, AI, and other problems.

[Images: WRF, the largest weather-forecast simulation; hurricane detector using AI (courtesy: Nvidia).]

Page 3:


High-Performance Computing (HPC)

HPC solves critical science, finance, AI, and other problems.

[Images: WRF, the largest weather-forecast simulation; hurricane detector using AI (courtesy: Nvidia).]

HPC on Cloud; HPC in Academic and National Labs

[Images: NCSA (UIUC); Oak Ridge National Lab.]

Page 4:


High-Performance Computing (HPC)

High-Speed Networks (HSN)

• Low per-hop latency [1][2]
• Low tail-latency variation
• High bisection bandwidth

[1] https://www.nextplatform.com/2018/03/27/in-modern-datacenters-the-latency-tail-wags-the-network-dog/
[2] https://blog.mellanox.com/2017/05/microsoft-enhanced-azure-cloud-efficiency/

Page 5:

Networking and Performance Variation


Despite their low latency, high-speed networks (HSNs) are susceptible to high congestion.

Such congestion can cause up to 2-4× application performance variation in production settings.

[Figure: runtime (min) vs. run number. For a 1000-node production molecular dynamics code, up to 1.89× slowdown compared to the median runtime of 282 minutes; for a 256-node benchmark app (AMR), up to 4× slowdown compared to the median loop-iteration time of 2.5 s.]

Page 6:

Networking and Performance Variation


Despite their low latency, high-speed networks (HSNs) are susceptible to high congestion.

Such congestion can lead to up to 2-4× application performance variation in production settings.

Questions:
• How often do systems/applications experience congestion? [Characterization]
• What are the culprits behind congestion? [Diagnostics]
• How do we avoid and mitigate the effects of congestion? [Network and System Design]

Page 7:

Highlights

• Created a data-mining and ML-driven methodology and associated framework for
  • Characterizing network design and congestion problems using empirical data
  • Identifying factors leading to congestion on a live system
  • Checking whether an application slowdown was indeed due to congestion

• Empirical evaluation of a real-world large-scale supercomputer: Blue Waters at NCSA
  • Largest 3D torus network in the world
  • 5 months of operational data
  • 815,006 unique application runs
  • 70 PB of data injected into the network

• Largest dataset on congestion (first on HPC networks)
  • Dataset (51 downloads and counting!) and code released


Page 8:

Key Findings

• HSN congestion is the biggest contributor to application performance variation
  • Continuous presence of high-congestion regions
  • Long-lived congestion (may persist for >23 hours)

• Default congestion mitigation mechanisms have limited efficacy
  • Only 8% (261 of 3,390 cases) of the high-congestion cases found using our framework were detected and acted on by the default congestion mitigation algorithm
  • In ~30% of cases the default congestion mitigation algorithm was unable to alleviate congestion

• Congestion patterns and their tracking enable identification of the culprits behind congestion
  • Critical to system and application performance improvements
  • E.g., intra-app congestion can be fixed by changing allocation and mapping strategies

[Figure: example of a congestion region.]

Page 9:

Congestion in Credit-Based Flow-Control Networks

• Focus on evaluation of the credit-based flow-control transmission protocol
• A flit is the smallest unit of data that can be transferred
• Flits are not dropped during congestion
• Backpressure (credits) provides congestion control

[Diagram: Switch 1 → link → Switch 2. While available credits > 0 (counting down 3, 2, 1, 0), a flit can be sent; when credits = 0, the flit cannot be sent and must wait.]
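To make the credit mechanism concrete, here is a minimal Python sketch of a single credit-based link. The CreditLink class and its drain schedule are illustrative assumptions for this slide's diagram, not Gemini's actual implementation:

```python
# Minimal sketch of a credit-based link (assumed model, not Cray's code).
from collections import deque

class CreditLink:
    """A link whose sender may transmit one flit per cycle while it holds
    credits; the receiver returns a credit whenever it drains a flit."""
    def __init__(self, credits):
        self.credits = credits   # free buffer slots at the receiver
        self.rx_buffer = deque()

    def try_send(self, flit):
        if self.credits == 0:    # backpressure: the flit must wait
            return False
        self.credits -= 1
        self.rx_buffer.append(flit)
        return True

    def drain_one(self):
        if self.rx_buffer:
            self.rx_buffer.popleft()
            self.credits += 1    # credit flows back to the sender

# A receiver that drains only every 3rd cycle stalls the sender once its
# 3 credits are exhausted -- flits are never dropped, only delayed.
link = CreditLink(credits=3)
stalled = 0
for cycle in range(12):
    if not link.try_send(f"flit{cycle}"):
        stalled += 1             # these cycles feed the stall counter
    if cycle % 3 == 2:
        link.drain_one()
print(f"sender stalled {stalled} of 12 cycles")
```

Those stalled cycles are exactly what the next slide's percent-time-stalled metric counts.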

Page 10:

Measuring Congestion

[Diagram: Switch 1 → link → Switch 2, with a 12-cycle timeline marking when a flit is waiting (no credit available, allocated buffer full) and when the link is transmitting.]

Congestion is measured using percent time stalled ($P_{T_s}$):

$P_{T_s}^i = 100 \times \frac{T_s^i}{T_i} = 100 \times \frac{5}{12} = 41.67\%$

$T_i$: number of network cycles in the $i$-th measurement interval (a fixed value)
$T_s^i$: total number of cycles the link was stalled during $T_i$ (i.e., a flit was ready to be sent but no credits were available)
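As a quick illustration, a hedged Python sketch of this formula; the function name and arguments are assumptions, while the real stall and cycle counts come from per-link performance counters:

```python
# Sketch of the percent-time-stalled metric defined above:
# P_Ts^i = 100 * T_s^i / T_i  (assumed helper, not part of the released code).
def percent_time_stalled(stall_cycles: int, interval_cycles: int) -> float:
    """stall_cycles: cycles a flit was ready but no credits were available;
    interval_cycles: fixed number of network cycles in the interval."""
    if interval_cycles <= 0:
        raise ValueError("interval must contain at least one cycle")
    return 100.0 * stall_cycles / interval_cycles

print(f"{percent_time_stalled(5, 12):.2f}%")  # 41.67% -> the worked example above
```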

Page 11:

Congestion in Credit-Based Flow-Control Networks

[Diagram: Switch 1 → link 1 → Switch 2, with Switch 3 → link 2 and Switch 4 → link 3 feeding in. Available credits: 0, so flits cannot be sent and back up across the upstream links.]

Insight: Congestion spreads locally (i.e., fans out from an origin point to other senders).

Page 12:

Congestion in Credit-Based Flow-Control Networks

[Diagram: Switch 1 → link 1 → Switch 2, with Switch 3 → link 2 and Switch 4 → link 3 feeding in. Available credits: 0, so flits cannot be sent and back up across the upstream links.]

Insight: Congestion spreads locally (i.e., fans out from an origin point to other senders).

[Figure: congestion visualization with a PTS (%) color scale.]

Page 13:

New Unit for Measuring Congestion

Measure congestion in terms of regions, their size, and their severity.

Unsupervised clustering groups neighboring links into a region when:
• the distance is small: $d_\delta(x, y) \le \delta$
• the stall difference is small: $d_\lambda(x_s - y_s) \le \theta_p$

Severity classes:
• Neg: $0\% < P_{T_s} \le 5\%$
• Low: $5\% < P_{T_s} \le 15\%$
• Med: $15\% < P_{T_s} \le 25\%$
• High: $25\% < P_{T_s}$

[Figure: raw congestion visualization with a PTS (%) color scale.]
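A minimal sketch of the region-forming idea, assuming links laid out on 3D coordinates with per-link PTS values. It illustrates the two clustering criteria above with a simple union-find, not the paper's actual segmentation algorithm:

```python
# Sketch: merge links that are close (d_delta) and have similar stall
# values (d_lambda) into congestion regions. Data layout is assumed.
from itertools import combinations

def congestion_regions(links, delta=1, theta=5.0):
    """links: dict of link id -> ((x, y, z) coordinate, PTS %)."""
    parent = {lid: lid for lid in links}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for a, b in combinations(links, 2):
        (pos_a, pts_a), (pos_b, pts_b) = links[a], links[b]
        close = sum(abs(u - v) for u, v in zip(pos_a, pos_b)) <= delta  # d_delta
        similar = abs(pts_a - pts_b) <= theta                           # d_lambda
        if close and similar:
            parent[find(a)] = find(b)  # merge the two links' regions

    regions = {}
    for lid in links:
        regions.setdefault(find(lid), set()).add(lid)
    return list(regions.values())

def severity(pts):
    """Severity classes from this slide."""
    if pts <= 5:  return "Neg"
    if pts <= 15: return "Low"
    if pts <= 25: return "Med"
    return "High"

# Two hot neighbouring links merge into one region; the quiet link stays apart.
demo = {"l0": ((0, 0, 0), 30.0), "l1": ((0, 0, 1), 28.0), "l2": ((5, 5, 5), 1.0)}
for region in congestion_regions(demo):
    avg = sum(demo[l][1] for l in region) / len(region)
    print(sorted(region), f"avg PTS = {avg:.1f}% ->", severity(avg))
```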

Page 14:

Congestion Regions as a Proxy for Performance Evaluation

Congestion-Informed Segmentation algorithm (details in paper)

Congestion Regions (CRs) capture the relation between congestion severity and application slowdown, and can therefore be used for live forensics and debugging!

[Figure: execution time (mins) of a 1000-node production molecular dynamics (NAMD) code across runs, colored by the max of the average PTS across all regions overlapping the application topology (Neg/Low/Med/High).]

Severity classes:
• Low: $5\% < P_{T_s} \le 15\%$
• Med: $15\% < P_{T_s} \le 25\%$
• High: $25\% < P_{T_s}$
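A small sketch of how that per-application proxy could be computed, assuming congestion regions and per-link PTS values are already available (the data layout and function name are illustrative):

```python
# Sketch of the figure's x-axis metric: max of the average PTS over all
# congestion regions that overlap the application's links.
def app_congestion_proxy(app_links, regions, pts):
    """app_links: set of link ids traversed by the app's allocation;
    regions: list of sets of link ids; pts: dict of link id -> PTS (%)."""
    overlapping = [r for r in regions if r & app_links]
    if not overlapping:
        return 0.0  # no congestion region touches the application
    return max(sum(pts[l] for l in r) / len(r) for r in overlapping)

pts = {"l0": 30.0, "l1": 28.0, "l2": 1.0}
regions = [{"l0", "l1"}, {"l2"}]
print(app_congestion_proxy({"l1", "l2"}, regions, pts))  # 29.0 -> a High run
```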

Page 15:

System, Monitors, and Datasets

Blue Waters Networks

[Diagram: login nodes and import/export nodes attach to the 3-D torus Cray Gemini network (Cray Gemini switches); an InfiniBand (IB) network connects the Metadata Servers (MDS 1 and MDS 2, a failover pair) to the Metadata Targets (MDT) [1–10], and the Object Storage Servers (OSS 1 and OSS 2, a failover pair, ×N) to the Object Storage Targets (OST) [1–N]. Courtesy: Cray Inc. (HP).]

• Topology: 3D torus (24×24×24)
• Compute nodes: 28K
• Avg. bisection bandwidth: 17,550 GB/s
• Per-hop latency: 105 ns [1]

[1] https://wiki.alcf.anl.gov/parts/images/2/2c/Gemini-whitepaper.pdf

Page 16:

System, Monitors, and Datasets

Blue Waters Networks

Monitoring logs and datasets:
• Network failures (from Cray network monitors)
• Performance counters (from the Lightweight Distributed Metric Service, LDMS [2])
• Workload (from the scheduler)

Dataset sizes: 15 TB, 100 GB, and 8 GB across these sources for the 5-month characterization; ~40 MB and 55 MB per 60-second live-analytics window.

[Diagram: same Blue Waters architecture as the previous slide; login and import/export nodes on the 3-D torus Cray Gemini network, with an InfiniBand (IB) network connecting the Metadata Servers (MDS 1 and MDS 2, a failover pair) to the Metadata Targets (MDT) [1–10], and the Object Storage Servers (OSS 1 and OSS 2, a failover pair, ×N) to the Object Storage Targets (OST) [1–N]. Courtesy: Cray Inc. (HP).]

• Topology: 3D torus (24×24×24)
• Compute nodes: 28K
• Avg. bisection bandwidth: 17,550 GB/s
• Per-hop latency: 105 ns [1]

[1] https://wiki.alcf.anl.gov/parts/images/2/2c/Gemini-whitepaper.pdf
[2] A. Agelastos et al. Lightweight Distributed Metric Service: A Scalable Infrastructure for Continuous Monitoring of Large-scale Computing Systems and Applications. In SC14: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 154–165, 2014.

Page 17:

1. Congestion is the biggest contributor to application performance variation

Long-lived congestion:
• A congestion region can persist for up to ~24 hours (median: 9.7 hours)
• Congestion-region count decreases with increasing duration

[Figure: histogram of #CRs vs. CR duration (mins), broken out by severity (Low/Med/High).]

Severity classes: Low: $5\% < P_{T_s} \le 15\%$; Med: $15\% < P_{T_s} \le 25\%$; High: $25\% < P_{T_s}$

Page 18:

2. Limited efficacy of default congestion detection and mitigation mechanisms

[Figure: network congestion heatmaps before and after congestion mitigation, with mitigation events a median of 7 hours apart.]

Default system congestion detection and mitigation:
• Congestion mitigation triggered: 261 times
• Median time between events: 7 hours
• Failed to alleviate congestion in 29.8% of cases

The default mitigation throttles all NICs such that the aggregate traffic-injection bandwidth across all nodes is kept below the ejection bandwidth of a single node.
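A back-of-the-envelope sketch (not from the talk) of why this throttle is so blunt on a system of this size:

```python
# If the aggregate injection rate of all N nodes must stay below a single
# node's ejection bandwidth B, each NIC is capped at roughly B / N,
# regardless of where the congestion actually is.
nodes = 28_000  # ~28K compute nodes (slide 15)
fraction = 1 / nodes
print(f"each NIC keeps < {fraction:.2e} (~{100 * fraction:.4f}%) of B")
```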

Page 19:

2. Limited efficacy of default congestion detection and mitigation algorithms

Only 8% (261 of 3,390 cases) of the high-congestion cases found by Monet were detected and acted on by the default congestion mitigation algorithm.

[Timeline: default system congestion detection and mitigation: 261 events, median time between events 7 hours; Monet detection: 3,390 events, median time between events 58 minutes.]

• Default congestion mitigation triggered: 261 times
• Median time between events: 7 hours
• Failed to alleviate congestion in 29.8% of the cases

Page 20:

3. Congestion patterns and their tracking enable identification of the culprits behind congestion

Culprits include app traffic-pattern changes, system load changes, and link failures.

• Network design and congestion-aware scheduling
• E.g., topology-aware scheduling [1] improved system throughput by 56% by tuning resource-allocation strategies

[1] J. Enos et al. Topology-aware job scheduling strategies for torus networks. In Proc. Cray User Group, 2014.

Page 21:

3. Congestion patterns and their tracking enable identification of the culprits behind congestion

Culprits include app traffic-pattern changes, system load changes, and link failures.

• Node mapping within the allocation reduces intra-app congestion
• E.g., TopoMapping [2] finds an optimal process-rank mapping for the allocated resources

[2] Galvez et al. Automatic topology mapping of diverse large-scale parallel applications. In Proceedings of the International Conference on Supercomputing, ICS '17, pages 17:1–17:10, New York, NY, USA, 2017. ACM.

Page 22:

Conclusion

• Developed and validated the proposed methodology on production datasets
• Code and dataset online (51 downloads and counting!)
  • https://databank.illinois.edu/datasets/IDB-2921318
  • https://github.com/CSLDepend/monet

Page 23:

Future Work

• Congestion visualization on a production Cray Aries (Dragonfly) network
• Developing workload-aware high-speed networks
  • Inferring and meeting application demands
  • Optimizing congestion-control and routing strategies

Congestion avoidance and mitigation is an ongoing problem!

Meet us at the poster session!
Wednesday, 6:30 PM - 8:00 PM, Cypress Room

Page 24:

Questions?
