Software Libraries and Middleware for Exascale Systems

Designing HPC, Big Data, Deep Learning, and Cloud Middleware for Exascale Systems: Challenges and Opportunities

Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Keynote Talk at SEA Symposium (April ’18)


Page 1:

Designing HPC, Big Data, Deep Learning, and Cloud Middleware for Exascale Systems: Challenges and Opportunities

Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Keynote Talk at SEA Symposium (April ’18)

Page 2:

High-End Computing (HEC): Towards Exascale

Expected to have an ExaFlop system in 2019-2020!

100 PFlops in 2016

1 EFlops in 2019-2020?

Page 3:

Increasing Usage of HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Convergence of HPC, Big Data, and Deep Learning!

Increasing Need to Run these applications on the Cloud!!

Page 4:

Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Physical Compute

Page 5:

Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Page 6:

Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Page 7:

Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Spark Job · Hadoop Job · Deep Learning Job

Page 8:

HPC, Big Data, Deep Learning, and Cloud

• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and containers

Page 9:

Parallel Programming Models Overview

[Figure: three abstract machine models]
• Shared Memory Model – SHMEM, DSM: processes P1, P2, P3 access one shared memory
• Distributed Memory Model – MPI (Message Passing Interface): each process has its own memory
• Partitioned Global Address Space (PGAS) – Global Arrays, UPC, Chapel, X10, CAF, …: a logical shared memory layered over distributed memories

• Programming models provide abstract machine models (a minimal MPI sketch follows below)
• Models can be mapped on different types of systems – e.g., Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
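To make the distributed-memory (MPI) model above concrete, here is a minimal sketch, not taken from the talk, of explicit message passing between two ranks; the value and tag are purely illustrative:

    /* Minimal MPI (distributed-memory model) sketch: rank 0 sends a message to rank 1.
     * Build/run with an MPI wrapper, e.g.: mpicc msg.c -o msg && mpirun -np 2 ./msg */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            data = 42;                                 /* value in rank 0's private memory */
            MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", data);      /* no shared memory: data moved by a message */
        }

        MPI_Finalize();
        return 0;
    }

Each process owns its private memory and data moves only through matched send/receive calls, which is what distinguishes this model from the shared-memory and PGAS models sketched above.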

Page 10:

Partitioned Global Address Space (PGAS) Models

• Key features
  – Simple shared-memory abstractions
  – Lightweight one-sided communication
  – Easier to express irregular communication
• Different approaches to PGAS
  – Languages: Unified Parallel C (UPC), Co-Array Fortran (CAF), X10, Chapel
  – Libraries: OpenSHMEM, UPC++, Global Arrays (a minimal OpenSHMEM sketch follows below)
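As an illustration of the lightweight one-sided communication these libraries expose, the following is a minimal OpenSHMEM sketch (not from the slides; the written value is illustrative) in which PE 0 writes directly into a symmetric variable on PE 1 with no matching receive:

    /* Minimal OpenSHMEM sketch: PE 0 performs a one-sided put into PE 1's memory. */
    #include <shmem.h>
    #include <stdio.h>

    int target;                              /* symmetric (globally addressable) variable */

    int main(void)
    {
        shmem_init();
        int me = shmem_my_pe();

        if (me == 0)
            shmem_int_p(&target, 99, 1);     /* one-sided: PE 1 does not post a receive */

        shmem_barrier_all();                 /* ensure the put has completed and is visible */

        if (me == 1)
            printf("PE 1 sees target = %d\n", target);

        shmem_finalize();
        return 0;
    }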

Page 11:

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics
• Benefits:
  – Best of the distributed computing model
  – Best of the shared memory computing model

[Figure: an HPC application whose kernels (Kernel 1 … Kernel N) are implemented individually in MPI or PGAS; a minimal hybrid sketch follows below]
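A minimal sketch of the kernel-by-kernel split shown above, assuming a unified MPI + OpenSHMEM runtime such as MVAPICH2-X; the kernels and values here are illustrative only:

    /* Hybrid MPI + OpenSHMEM sketch: one kernel uses an MPI collective,
     * another uses OpenSHMEM one-sided communication. */
    #include <mpi.h>
    #include <shmem.h>

    long psum;                               /* symmetric variable used by the PGAS kernel */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();                        /* with MVAPICH2-X both models share one runtime */

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Kernel 1 (MPI): regular communication expressed as a collective. */
        long local = rank, total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        /* Kernel 2 (PGAS): irregular, one-sided update on a neighbor PE. */
        int next = (shmem_my_pe() + 1) % shmem_n_pes();
        shmem_long_p(&psum, total, next);
        shmem_barrier_all();

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }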

Page 12:

Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

[Figure: middleware co-design stack]
• Application kernels/applications
• Programming models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication library or runtime for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path), multi-/many-core architectures, accelerators (GPU and FPGA)
• Middleware co-design: opportunities and challenges across the various layers for performance, scalability, and resilience

Page 13:

Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale

• Scalability for million to billion processors
  – Support for highly efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128–1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for GPGPUs and accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness

Page 14:

Additional Challenges for Designing Exascale Software Libraries

• Extreme low memory footprint
  – Memory per core continues to decrease
• D-L-A Framework
  – Discover
    • Overall network topology (fat-tree, 3D, …), network topology of the processes for a given job
    • Node architecture, health of network and node
  – Learn
    • Impact on performance and scalability
    • Potential for failure
  – Adapt
    • Internal protocols and algorithms
    • Process mapping
    • Fault-tolerance solutions
  – Low-overhead techniques while delivering performance, scalability, and fault-tolerance

Page 15:

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM) since 2015
  – Used by more than 2,875 organizations in 86 countries
  – More than 460,000 (> 0.46 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov ’17 ranking)
    • 1st, 10,649,600-core (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
    • 9th, 556,104-core (Oakforest-PACS) in Japan
    • 12th, 368,928-core (Stampede2) at TACC
    • 17th, 241,108-core (Pleiades) at NASA
    • 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
  – Available with the software stacks of many vendors and Linux distros (RedHat and SuSE)
  – http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade

Page 16:

MVAPICH2 Release Timeline and Downloads

[Figure: cumulative number of downloads from Sep-04 through Jan-18 (approaching 500,000), annotated with release milestones: MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3a, MV2-X 2.3b, MV2-Virt 2.2, MV2 2.3rc1, OSU INAM 0.9.3]

Page 17:

Architecture of MVAPICH2 Software Family

[Figure: software stack]
• High-performance parallel programming models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
• Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail
• Transport mechanisms: shared memory, CMA, IVSHMEM; modern features: MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)

Page 18:

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
  – Support for highly efficient inter-node and intra-node communication
  – Scalable start-up
  – Optimized collectives using SHArP and multi-leaders
  – Optimized CMA-based collectives
  – Upcoming optimized XPMEM-based collectives
  – Performance engineering with MPI-T support
  – Integrated network analysis and monitoring
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices

Page 19:

One-way Latency: MPI over IB with MVAPICH2

[Figure: small-message and large-message one-way latency (us) vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path; annotated small-message latencies of 0.98, 1.04, 1.11, 1.15, and 1.19 us across the five fabrics]

Platforms: TrueScale-QDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch; ConnectX-3-FDR – 2.8 GHz deca-core (IvyBridge) Intel, PCI Gen3, IB switch; ConnectIB-Dual FDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch; ConnectX-5-EDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch; Omni-Path – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, Omni-Path switch

Page 20:

Bandwidth: MPI over IB with MVAPICH2

[Figure: unidirectional and bidirectional bandwidth (MBytes/sec) vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path; peak unidirectional bandwidths of 3,373; 6,356; 12,358; 12,366; and 12,590 MBytes/sec, and peak bidirectional bandwidths of 6,228; 12,161; 21,983; 22,564; and 24,136 MBytes/sec across the five fabrics]

Platforms: same five host/adapter configurations as on Page 19 (3.1 GHz deca-core Haswell or 2.8 GHz deca-core IvyBridge Intel hosts, PCI Gen3, IB or Omni-Path switch)

Page 21:

Startup Performance on KNL + Omni-Path

[Figure: MPI_Init and Hello World time (seconds) vs. number of processes (64 to 64K) on Oakforest-PACS, for MVAPICH2-2.3a]

• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8 s and 21 s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node

New designs available since MVAPICH2-2.3a and as a patch for SLURM 15, 16, and 17
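A minimal sketch of the kind of startup measurement quoted above (the timing method shown here is an assumption; the exact benchmark behind these numbers may differ):

    /* Startup benchmark sketch: time MPI_Init, then print "Hello World". */
    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        MPI_Init(&argc, &argv);              /* dominated by job start-up / connection setup */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double init_sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        if (rank == 0)
            printf("MPI_Init took %.2f s\n", init_sec);

        printf("Hello World from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }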

Page 22:

Advanced Allreduce Collective Designs Using SHArP

[Figure: mesh refinement time of MiniAMR and average DDOT Allreduce time of HPCG, latency (seconds) vs. (number of nodes, PPN) for (4,28), (8,28), and (16,28), comparing MVAPICH2 with MVAPICH2-SHArP; SHArP yields up to 12–13% improvement]

SHArP support is available since MVAPICH2 2.3a

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.
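For context, the HPCG "DDOT" number above measures the MPI_Allreduce that completes a distributed dot product; a minimal sketch of that pattern (illustrative only, not the HPCG source) is:

    /* DDOT-style pattern: a local dot product followed by a global MPI_Allreduce
     * (the collective that a SHArP-capable network can offload). */
    #include <mpi.h>

    double ddot(const double *x, const double *y, int n, MPI_Comm comm)
    {
        double local = 0.0, global = 0.0;
        for (int i = 0; i < n; i++)
            local += x[i] * y[i];            /* node-local partial result */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
        return global;                       /* same value on every rank */
    }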

Page 23:

Performance of MPI_Allreduce on Stampede2 (10,240 Processes)

[Figure: OSU micro-benchmark, 64 PPN; MPI_Allreduce latency (us) vs. message size (4 bytes to 256K bytes), comparing MVAPICH2, MVAPICH2-OPT, and IMPI; MVAPICH2-OPT shows up to 2.4X lower latency]

• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17. Available in MVAPICH2-X 2.3b

Page 24:

Optimized CMA-based Collectives for Large Messages

[Figure: performance of MPI_Gather on KNL nodes (64 PPN); latency (us) vs. message size (1K to 4M) on 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and Tuned CMA; Tuned CMA is roughly 2.5x, 3.2x, and 4x better across the three scales, and up to ~17x better in the best case]

• Significant improvement over the existing implementation for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, Best Paper Finalist

Available in MVAPICH2-X 2.3b

Page 25:

Shared Address Space (XPMEM)-based Collectives Design

[Figure: OSU_Reduce and OSU_Allreduce latency (us) vs. message size (16K to 4M) on Broadwell, 256 processes, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-Opt; MVAPICH2-Opt is up to 4X better for Reduce and up to 1.8X better for Allreduce at 4M]

• "Shared Address Space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.

Will be available in a future release

Page 26:

Performance Engineering Applications using MVAPICH2 and TAU

● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables
● Get and display MPI performance variables (PVARs) made available by the runtime in TAU
● Control the runtime’s behavior via MPI control variables (CVARs)
● Introduced support for new MPI_T-based CVARs in MVAPICH2
  ○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications

[Figure: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf]

S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, Euro MPI, 2017 [Best Paper Award]
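A minimal sketch of driving one of the CVARs listed above through the standard MPI_T interface (error handling omitted; the written value is illustrative, and whether a given CVAR is writable, and when, depends on the MVAPICH2 build):

    /* MPI_T sketch: look up a control variable by name and set it before MPI_Init. */
    #include <mpi.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int provided, ncvar, val = 64;       /* 64 is an illustrative pool size */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&ncvar);

        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[256];
            int nlen = sizeof(name), dlen = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype; MPI_T_enum enumtype;
            MPI_T_cvar_get_info(i, name, &nlen, &verbosity, &dtype, &enumtype,
                                desc, &dlen, &bind, &scope);
            if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
                MPI_T_cvar_handle handle; int count;
                MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
                MPI_T_cvar_write(handle, &val);   /* control the runtime's behavior */
                MPI_T_cvar_handle_free(&handle);
            }
        }

        MPI_Init(&argc, &argv);
        /* ... application ... */
        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }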

Page 27:

Overview of OSU INAM

• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
  – http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level, and process-level activities for MPI communication
  – Point-to-point, collectives, and RMA
• Ability to filter data based on type of counters using a "drop down" list
• Remotely monitor various metrics of MPI processes at user-specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a "live" or "historical" fashion for the entire network, a job, or a set of nodes
• OSU INAM v0.9.3 released on 03/16/2018
  – Enhance INAMD to query end nodes based on a command-line option
  – Add a web page to display the size of the database in real time
  – Enhance interaction between the web application and the SLURM job launcher for increased portability
  – Improve packaging of the web application and daemon to ease installation

Page 28:

OSU INAM Features

• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network

[Figure: Comet@SDSC – clustered view (1,879 nodes, 212 switches, 4,377 network links); finding routes between nodes]

Page 29:

OSU INAM Features (Cont.)

• Job-level view (visualizing a job across 5 nodes)
  – Show different network metrics (load, error, etc.) for any live job
  – Play back historical data for completed jobs to identify bottlenecks
• Node-level view – details per process or per node
  – CPU utilization for each rank/node
  – Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
  – Network metrics (e.g., XmitDiscard, RcvError) per rank/node
• Estimated link-utilization view
  – Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1
    • Job level and
    • Process level

Page 30:

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices

Page 31:

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current model – separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
  – Possible deadlock if both runtimes are not progressed
  – Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF
  – Available since 2012 (starting with MVAPICH2-X 1.9)
  – http://mvapich.cse.ohio-state.edu

Page 32:

Application-Level Performance with Graph500 and Sort

[Figure: Graph500 execution time (s) vs. number of processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM), with up to 7.6X and 13X improvements; Sort execution time (seconds) vs. input data and number of processes (500GB-512 to 4TB-4K) for MPI and Hybrid, with up to 51% improvement]

• Performance of hybrid (MPI+OpenSHMEM) Graph500 design
  – 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
  – 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
• Performance of hybrid (MPI+OpenSHMEM) Sort application
  – 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013

J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012

J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.

Page 33:

Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation

[Figure: Heat-Image kernel and Heat-2D kernel (Jacobi method) execution time (s) vs. number of processes (16–128), comparing KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell]

• On heat-diffusion-based kernels, AVX-512 vectorization showed better performance
• MCDRAM showed significant benefits on the Heat-Image kernel for all process counts; combined with AVX-512 vectorization, it showed up to 4X improved performance

Page 34:

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
  – CUDA-aware MPI
  – GPUDirect RDMA (GDR) support
  – Support for streaming applications
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices

Page 35:

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

• Standard MPI interfaces used for unified data movement
  – At the sender: MPI_Send(s_devbuf, size, …);
  – At the receiver: MPI_Recv(r_devbuf, size, …);
  – The GPU data movement is handled inside MVAPICH2
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High performance and high productivity
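A minimal sketch of the CUDA-aware usage model shown above, assuming a CUDA-aware MPI build such as MVAPICH2-GDR (buffer size is illustrative; runtime settings such as MV2_USE_CUDA=1 may be required):

    /* CUDA-aware MPI sketch: pass GPU device pointers directly to MPI_Send/MPI_Recv. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int n = 1 << 20;               /* illustrative message size (1M floats) */
        float *devbuf;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaMalloc((void **)&devbuf, n * sizeof(float));

        if (rank == 0)
            MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);          /* s_devbuf */
        else if (rank == 1)
            MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                                    /* r_devbuf */

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }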

Page 36:

CUDA-Aware MPI: MVAPICH2-GDR 1.8–2.3 Releases

• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory

Page 37:

Optimized MVAPICH2-GDR Design

[Figure: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size (1 byte to 8K bytes), comparing MV2-(NO-GDR) with MV2-GDR-2.3a; MV2-GDR achieves 1.88 us small-message latency (11X better) and roughly 9x–10x higher uni-/bi-directional bandwidth]

Platform: MVAPICH2-GDR-2.3a, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA

Page 38:

Application-Level Evaluation (HOOMD-blue)

[Figure: average time steps per second (TPS) vs. number of processes (4–32) for 64K-particle and 256K-particle runs, comparing MV2 with MV2+GDR; MV2+GDR achieves up to 2X higher TPS in both cases]

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Page 39:

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Figure: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4–32 GPUs) and the CSCS GPU cluster (16–96 GPUs), comparing Default, Callback-based, and Event-based designs]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16

Page 40:

Streaming Applications

• Streaming applications on HPC systems
  1. Communication (MPI)
     • Broadcast-type operations
  2. Computation (CUDA)
     • Multiple GPU nodes as workers

[Figure: a data source streams in real time to a sender, which uses data-streaming-like broadcast operations to distribute data to worker nodes (each with a CPU and GPUs), so that HPC resources can be used for real-time analytics]

Page 41:

Hardware Multicast-based Broadcast

[Figure: the source combines IB gather + GDR read, IB hardware multicast through the IB switch, and IB scatter + GDR write at each of the N destinations to broadcast GPU-resident data (header + data)]

• For GPU-resident data, using
  – GPUDirect RDMA (GDR)
  – InfiniBand hardware multicast (IB-MCAST)
• Overhead
  – IB UD limit
  – GDR limit

A. Venkatesh, H. Subramoni, K. Hamidouche, and D. K. Panda, “A High Performance Broadcast Design with Hardware Multicast and GPUDirect RDMA for Streaming Applications on InfiniBand Clusters,” in HiPC 2014, Dec 2014.
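At the application level, the streaming pattern above boils down to repeated broadcasts from a source rank into GPU-resident buffers on the worker ranks; a minimal sketch (frame size and loop count are illustrative, CUDA-aware MPI assumed):

    /* Streaming-style broadcast sketch: the root repeatedly broadcasts frames
     * into GPU device buffers on all worker ranks. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int frame_elems = 1 << 18;     /* illustrative frame size */
        float *frame;

        MPI_Init(&argc, &argv);
        cudaMalloc((void **)&frame, frame_elems * sizeof(float));

        for (int step = 0; step < 100; step++) {
            /* Rank 0 (the data source/sender) holds the current frame; with
             * MVAPICH2-GDR this can map onto IB hardware multicast + GDR. */
            MPI_Bcast(frame, frame_elems, MPI_FLOAT, 0, MPI_COMM_WORLD);
            /* ... launch CUDA kernels on the received frame (computation phase) ... */
        }

        cudaFree(frame);
        MPI_Finalize();
        return 0;
    }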

Page 42:

Streaming Benchmark @ CSCS (88 GPUs)

[Figure: broadcast latency (μs) vs. message size (1 byte to 16K, and 32K to 4M), comparing MCAST-GDR-OPT with MCAST-GDR; MCAST-GDR-OPT reduces latency by up to 58% for small messages and up to 79% for large messages]

• IB-MCAST + GDR + topology-aware IPC-based schemes
  – Up to 58% and 79% reduction for small and large messages

C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, "Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters, " SBAC-PAD'16, Oct. 26-28, 2016.

Page 43:

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices

Page 44:

Intra-node Point-to-Point Performance on OpenPOWER

[Figure: intra-socket small-message latency, large-message latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MVAPICH2-2.3rc1, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; MVAPICH2 achieves 0.30 us small-message latency]

Platform: two nodes of OpenPOWER (Power8-ppc64le) CPU using Mellanox EDR (MT4115) HCA

Page 45:

Inter-node Point-to-Point Performance on OpenPOWER

[Figure: inter-node small-message latency, large-message latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MVAPICH2-2.3rc1, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0]

Platform: two nodes of OpenPOWER (Power8-ppc64le) CPU using Mellanox EDR (MT4115) HCA

Page 46:

MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)

[Figure: intra-node latency (small and large messages), intra-node bandwidth, inter-node latency (small and large messages), and inter-node bandwidth vs. message size, for intra-socket (NVLink) and inter-socket paths]

• Intra-node bandwidth: 33.2 GB/sec (NVLink); intra-node latency: 13.8 us (without GPUDirect RDMA)
• Inter-node bandwidth: 6 GB/sec (FDR); inter-node latency: 23 us (without GPUDirect RDMA)
• Available in MVAPICH2-GDR 2.3a

Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and 4X-FDR InfiniBand interconnect

Page 47:

Scalable Host-based Collectives with CMA on OpenPOWER (Intra-node Reduce & Alltoall)

[Figure: Reduce and Alltoall latency (us) vs. message size (4 bytes to 4K, and 8K to 1M), nodes=1, PPN=20, comparing MVAPICH2-GDR-Next, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; annotated speedups for MVAPICH2 range from 1.2X to 5.2X across the panels]

Up to 5X and 3X performance improvement by MVAPICH2 for small and large messages, respectively

Page 48:

Optimized All-Reduce with XPMEM on OpenPOWER

[Figure: MPI_Allreduce latency (us) vs. message size (16K to 2M) on one node and two nodes (PPN=20), comparing MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0; annotated gains range from 34% and 48% up to 2X, 3X, 3.3X, and 4X]

• Optimized MPI All-Reduce design in MVAPICH2
  – Up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node

Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch

Page 49:

Intra-node Point-to-Point Performance on ARMv8

[Figure: small-message latency, large-message latency, bandwidth, and bi-directional bandwidth vs. message size for MVAPICH2; 0.74 microsec latency at 4 bytes]

• Available since MVAPICH2 2.3a

Platform: ARMv8 (aarch64) processor with 96 cores and a dual-socket CPU; each socket contains 48 cores.

Page 50:

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices

Page 51:

Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)

[Figure: execution time (s) for milc, leslie3d, pop2, lammps, wrf2, tera_tf, and lu, comparing Intel MPI 18.0.0 with MVAPICH2 2.3rc1; per-benchmark improvements between 1% and 10%]

• MVAPICH2 outperforms Intel MPI by up to 10%
• 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn)

Page 52:

Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)

[Figure: execution time (s) for milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf, and lu, comparing Intel MPI 18.0.0 with MVAPICH2 2.3rc1; per-benchmark differences range from -4% to 38%]

• MVAPICH2 outperforms Intel MPI by up to 38%
• 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn)

Page 53:

Compilation of Best Practices

• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• Compiled a list of such contributions through the MVAPICH website
  – http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications: Amber, HOOMD-blue, HPCG, LULESH, MILC, Neuron, SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu and we will link these results with credits to you.

Page 54:

HPC, Big Data, Deep Learning, and Cloud

• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and containers

Page 55:

How Can HPC Clusters with High-Performance Interconnect and Storage Architectures Benefit Big Data Applications?

• What are the major bottlenecks in current Big Data processing middleware (e.g., Hadoop, Spark, and Memcached)?
• Can the bottlenecks be alleviated with new designs by taking advantage of HPC technologies?
• Can RDMA-enabled high-performance interconnects benefit Big Data processing?
• Can HPC clusters with high-performance storage systems (e.g., SSD, parallel file systems) benefit Big Data applications?
• How much performance benefit can be achieved through enhanced designs?
• How do we design benchmarks for evaluating the performance of Big Data middleware on HPC clusters?

Bring HPC and Big Data processing into a “convergent trajectory”!

Page 56:

Designing Communication and I/O Libraries for Big Data Systems: Challenges

[Figure: layered architecture]
• Applications
• Big Data middleware (HDFS, MapReduce, HBase, Spark, and Memcached) – upper-level changes?
• Programming models (Sockets) – RDMA?
• Communication and I/O library: point-to-point communication, threaded models and synchronization, virtualization (SR-IOV), QoS & fault tolerance, I/O and file systems, performance tuning, benchmarks
• Networking technologies (InfiniBand, 1/10/40/100 GigE and intelligent NICs), commodity computing system architectures (multi- and many-core architectures and accelerators), storage technologies (HDD, SSD, NVM, and NVMe-SSD)

Page 57:

The High-Performance Big Data (HiBD) Project

• RDMA for Apache Spark
• RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
  – Plugins for Apache, Hortonworks (HDP), and Cloudera (CDH) Hadoop distributions
• RDMA for Apache HBase
• RDMA for Memcached (RDMA-Memcached)
• RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
• OSU HiBD-Benchmarks (OHB)
  – HDFS, Memcached, HBase, and Spark micro-benchmarks
• http://hibd.cse.ohio-state.edu
• Users base: 275 organizations from 34 countries
• More than 25,550 downloads from the project site
• Available for InfiniBand and RoCE; also runs on Ethernet
• Available for x86 and OpenPOWER

Page 58:

HiBD Release Timeline and Downloads

[Figure: cumulative number of downloads over time (approaching 30,000), annotated with release milestones: RDMA-Hadoop 2.x 0.9.1, RDMA-Hadoop 2.x 0.9.6, RDMA-Memcached 0.9.4, RDMA-Hadoop 2.x 0.9.7, RDMA-Spark 0.9.1, RDMA-HBase 0.9.1, RDMA-Hadoop 2.x 1.0.0, RDMA-Spark 0.9.4, RDMA-Hadoop 2.x 1.3.0, RDMA-Memcached 0.9.6 & OHB-0.9.3, RDMA-Spark 0.9.5, RDMA-Hadoop 1.x 0.9.0, RDMA-Hadoop 1.x 0.9.8, RDMA-Memcached 0.9.1 & OHB-0.7.1, RDMA-Hadoop 1.x 0.9.9]

Page 59:

Different Modes of RDMA for Apache Hadoop 2.x

• HHH: Heterogeneous storage devices with hybrid replication schemes are supported in this mode of operation to provide better fault-tolerance as well as performance. This mode is enabled by default in the package.
• HHH-M: A high-performance in-memory based setup has been introduced in this package that can be utilized to perform all I/O operations in memory and obtain as much performance benefit as possible.
• HHH-L: With parallel file systems integrated, HHH-L mode can take advantage of the Lustre available in the cluster.
• HHH-L-BB: This mode deploys a Memcached-based burst-buffer system to reduce the bandwidth bottleneck of shared file system access. The burst-buffer design is hosted by Memcached servers, each of which has a local SSD.
• MapReduce over Lustre, with/without local disks: Besides the HDFS-based solutions, this package also provides support to run MapReduce jobs on top of Lustre alone. Here, two different modes are introduced: with local disks and without local disks.
• Running with Slurm and PBS: Supports deploying RDMA for Apache Hadoop 2.x with Slurm and PBS in different running modes (HHH, HHH-M, HHH-L, and MapReduce over Lustre).

Page 60:

Design Overview of Spark with RDMA

• Design features
  – RDMA-based shuffle plugin
  – SEDA-based architecture
  – Dynamic connection management and sharing
  – Non-blocking data transfer
  – Off-JVM-heap buffer management
  – InfiniBand/RoCE support
• Enables high-performance RDMA communication, while supporting the traditional socket interface
• A JNI layer bridges Scala-based Spark with the communication library written in native code

[Figure: Apache Spark benchmarks/applications/libraries/frameworks run on Spark Core; the shuffle manager (Sort, Hash, Tungsten-Sort) uses a block transfer service (Netty, NIO, RDMA-Plugin) with Netty/NIO/RDMA servers and clients; the Java socket interface runs over 1/10/40/100 GigE and IPoIB networks, while the Java Native Interface (JNI) connects to a native RDMA-based communication engine over RDMA-capable networks (IB, iWARP, RoCE, …)]

X. Lu, M. W. Rahman, N. Islam, D. Shankar, and D. K. Panda, Accelerating Spark with RDMA for Big Data Processing: Early Experiences, Int'l Symposium on High Performance Interconnects (HotI'14), August 2014

X. Lu, D. Shankar, S. Gugnani, and D. K. Panda, High-Performance Design of Apache Spark with RDMA and Its Benefits on Various Workloads, IEEE BigData ‘16, Dec. 2016.

Page 61:

Using HiBD Packages for Big Data Processing on Existing HPC Infrastructure

Page 62:

Using HiBD Packages for Big Data Processing on Existing HPC Infrastructure

Page 63:

Performance Numbers of RDMA for Apache Hadoop 2.x – RandomWriter & TeraGen in OSU-RI2 (EDR)

[Figure: RandomWriter and TeraGen execution time (s) vs. data size (GB), comparing IPoIB (EDR) with OSU-IB (EDR) on a cluster with 8 nodes and a total of 64 maps; execution time reduced by 3x and 4x, respectively]

• RandomWriter: 3x improvement over IPoIB for 80–160 GB file size
• TeraGen: 4x improvement over IPoIB for 80–240 GB file size

Page 64:

Performance Evaluation of RDMA-Spark on SDSC Comet – HiBench PageRank

[Figure: PageRank total time (sec) vs. data size (Huge, BigData, Gigantic) for 32 worker nodes/768 cores and 64 worker nodes/1536 cores, comparing IPoIB with RDMA; total time reduced by 37% and 43%, respectively]

• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores, (768/1536M 768/1536R)
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node
  – 32 nodes/768 cores: total time reduced by 37% over IPoIB (56 Gbps)
  – 64 nodes/1536 cores: total time reduced by 43% over IPoIB (56 Gbps)

Page 65:

Performance Evaluation on SDSC Comet: Astronomy Application

• Kira toolkit[1]: a distributed astronomy image-processing toolkit implemented using Apache Spark
• Source-extractor application, using a 65 GB dataset from the SDSS DR2 survey that comprises 11,150 image files
• Compares RDMA Spark performance with the standard Apache implementation using IPoIB

[Figure: execution time (sec) for the Kira SE benchmark using the 65 GB dataset on 48 cores; RDMA Spark is 21% faster than Apache Spark (IPoIB)]

1. Z. Zhang, K. Barbary, F. A. Nothaft, E. R. Sparks, M. J. Franklin, D. A. Patterson, S. Perlmutter. Scientific Computing meets Big Data Technology: An Astronomy Use Case. CoRR, vol: abs/1507.03325, Aug 2015.

M. Tatineni, X. Lu, D. J. Choi, A. Majumdar, and D. K. Panda, Experiences and Benefits of Running RDMA Hadoop and Spark on SDSC Comet, XSEDE’16, July 2016

Page 66:

HPC, Big Data, Deep Learning, and Cloud

• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and containers


Deep Learning: New Challenges for MPI Runtimes

• Deep Learning frameworks are a different game altogether
  – Unusually large message sizes (on the order of megabytes)
  – Most communication based on GPU buffers
• Existing state of the art
  – cuDNN, cuBLAS, NCCL --> scale-up performance
  – NCCL2, CUDA-aware MPI --> scale-out performance, but for small and medium message sizes only
• Proposed: can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
  – Efficient overlap of computation and communication
  – Efficient large-message communication (reductions); see the sketch below
  – What application co-designs are needed to exploit communication-runtime co-designs?

[Figure: scale-up vs. scale-out performance landscape positioning cuDNN, cuBLAS, NCCL, NCCL2, MPI, gRPC, and Hadoop, with the proposed co-designs targeting both axes.]

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP ’17).
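The following minimal sketch illustrates the kind of large-message, GPU-resident reduction such co-designs target. It assumes a CUDA-aware MPI library (e.g., MVAPICH2-GDR) that accepts device pointers, one GPU per MPI rank, and an illustrative gradient buffer of NUM_ELEMS floats; it is not the S-Caffe design itself, only the Allreduce pattern a DL framework issues for gradient aggregation.

```c
/* Sketch: gradient aggregation with a CUDA-aware MPI_Allreduce.
 * Assumptions: CUDA-aware MPI (e.g., MVAPICH2-GDR), one GPU per rank,
 * illustrative buffer size; error checking omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define NUM_ELEMS (32 * 1024 * 1024)   /* ~128 MB of float gradients */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Gradients live in GPU memory; a CUDA-aware MPI accepts the
     * device pointer directly, so no host staging is written here. */
    float *d_grad;
    cudaMalloc((void **)&d_grad, NUM_ELEMS * sizeof(float));
    cudaMemset(d_grad, 0, NUM_ELEMS * sizeof(float));

    /* Sum gradients across all ranks, in place, on the GPU buffer. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, NUM_ELEMS, MPI_FLOAT,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Allreduce of %d floats completed\n", NUM_ELEMS);

    cudaFree(d_grad);
    MPI_Finalize();
    return 0;
}
```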


MVAPICH2: Allreduce Comparison with Baidu-allreduce and OpenMPI

• 16 GPUs (4 nodes): MVAPICH2-GDR* vs. Baidu-allreduce and OpenMPI 3.0 (a timing-loop sketch of how such latency curves are gathered follows below)
• Figure callouts: ~30x, ~10x, and ~4x improvements for MVAPICH2 across the three message-size ranges measured; MVAPICH2 is ~2x better than Baidu-allreduce, and OpenMPI is ~5x slower than Baidu-allreduce

*Available with MVAPICH2-GDR 2.3a

[Figures: Allreduce latency (us) vs. message size for MVAPICH2, Baidu-allreduce, and OpenMPI, one panel each for 4 B-256 KB, 512 KB-4 MB, and 8 MB-512 MB.]
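Latency curves like the ones summarized above are typically gathered with an osu_allreduce-style timing loop. The sketch below is a simplified, host-buffer version under assumed parameters (message-size range, iteration count, no warm-up phase); the actual OSU micro-benchmarks are more careful, so treat this only as an illustration of the measurement method.

```c
/* Sketch: per-message-size Allreduce latency measurement.
 * Assumptions: host buffers, illustrative size range and iteration
 * count, no warm-up; not the OSU micro-benchmark source. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 100;                          /* illustrative */
    for (size_t bytes = 4; bytes <= 4UL * 1024 * 1024; bytes *= 4) {
        size_t count = bytes / sizeof(float);
        float *sbuf = calloc(count, sizeof(float));
        float *rbuf = calloc(count, sizeof(float));

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++)
            MPI_Allreduce(sbuf, rbuf, (int)count, MPI_FLOAT,
                          MPI_SUM, MPI_COMM_WORLD);
        double avg_us = (MPI_Wtime() - t0) * 1e6 / iters;

        if (rank == 0)
            printf("%10zu bytes: %10.2f us\n", bytes, avg_us);

        free(sbuf);
        free(rbuf);
    }
    MPI_Finalize();
    return 0;
}
```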


MVAPICH2-GDR vs. NCCL2 – Broadcast Operation

• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Bcast (MVAPICH2-GDR) vs. ncclBcast (NCCL2) on 16 K-80 GPUs; the sketch below shows the two calls being compared
• ~10x and ~4x better latency in the 1 GPU/node and 2 GPUs/node configurations, respectively

*Will be available with the upcoming MVAPICH2-GDR 2.3b

Platform: Intel Xeon (Broadwell) nodes with dual-socket CPUs, 2 K-80 GPUs per node, and an EDR InfiniBand interconnect

[Figures: Broadcast latency (us) vs. message size for NCCL2 and MVAPICH2-GDR, one panel per GPUs-per-node configuration.]
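For context, the two broadcast paths being compared boil down to the two calls sketched below. This assumes NCCL2, a CUDA-aware MPI (used both for the MPI_Bcast path and to distribute the NCCL unique id, a common bootstrapping pattern), one visible GPU per rank, and an illustrative buffer of COUNT floats; it is a sketch, not the benchmark code behind the plots.

```c
/* Sketch: broadcasting the same GPU buffer via CUDA-aware MPI and
 * via NCCL2. Assumptions: one visible GPU per rank, illustrative
 * COUNT; error checking omitted. */
#include <mpi.h>
#include <nccl.h>
#include <cuda_runtime.h>

#define COUNT (1 << 20)   /* 1M floats */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    float *d_buf;
    cudaMalloc((void **)&d_buf, COUNT * sizeof(float));

    /* Path 1: CUDA-aware MPI broadcast directly on the GPU buffer. */
    MPI_Bcast(d_buf, COUNT, MPI_FLOAT, 0, MPI_COMM_WORLD);

    /* Path 2: NCCL broadcast; MPI only distributes the NCCL unique id. */
    ncclUniqueId id;
    if (rank == 0)
        ncclGetUniqueId(&id);
    MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

    ncclComm_t nccl_comm;
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    ncclCommInitRank(&nccl_comm, nranks, id, rank);

    ncclBcast(d_buf, COUNT, ncclFloat, 0, nccl_comm, stream);
    cudaStreamSynchronize(stream);

    ncclCommDestroy(nccl_comm);
    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```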


MVAPICH2-GDR vs. NCCL2 – Reduce Operation

• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Reduce (MVAPICH2-GDR) vs. ncclReduce (NCCL2) on 16 GPUs
• ~2.5x better latency for small messages (4 B-64 KB) and ~5x better for large messages

*Will be available with the upcoming MVAPICH2-GDR 2.3b

Platform: Intel Xeon (Broadwell) nodes with dual-socket CPUs, 1 K-80 GPU per node, and an EDR InfiniBand interconnect

[Figures: Reduce latency (us) vs. message size for MVAPICH2-GDR and NCCL2, small-message and large-message panels.]


MVAPICH2-GDR vs. NCCL2 – Allreduce Operation

• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs
• ~3x better latency for small messages (4 B-64 KB) and ~1.2x better for large messages

*Will be available with the upcoming MVAPICH2-GDR 2.3b

Platform: Intel Xeon (Broadwell) nodes with dual-socket CPUs, 1 K-80 GPU per node, and an EDR InfiniBand interconnect

[Figures: Allreduce latency (us) vs. message size for MVAPICH2-GDR and NCCL2, small-message and large-message panels.]


OSU-Caffe: Scalable Deep Learning

• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
  – Multi-GPU training within a single node
  – Performance degradation for GPUs across different sockets
  – Limited scale-out
• OSU-Caffe: MPI-based parallel training
  – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
  – Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
  – Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
• OSU-Caffe is publicly available from http://hidl.cse.ohio-state.edu/; support on OpenPOWER will be available soon

[Figure: GoogLeNet (ImageNet) training time (seconds) vs. number of GPUs (8-128) for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); the chart marks one configuration as an invalid use case.]


Performance Benefits for RDMA-gRPC with Micro-Benchmark

RDMA-gRPC RPC latency on SDSC Comet (FDR)

• Up to 2.7x speedup over IPoIB for small messages
• Up to 2.8x speedup over IPoIB for medium messages
• Up to 2.5x speedup over IPoIB for large messages

[Figures: RPC latency (us) vs. payload size for Default gRPC and OSU RDMA gRPC, one panel each for 2 B-8 KB, 16 KB-512 KB, and 1 MB-8 MB payloads.]

R. Biswas, X. Lu, and D. K. Panda, Accelerating gRPC and TensorFlow with RDMA for High-Performance Deep Learning over InfiniBand, under review.


Performance Benefits for RDMA-gRPC with TensorFlow Communication Mimic Benchmark

TensorFlow communication-pattern mimic benchmark on SDSC Comet (FDR)

• A single process spawns both the gRPC server and the gRPC client
• Up to 3.6x latency speedup over IPoIB
• Up to 3.7x throughput improvement over IPoIB

[Figures: Latency (us) and calls/second vs. payload size (2 MB, 4 MB, 8 MB) for Default gRPC and OSU RDMA gRPC.]


High-Performance Deep Learning over Big Data (DLoBD) Stacks

• Benefits of Deep Learning over Big Data (DLoBD)
  – Easily integrate deep learning components into Big Data processing workflows
  – Easily access data already stored in Big Data systems
  – No need to set up new dedicated deep learning clusters; reuse existing big data analytics clusters
• Challenges
  – Can RDMA-based designs in DLoBD stacks improve performance, scalability, and resource utilization on high-performance interconnects, GPUs, and multi-core CPUs?
  – What are the performance characteristics of representative DLoBD stacks on RDMA networks?
• Characterization of DLoBD stacks
  – CaffeOnSpark, TensorFlowOnSpark, and BigDL
  – IPoIB vs. RDMA; in-band vs. out-of-band communication; CPU vs. GPU; etc.
  – Performance, accuracy, scalability, and resource utilization
  – RDMA-based DLoBD stacks (e.g., BigDL over RDMA-Spark) can achieve a 2.6x speedup over the IPoIB-based scheme while maintaining similar accuracy

[Figure: per-epoch time (secs) and accuracy (%) vs. epoch number for IPoIB and RDMA, showing the 2.6x speedup at similar accuracy.]

X. Lu, H. Shi, M. H. Javed, R. Biswas, and D. K. Panda, Characterizing Deep Learning over Big Data (DLoBD) Stacks on RDMA-Capable Networks, HotI 2017.


HPC, Big Data, Deep Learning, and Cloud

• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and containers


Can HPC and Virtualization Be Combined?

• Virtualization has many benefits
  – Fault tolerance
  – Job migration
  – Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports OpenStack, Docker, and Singularity

J. Zhang, X. Lu, J. Jose, R. Shi, and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters?, EuroPar ’14.
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi, and D. K. Panda, High Performance MPI Library over SR-IOV Enabled InfiniBand Clusters, HiPC ’14.
J. Zhang, X. Lu, M. Arnold, and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds, CCGrid ’15.


Application-Level Performance on Chameleon

• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI2007 with 128 processes

[Figures: SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) across problem sizes (scale, edgefactor), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native.]


Application-Level Performance on Singularity with MVAPICH2

• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB (Class D) and Graph500, respectively

[Figures: NPB Class D (CG, EP, FT, IS, LU, MG) execution time (s) and Graph500 BFS execution time (ms) across problem sizes (scale, edgefactor), Singularity vs. native.]

J. Zhang, X. Lu, and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC ’17 (Best Student Paper Award).


MVAPICH2-GDR on Containers with Negligible Overhead

• GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth inside a Docker container closely track the native numbers; a ping-pong sketch of what the latency test measures follows below
• Setup: MVAPICH2-GDR 2.3a; Intel Haswell (E5-2687W @ 3.10 GHz) nodes with 20 cores; NVIDIA Volta V100 GPU; Mellanox ConnectX-4 EDR HCA; CUDA 9.0; Mellanox OFED 4.0 with GPUDirect RDMA

[Figures: GPU-GPU inter-node latency (us), bandwidth (MB/s), and bi-directional bandwidth (MB/s) vs. message size, Docker vs. native.]
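For reference, the "GPU-GPU inter-node latency" curves correspond to an osu_latency-style ping-pong on device buffers. The sketch below is a simplified version under assumed parameters (exactly two ranks on different nodes, a CUDA-aware MPI, one illustrative message size, no warm-up); it shows what is being measured, not the benchmark used for the plots.

```c
/* Sketch: GPU-to-GPU ping-pong latency with a CUDA-aware MPI.
 * Assumptions: exactly 2 ranks (one per node), one GPU per rank,
 * single illustrative message size; error checking omitted. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define MSG_BYTES (1 << 20)   /* 1 MB messages */
#define ITERS 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *d_buf;                       /* message lives in GPU memory */
    cudaMalloc((void **)&d_buf, MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(d_buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(d_buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(d_buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(d_buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)   /* one-way latency = round-trip time / 2 */
        printf("one-way latency: %.2f us\n",
               (MPI_Wtime() - t0) * 1e6 / (2.0 * ITERS));

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```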


Concluding Remarks

• Next-generation HPC systems need to be designed with a holistic view of Big Data, Deep Learning, and Cloud
• Presented some of the approaches and results along these directions
• These designs enable the Big Data, Deep Learning, and Cloud communities to take advantage of modern HPC technologies
• Many other open issues remain to be solved
• Next-generation HPC professionals need to be trained along these directions


Additional Presentation and Tutorials

• Talk: Exploiting Computation and Communication Overlap in MVAPICH2 and MVAPICH2-GDR MPI Libraries
  – 04/04/18, 9:00 am, at the Overlapping Communication with Computation Symposium
• Tutorial: How to Boost the Performance of Your MPI and PGAS Applications with MVAPICH2 Libraries
  – 04/05/18, 9:00 am-12:00 noon
• Tutorial: High Performance Distributed Deep Learning: A Beginner’s Guide
  – 04/05/18, 1:00 pm-4:00 pm


Funding Acknowledgments

• Funding support by: (sponsor logos on the original slide)
• Equipment support by: (vendor logos on the original slide)


Personnel Acknowledgments

Current Students (Graduate)
– A. Awan (Ph.D.), M. Bayatpour (Ph.D.), R. Biswas (M.S.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), H. Javed (Ph.D.), P. Kousha (Ph.D.), D. Shankar (Ph.D.), H. Shi (Ph.D.), J. Zhang (Ph.D.)

Current Students (Undergraduate)
– N. Sarkauskas (B.S.)

Current Research Scientists
– X. Lu, H. Subramoni

Current Post-docs
– A. Ruhela, K. Manian

Current Research Specialist
– J. Smith

Past Students
– A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), K. Kandalla (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), S. Krishnamoorthy (M.S.), K. Kulkarni (M.S.), R. Kumar (M.S.), P. Lai (M.S.), M. Li (Ph.D.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), H. Subramoni (Ph.D.), S. Sur (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)

Past Research Scientists
– K. Hamidouche, S. Sur

Past Post-Docs
– D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang

Past Programmers
– D. Bureddy, M. Arnold, J. Perkins


Thank You!

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
[email protected]

The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/