
Page 1: Scalable and Distributed DNN Training on Modern HPC Systems

Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Talk at the HPC-Advisory Council Switzerland Conference (April ’19)

Page 2: Understanding the Deep Learning Resurgence

• Deep Learning is a sub-set of Machine Learning
  – But, it is perhaps the most radical and revolutionary subset
  – Automatic feature extraction vs. hand-crafted features
• Deep Learning
  – A renewed interest and a lot of hype!
  – Key success: Deep Neural Networks (DNNs)
  – Everything was there since the late 80s except the “computability of DNNs”

Courtesy: http://www.deeplearningbook.org/contents/intro.html

Page 3: Deep Learning Use Cases and Growth Trends

Courtesy: https://www.top500.org/news/market-for-artificial-intelligence-projected-to-hit-36-billion-by-2025/

Page 4: Increasing Usage of HPC, Big Data and Deep Learning

• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)
• HPC (MPI, RDMA, Lustre, etc.)

Convergence of HPC, Big Data, and Deep Learning!
Increasing need to run these applications on the Cloud!!

Page 5: Newer Workflows – Deep Learning over Big Data (DLoBD)

Pipeline stages: (1) Prepare Datasets @Scale, (2) Deep Learning @Scale, (3) Non-deep learning analytics @Scale, (4) Apply ML model @Scale

• Deep Learning over Big Data (DLoBD) is one of the most efficient analytics paradigms
• More and more deep learning tools and libraries (e.g., Caffe, TensorFlow) are starting to run over big data stacks such as Apache Hadoop and Spark
• Benefits of the DLoBD approach
  – Easily build a powerful data analytics pipeline (e.g., the Flickr DL/ML pipeline, “How Deep Learning Powers Flickr”, http://bit.ly/1KIDfof)
  – Better data locality
  – Efficient resource sharing and cost effectiveness

Page 6: Drivers of Modern HPC Cluster Architectures

• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD
• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
• Available on HPC clouds, e.g., Amazon EC2, NSF Chameleon, Microsoft Azure, etc.

[Figure labels: Accelerators/coprocessors – high compute density, high performance/watt, >1 TFlop DP on a chip; high-performance interconnects (InfiniBand) – <1 usec latency, 200 Gbps bandwidth; multi-core processors; SSD, NVMe-SSD, NVRAM; example systems: K Computer, Sunway TaihuLight, Summit, Sierra]

Page 7: Key Phases of Deep Learning

• Deep Learning has two major tasks
  1. Training of the Deep Neural Network
  2. Inference (or deployment) that uses a trained DNN
• DNN training
  – Training is a compute- and communication-intensive process – it can take days to weeks
  – Faster training is necessary!
• Faster training can be achieved by
  – Using newer and faster hardware – but there is a limit!
  – Can we use more GPUs or nodes?
    • The need for parallel and distributed training

Page 8: Scale-up and Scale-out

• Scale-up: intra-node communication
  – Many improvements like:
    • NVIDIA cuDNN, cuBLAS, NCCL, etc.
    • CUDA 9 Co-operative Groups
• Scale-out: inter-node communication
  – DL frameworks – most are optimized for single-node execution only
  – Distributed (parallel) training is an emerging trend
    • OSU-Caffe – MPI-based
    • Microsoft CNTK – MPI/NCCL2
    • Google TensorFlow – gRPC-based/MPI/NCCL2
    • Facebook Caffe2 – hybrid (NCCL2/Gloo/MPI)

[Figure: qualitative plot of scale-up performance vs. scale-out performance for cuDNN, MKL-DNN, NCCL2, gRPC, Hadoop, and MPI, with a “Desired” design point that is strong on both axes]

Page 9: Holistic Evaluation is Important!!

• My framework is faster than your framework!
• This needs to be understood in a holistic way.
• Performance depends on the entire execution environment (the full stack)
• An isolated view of performance is not helpful

[Figure: the full DL stack – DL applications (image recognition, speech processing, etc.), DL frameworks (Caffe, TensorFlow, etc.), generic / MKL-optimized / cuDNN-optimized convolution layers, BLAS libraries (MKL 2017, cuDNN/cuBLAS, OpenBLAS, ATLAS, others), and hardware (multi-/many-core Xeon and Xeon Phi, many-core GPUs such as the Pascal P100, other processors)]

A. A. Awan, H. Subramoni, and Dhabaleswar K. Panda. “An In-depth Performance Characterization of CPU- and GPU-based DNN Training on Modern Architectures”, In Proceedings of the Machine Learning on HPC Environments (MLHPC'17). ACM, New York, NY, USA, Article 8.

Page 10: Broad Challenge: Exploiting HPC for Deep Learning

How to efficiently scale out a Deep Learning (DL) framework and take advantage of heterogeneous High Performance Computing (HPC) resources?

Page 11: Research Challenges to Exploit HPC Technologies

1. What are the fundamental issues in designing DL frameworks?
   – Memory requirements
   – Computation requirements
   – Communication overhead
2. Why do we need to support distributed training?
   – To overcome the limits of single-node training
   – To better utilize hundreds of existing HPC clusters

[Figure: major computation and communication phases in DL frameworks (forward and backward passes, gradient aggregation, model propagation) layered over DL/ML frameworks (Caffe/OSU-Caffe, Caffe2, CNTK, TensorFlow, MXNet), communication runtimes to support distributed training, and HPC platforms (CPU, GPU, InfiniBand)]

Page 12: Research Challenges to Exploit HPC Technologies (Cont’d)

3. What are the new design challenges brought forward by DL frameworks for communication runtimes?
   – Large-message collective communication and reductions
   – GPU buffers (CUDA-Awareness)
4. Can a co-design approach help in achieving scale-up and scale-out efficiently?
   – Co-design the support at the runtime level and exploit it at the DL framework level
   – What performance benefits can be observed?
   – What needs to be fixed at the communication runtime layer?

[Figure: the same stack as Page 11, annotated with CUDA-Awareness, large-message collectives, point-to-point operations, and co-design opportunities across the communication runtimes (MPI/NCCL/Gloo/MLSL)]

Page 13: Multiple Approaches taken up by OSU

• MPI-driven Deep Learning
  – CPU-based Deep Learning
  – GPU-based Deep Learning
• Co-designing Deep Learning Stacks with High-Performance MPI
• Out-of-core DNN training
• Accelerating TensorFlow on HPC Systems
• Accelerating Big Data Stacks
• Efficient Deep Learning over Big Data

Page 14: Data Parallel Deep Learning and MPI Collectives

[Figure: data-parallel training loop across four GPUs – 1. data propagation: MPI_Bcast of the packed parameter buffer (packed_comm_buff) from GPU 0; 2. forward/backward passes over layers L1..Ln on each GPU; 3. gradient aggregation: MPI_Reduce of the packed gradient buffers (packed_reduce_buff) to GPU 0, which applies the updates]

• Major MPI collectives involved in designing distributed frameworks
• MPI_Bcast – required for DNN parameter exchange
• MPI_Reduce – needed for gradient accumulation from multiple solvers
• MPI_Allreduce – use just one Allreduce instead of Reduce and Broadcast

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
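The collective pattern above can be sketched in a few lines of MPI code. The following is a minimal illustration using mpi4py and NumPy (assumptions made only for this sketch; OSU-Caffe/S-Caffe themselves are C/C++ codes built directly on MVAPICH2): rank 0 broadcasts the packed parameter buffer, every rank computes local gradients, and gradients are either reduced to rank 0 or combined with a single Allreduce.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1 << 20                                   # packed buffer length (illustrative)
params = np.zeros(n, dtype=np.float32)
if rank == 0:
    params[:] = 0.01                          # pretend rank 0 holds the current model

for step in range(3):                         # a few training iterations
    comm.Bcast(params, root=0)                # 1. parameter exchange (packed_comm_buff)

    grads = np.full(n, rank + 1.0, dtype=np.float32)   # 2. stand-in for forward/backward

    agg = np.empty_like(grads) if rank == 0 else None
    comm.Reduce(grads, agg, op=MPI.SUM, root=0)        # 3. gradient aggregation on GPU 0's rank
    if rank == 0:
        params -= 0.01 * (agg / size)         # apply the averaged update on the root

    # Alternative: one Allreduce replaces the Reduce plus the next iteration's Bcast
    # comm.Allreduce(MPI.IN_PLACE, grads, op=MPI.SUM)

Such a sketch would be launched with, e.g., mpirun -np 4 python data_parallel_sketch.py (the script name is hypothetical).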

Page 15: Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM) since 2015
  – Used by more than 3,000 organizations in 88 countries
  – More than 531,000 (> 0.5 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov ’18 ranking)
    • 3rd-ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
    • 14th, 556,104 cores (Oakforest-PACS) in Japan
    • 17th, 367,024 cores (Stampede2) at TACC
    • 27th, 241,108 cores (Pleiades) at NASA, and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, and OpenHPC)
  – http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
• Partner in the upcoming TACC Frontera system

Page 16: Architecture of MVAPICH2 Software Family

[Figure: architecture overview – high-performance parallel programming models (Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)) on top of a high-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis. Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU). Transport protocols: RC, XRC, UD, DC, UMR, ODP, SR-IOV, multi-rail. Transport mechanisms: shared memory, CMA, IVSHMEM. Modern features: MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)]

Page 17: MVAPICH2 Software Family (CPU-Based Deep Learning)

High-Performance Parallel Programming Libraries
• MVAPICH2 – Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
• MVAPICH2-X – Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications
• MVAPICH2-Virt – High-performance and scalable MPI for hypervisor- and container-based HPC clouds
• MVAPICH2-EA – Energy-aware and high-performance MPI
• MVAPICH2-MIC – Optimized MPI for clusters with Intel KNC

Microbenchmarks
• OMB – Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs

Tools
• OSU INAM – Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
• OEMT – Utility to measure the energy consumption of MPI applications

Page 18: Performance of CNTK with MVAPICH2-X on CPU-Based Deep Learning

[Figure: CNTK AlexNet training execution time (s) vs. number of processes (28, 56, 112, 224; batch size = default, iterations = 50, ppn = 28) for Intel MPI, MVAPICH2, and MVAPICH2-XPMEM, with annotated 20% and 9% improvements]

• CPU-based training of the AlexNet neural network using the ImageNet ILSVRC2012 dataset
• Advanced XPMEM-based designs show up to 20% benefit over Intel MPI (IMPI) for CNTK DNN training using Allreduce
• The proposed designs show good scalability with increasing system size

Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. K. Panda, 32nd IEEE International Parallel & Distributed Processing Symposium (IPDPS ’18), May 2018

Available since the MVAPICH2-X 2.3rc1 release

Page 19: Performance of TensorFlow with MVAPICH2-X on CPU

• CPU-based distributed TensorFlow benchmarks (tf_cnn_benchmarks tests)
• AlexNet model training with the ImageNet ILSVRC2012 dataset
• Advanced SALaR- and XPMEM-based designs in MVAPICH2-X showed good scalability
• Up to 15% and 35% improvements in the number of images per second at 448 and 896 processes, respectively

[Figure: TensorFlow images per second (higher is better) vs. number of processes (112, 224, 448, 896) for MVAPICH2 2.3rc2, SALaR-SHMEM, and SALaR-XPMEM, with up to 35% improvement]

SALaR: Scalable and Adaptive Designs for Large Message Reduction Collectives, M. Bayatpour, J. Hashmi, S. Chakraborty, H. Subramoni, P. Kousha, and D. K. Panda, IEEE Cluster 2018, Sep 2018 [Best Paper in Architecture Track]

Will be available in future MVAPICH2-X releases

Page 20: MVAPICH2 Software Family (GPU-Based Deep Learning)

High-Performance Parallel Programming Libraries
• MVAPICH2 – Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
• MVAPICH2-X – Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications
• MVAPICH2-Virt – High-performance and scalable MPI for hypervisor- and container-based HPC clouds
• MVAPICH2-EA – Energy-aware and high-performance MPI
• MVAPICH2-MIC – Optimized MPI for clusters with Intel KNC

Microbenchmarks
• OMB – Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs

Tools
• OSU INAM – Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
• OEMT – Utility to measure the energy consumption of MPI applications

Page 21: GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement is handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers

High Performance and High Productivity
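As a concrete illustration of the "standard MPI interfaces on device buffers" idea, the sketch below passes CuPy GPU arrays directly to mpi4py point-to-point calls. This assumes mpi4py (3.1 or newer) built against a CUDA-aware MPI such as MVAPICH2-GDR and CuPy installed; none of this tooling is named on the slide itself.

from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 4 << 20                                    # 4 MB message (illustrative)
if rank == 0:
    s_devbuf = cp.ones(n, dtype=cp.uint8)      # send buffer lives in GPU memory
    cp.cuda.Device().synchronize()             # make sure the data is ready
    comm.Send(s_devbuf, dest=1, tag=77)        # device pointer handed straight to MPI
elif rank == 1:
    r_devbuf = cp.empty(n, dtype=cp.uint8)     # receive buffer also in GPU memory
    comm.Recv(r_devbuf, source=0, tag=77)
    print("rank 1 received", int(r_devbuf.sum()), "ones")

The MPI library decides internally whether to stage through the host or use GPUDirect RDMA; the application code does not change.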

Page 22: CUDA-Aware MPI: MVAPICH2-GDR 1.8–2.3.1 Releases

• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory

Page 23: MVAPICH2-GDR: Pre-requisites for OpenPOWER & x86 Systems

• MVAPICH2-GDR 2.3.1 requires the following software to be installed on your system:
  1. Mellanox OFED 3.2 or later
  2. NVIDIA Driver 367.48 or later
  3. NVIDIA CUDA Toolkit 7.5 or later
  4. NVIDIA Peer Memory (nv_peer_mem) module to enable GPUDirect RDMA (GDR) support
• Strongly recommended for best performance:
  5. GDRCOPY library by NVIDIA: https://github.com/NVIDIA/gdrcopy
• Comprehensive instructions are available in the MVAPICH2-GDR user guide:
  – http://mvapich.cse.ohio-state.edu/userguide/gdr/

Page 24: MVAPICH2-GDR: Download and Setup on OpenPOWER & x86 Systems

• Simple installation steps for both systems
• Pick the right MVAPICH2-GDR RPM from the Downloads page:
  – http://mvapich.cse.ohio-state.edu/downloads/
  – e.g., http://mvapich.cse.ohio-state.edu/download/mvapich/gdr/2.3/mofed4.5/mvapich2-gdr-mcast.cuda10.0.mofed4.5.gnu4.8.5-2.3-1.el7.x86_64.rpm (== <mv2-gdr-rpm-name>.rpm)

$ wget http://mvapich.cse.ohio-state.edu/download/mvapich/gdr/2.3/<mv2-gdr-rpm-name>.rpm

Root users:
$ rpm -Uvh --nodeps <mv2-gdr-rpm-name>.rpm

Non-root users:
$ rpm2cpio <mv2-gdr-rpm-name>.rpm | cpio -id

• Contact the MVAPICH help list with any questions related to the package: [email protected]

Page 25: MVAPICH2-GDR 2.3.1

• Released on 03/16/2018
• Major features and enhancements
  – Based on MVAPICH2 2.3.1
  – Enhanced intra-node and inter-node point-to-point performance for DGX-2 and IBM POWER8/POWER9 systems
  – Enhanced Allreduce performance for DGX-2 and IBM POWER8/POWER9 systems
  – Enhanced small-message performance for CUDA-aware MPI_Put and MPI_Get
  – Support for PGI 18.10
  – Flexible support for running TensorFlow (Horovod) jobs
  – Added support for Volta (V100) GPUs
  – Support for OpenPOWER with NVLink
  – Efficient multiple CUDA stream-based IPC communication for multi-GPU systems with and without NVLink
  – Leverage the Linux Cross Memory Attach (CMA) feature for enhanced host-based communication
  – InfiniBand Multicast (IB-MCAST) based designs for GPU-based broadcast and streaming applications
  – Efficient broadcast designs for Deep Learning applications

Page 26: Optimized MVAPICH2-GDR Design

[Figure: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size for MV2 (no GDR) vs. MV2-GDR 2.3 – latency down to 1.85 us (~11x better), ~9x higher bandwidth, and ~10x higher bi-bandwidth for small messages]

Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA

Page 27: Distributed Training using TensorFlow (TF)

• TensorFlow is the most popular DL framework
• gRPC is the official distributed training runtime
  – Many problems for HPC use cases
• Community efforts – Baidu and Uber’s Horovod have added MPI support to TF across nodes
• Need to understand the several options currently available →

[Figure: taxonomy of distributed TensorFlow – gRPC (including accelerated gRPC), gRPC+X (gRPC+MPI, gRPC+Verbs, gRPC+GDR), and No-gRPC (Baidu-MPI, Horovod with MPI or NCCL)]

Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”,(To be presented) CCGrid ‘19. https://arxiv.org/abs/1810.11112
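To make the Horovod path concrete, here is a minimal TensorFlow 1.x-style sketch (the toy model, learning rate, and loop length are illustrative assumptions, not taken from the slides): Horovod runs one process per GPU, wraps the optimizer so gradients are averaged with Allreduce (MPI or NCCL underneath), and broadcasts rank 0's initial variables so all workers start identically.

import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()                                        # one process per GPU

config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())   # pin the local GPU

w = tf.get_variable("w", initializer=0.0)         # toy model: fit a single scalar
loss = tf.square(w - 3.0)

opt = tf.train.GradientDescentOptimizer(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)               # gradient averaging via Allreduce
train_op = opt.minimize(loss)

hooks = [hvd.BroadcastGlobalVariablesHook(0)]     # start all ranks from rank 0's state

with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    for _ in range(100):
        sess.run(train_op)

Such a script is typically launched with mpirun or horovodrun with one rank per GPU; whether the underlying Allreduce comes from the MPI library or from NCCL2 is decided when Horovod is built.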

Page 28: Scalable TensorFlow using Horovod, MPI, and NCCL

• Efficient Allreduce is crucial for Horovod’s overall training performance
  – Both MPI and NCCL designs are available
• We have evaluated Horovod extensively and compared it across a wide range of designs using gRPC and gRPC extensions
• MVAPICH2-GDR achieved up to 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs

[Figure: images/second (higher is better) vs. number of GPUs (1–16) for Horovod-MPI, Horovod-NCCL2, Horovod-MPI-Opt (Proposed), and Ideal; and images/second vs. number of nodes (GPUs) from 1 to 64 for Horovod-NCCL2, Horovod-MPI-Opt, and Ideal]

Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”, (To be presented) CCGrid ‘19. https://arxiv.org/abs/1810.11112

Page 29: MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI

• 16 GPUs (4 nodes): MVAPICH2-GDR vs. Baidu-allreduce and OpenMPI 3.0

[Figure: Allreduce latency (us) vs. message size (4 B up to 512 MB, three panels) for MVAPICH2, Baidu, and OpenMPI; annotations: ~30x better, MV2 is ~2x better than Baidu, ~10x better, OpenMPI is ~5x slower than Baidu, ~4x better]

*Available since MVAPICH2-GDR 2.3a
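The comparison above boils down to timing Allreduce on GPU-resident buffers across message sizes. A minimal way to collect such numbers is sketched below with mpi4py and CuPy (assumptions: mpi4py built against a CUDA-aware MPI, CuPy installed; the message sizes and iteration count are illustrative, and this is not the benchmark code behind the slide).

from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
iters = 100

for nbytes in (512 * 1024, 1024 * 1024, 2 * 1024 * 1024, 4 * 1024 * 1024):
    sendbuf = cp.ones(nbytes // 4, dtype=cp.float32)   # GPU-resident "gradients"
    recvbuf = cp.empty_like(sendbuf)
    cp.cuda.Device().synchronize()
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(iters):
        comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
    cp.cuda.Device().synchronize()
    if rank == 0:
        print("%9d bytes  %10.1f us/allreduce" % (nbytes, (MPI.Wtime() - t0) / iters * 1e6))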

Page 30: MVAPICH2-GDR vs. NCCL2 – Allreduce Operation

• Optimized designs in MVAPICH2-GDR 2.3 offer better or comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs

[Figure: Allreduce latency (us) vs. message size for MVAPICH2-GDR and NCCL2 – ~1.2x better across the full range and ~3x better for small messages (4 B–64 KB)]

Platform: Intel Xeon (Broadwell) nodes equipped with dual-socket CPUs, one K-80 GPU, and EDR InfiniBand interconnect

Page 31: MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2)

• Optimized designs in the upcoming MVAPICH2-GDR offer better or comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs)

[Figure: Allreduce latency (us) vs. message size for MVAPICH2-GDR-Next and NCCL-2.3 – ~1.7x better across the full range and ~2.5x better for small/medium messages (8 B–128 KB)]

Platform: NVIDIA DGX-2 system (16 NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2

Page 32: Distributed Training with TensorFlow and MVAPICH2-GDR

• ResNet-50 training using the TensorFlow benchmark on 1 DGX-2 node (8 Volta GPUs)

[Figure: images per second and scaling efficiency (%) vs. number of GPUs (1, 2, 4, 8) for NCCL-2.3 and MVAPICH2-GDR-Next; MVAPICH2-GDR-Next delivers up to 7.5% higher throughput]

Scaling Efficiency = (Actual throughput / Ideal throughput at scale) × 100%

Platform: NVIDIA DGX-2 system (16 NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2
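The scaling-efficiency formula above is simple enough to compute inline; the helper below just restates it (the example numbers are made up for illustration and are not the measured results from the slide).

def scaling_efficiency(actual_throughput, single_gpu_throughput, num_gpus):
    # Efficiency = actual throughput / ideal throughput at scale * 100%
    ideal = single_gpu_throughput * num_gpus
    return 100.0 * actual_throughput / ideal

# Hypothetical numbers: 1 GPU at 330 images/s, 8 GPUs at 2400 images/s -> ~90.9%
print(scaling_efficiency(2400.0, 330.0, 8))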

Page 33: OSU-Caffe: Scalable Deep Learning

• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
  – Multi-GPU training within a single node
  – Performance degradation for GPUs across different sockets
  – Limited scale-out
• OSU-Caffe: MPI-based parallel training
  – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
  – Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
  – Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset

[Figure: GoogLeNet (ImageNet) training time (seconds) on 8–128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); the chart marks an invalid use case for default Caffe at larger GPU counts]

OSU-Caffe is publicly available from http://hidl.cse.ohio-state.edu/

Page 34: Training Large (Out-of-core) Models

• Large DNNs cannot be trained on GPUs due to memory limitations!
  – ResNet-50 for image recognition: current frameworks can only go up to a small batch size of 45
  – Next-generation models like Neural Machine Translation (NMT) are ridiculously large, consist of billions of parameters, and require even more memory
  – Can we design out-of-core DNN training support using new software features in CUDA 8/9 and hardware mechanisms in Pascal/Volta GPUs?
• The general intuition is that managed allocations “will be” slow!
  – The proposed framework, OC-Caffe (Out-of-Core Caffe), shows the potential of managed-memory designs that can provide performance with negligible or no overhead
• OC-Caffe-Opt: up to 80% better than Intel-optimized CPU Caffe for ResNet-50 training on the Volta V100 GPU with CUDA 9 and cuDNN 7

[Figure: trainability (memory requirements) – batch size vs. memory for AlexNet, GoogLeNet, and VGG, with the P100 GPU memory limit (16 GB) marking where out-of-core training begins; and images/sec (higher is better) for caffe-gpu (cannot run), oc-caffe-naïve, oc-caffe-opt, caffe-cpu, intel-caffe, and intel-caffe-opt (N/A) – oc-caffe-opt is 80% better than intel-caffe]

A. Awan et al., OC-DNN: Exploiting Advanced Unified Memory Capabilities in CUDA 9 and Volta GPUs for Out-of-Core DNN Training, HiPC ’18
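The enabling mechanism behind this is CUDA managed (unified) memory, which lets allocations exceed physical GPU memory and be paged on demand on Pascal/Volta GPUs. OC-DNN itself is a C++/CUDA design; the snippet below is only a CuPy-based illustration of the same idea (assumptions: CuPy installed, a Pascal or newer GPU; array sizes are illustrative and scaled down).

import cupy as cp

# Route CuPy allocations through cudaMallocManaged so the driver can
# oversubscribe device memory and migrate pages between host and GPU.
cp.cuda.set_allocator(cp.cuda.MemoryPool(cp.cuda.malloc_managed).malloc)

# Grow the leading dimension beyond what the GPU can hold and the allocation
# still succeeds; pages are faulted onto the GPU as the kernel touches them.
x = cp.zeros((64, 3, 1024, 1024), dtype=cp.float32)     # ~0.8 GB of "activations"
w = cp.random.standard_normal((1024, 1024), dtype=cp.float32)

y = cp.tensordot(x, w, axes=([3], [0]))                 # compute proceeds as usual
cp.cuda.Device().synchronize()
print(y.shape)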

Page 35: Multiple Approaches taken up by OSU

• MPI-driven Deep Learning
• Co-designing Deep Learning Stacks with High-Performance MPI
• Out-of-core DNN Training
• Accelerating TensorFlow on HPC Systems
• Accelerating Big Data Stacks
• Efficient Deep Learning over Big Data

Page 36: Architecture Overview of gRPC

Key features:
• Simple service definition
• Works across languages and platforms (C++, Java, Python, Android Java, etc.; Linux, Mac, Windows)
• Start quickly and scale
• Bi-directional streaming and integrated authentication
• Used by Google (several of Google’s cloud products and externally facing APIs, TensorFlow), Netflix, Docker, Cisco, Juniper Networks, etc.
• Uses sockets for communication!

Large-scale distributed systems composed of microservices

Source: http://www.grpc.io/

Page 37: Performance Benefits for RDMA-gRPC with Micro-Benchmarks

• gRPC-RDMA latency on SDSC-Comet-FDR
  – Up to 2.7x speedup over IPoIB for small-message latency
  – Up to 2.8x speedup over IPoIB for medium-message latency
  – Up to 2.5x speedup over IPoIB for large-message latency

[Figure: RDMA-gRPC RPC latency (us) vs. payload size for Default gRPC and OSU RDMA gRPC across small (2 B–8 KB), medium (16 KB–512 KB), and large (1 MB–8 MB) payloads]

R. Biswas, X. Lu, and D. K. Panda, Accelerating gRPC and TensorFlow with RDMA for High-Performance Deep Learning over InfiniBand, HiPC ’18.

Page 38: Performance Benefit for RDMA-TensorFlow (Inception3)

• TensorFlow Inception3 performance evaluation on an IB EDR cluster
  – Up to 20% speedup over default gRPC (IPoIB) for 8 GPUs
  – Up to 34% speedup over default gRPC (IPoIB) for 16 GPUs
  – Up to 37% speedup over default gRPC (IPoIB) for 24 GPUs

[Figure: images/second vs. batch size (16, 32, 64) on 4 nodes (8 GPUs), 8 nodes (16 GPUs), and 12 nodes (24 GPUs) for gRPC (IPoIB-100Gbps), Verbs (RDMA-100Gbps), MPI (RDMA-100Gbps), and AR-gRPC (RDMA-100Gbps)]

R. Biswas, X. Lu, and D. K. Panda, Accelerating TensorFlow with Adaptive RDMA-based gRPC, HiPC ’18.

Page 39: RDMA-TensorFlow Distribution

• High-performance design of TensorFlow over RDMA-enabled interconnects
  – High-performance RDMA-enhanced design with native InfiniBand support at the verbs level for gRPC and TensorFlow
  – RDMA-based data communication
  – Adaptive communication protocols
  – Dynamic message chunking and accumulation
  – Support for RDMA device selection
  – Easily configurable for different protocols (native InfiniBand and IPoIB)
• Current release: 0.9.1
  – Based on Google TensorFlow 1.3.0
  – Tested with
    • Mellanox InfiniBand adapters (e.g., EDR)
    • NVIDIA GPGPU K80
    • CUDA 8.0 and cuDNN 5.0
  – http://hidl.cse.ohio-state.edu

Page 40: Multiple Approaches taken up by OSU

• MPI-driven Deep Learning
• Co-designing Deep Learning Stacks with High-Performance MPI
• Out-of-core DNN Training
• Accelerating TensorFlow on HPC Systems
• Accelerating Big Data Stacks
• Efficient Deep Learning over Big Data

Page 41: The High-Performance Big Data (HiBD) Project

• RDMA for Apache Spark
• RDMA for Apache Hadoop 3.x (RDMA-Hadoop-3.x)
• RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
  – Plugins for Apache, Hortonworks (HDP), and Cloudera (CDH) Hadoop distributions
• RDMA for Apache Kafka
• RDMA for Apache HBase
• RDMA for Memcached (RDMA-Memcached)
• RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
• OSU HiBD-Benchmarks (OHB)
  – HDFS, Memcached, HBase, and Spark micro-benchmarks
• http://hibd.cse.ohio-state.edu
• Users base: 305 organizations from 35 countries
• More than 29,500 downloads from the project site

Available for InfiniBand and RoCE (also runs on Ethernet). Available for x86 and OpenPOWER. Support for Singularity and Docker.

Page 42: Performance Numbers of RDMA for Apache Hadoop 2.x – RandomWriter & TeraGen in OSU-RI2 (EDR)

• Cluster with 8 nodes and a total of 64 maps
• RandomWriter
  – 3x improvement over IPoIB for 80–160 GB file sizes
• TeraGen
  – 4x improvement over IPoIB for 80–240 GB file sizes

[Figure: execution time (s) vs. data size (GB) for IPoIB (EDR) and OSU-IB (EDR) – RandomWriter (80, 120, 160 GB; reduced by 3x) and TeraGen (80, 160, 240 GB; reduced by 4x)]

Page 43: Performance Evaluation on SDSC Comet – HiBench PageRank

• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores (768/1536 maps, 768/1536 reducers)
• RDMA-based design for Spark 1.5.1
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node
  – 32 nodes/768 cores: total time reduced by 37% over IPoIB (56 Gbps)
  – 64 nodes/1536 cores: total time reduced by 43% over IPoIB (56 Gbps)

[Figure: PageRank total time (sec) for the Huge, BigData, and Gigantic data sizes, IPoIB vs. RDMA – 37% reduction on 32 worker nodes (768 cores) and 43% on 64 worker nodes (1536 cores)]

Page 44: High-Performance Deep Learning over Big Data (DLoBD) Stacks

• Challenges of Deep Learning over Big Data (DLoBD)
  – Can RDMA-based designs in DLoBD stacks improve performance, scalability, and resource utilization on high-performance interconnects, GPUs, and multi-core CPUs?
  – What are the performance characteristics of representative DLoBD stacks on RDMA networks?
• Characterization of DLoBD stacks
  – CaffeOnSpark, TensorFlowOnSpark, and BigDL
  – IPoIB vs. RDMA; in-band vs. out-of-band communication; CPU vs. GPU; etc.
  – Performance, accuracy, scalability, and resource utilization
  – RDMA-based DLoBD stacks (e.g., BigDL over RDMA-Spark) can achieve a 2.6x speedup compared to the IPoIB-based scheme, while maintaining similar accuracy

[Figure: epoch time (secs) and accuracy (%) vs. epoch number (1–18) for IPoIB and RDMA – 2.6x faster epochs at similar accuracy]

X. Lu, H. Shi, M. H. Javed, R. Biswas, and D. K. Panda, Characterizing Deep Learning over Big Data (DLoBD) Stacks on RDMA-capable Networks, HotI 2017.

Page 45: Commercial Support for MVAPICH2, HiBD, and HiDL Libraries

• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
  – Help and guidance with installation of the library
  – Platform-specific optimizations and tuning
  – Timely support for operational issues encountered with the library
  – Web portal interface to submit issues and track their progress
  – Advanced debugging techniques
  – Application-specific optimizations and tuning
  – Guidelines on best practices
  – Periodic information on major fixes and updates
  – Information on major releases
  – Help with upgrading to the latest release
  – Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years

Page 46: Silver ISV Member of the OpenPOWER Consortium

• Recently joined the OpenPOWER Consortium as a silver ISV member
• Provides flexibility:
  – To have the MVAPICH2, HiDL, and HiBD libraries integrated into the OpenPOWER software stack
  – To be a part of the OpenPOWER ecosystem
  – To participate with different vendors in the bidding, installation, and deployment process

Page 47: Conclusions

• Scalable distributed training is becoming important
• It requires high-performance middleware designs that exploit modern interconnects
• Presented a set of different solutions to achieve scalable distributed training
  – Optimized collectives for CPU-based training
  – CUDA-aware MPI with optimized collectives for GPU-based training
  – TensorFlow-gRPC with RDMA support
  – Efficient DL support over Big Data
• Will continue to enable the DL community to achieve scalability and high performance for their distributed training

Page 48: Funding Acknowledgments

Funding Support by [sponsor logos]

Equipment Support by [vendor logos]

Page 49: Personnel Acknowledgments

Current Students (Graduate): A. Awan (Ph.D.), M. Bayatpour (Ph.D.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), A. Jain (Ph.D.), K. S. Khorassani (Ph.D.), P. Kousha (Ph.D.), D. Shankar (Ph.D.)

Current Students (Undergraduate): V. Gangal (B.S.), M. Haupt (B.S.), N. Sarkauskas (B.S.), A. Yeretzian (B.S.)

Current Research Scientist: H. Subramoni
Current Research Asst. Professor: X. Lu
Current Post-docs: A. Ruhela, K. Manian
Current Research Specialist: J. Smith

Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), R. Biswas (M.S.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), K. Kulkarni (M.S.), R. Kumar (M.S.), S. Krishnamoorthy (M.S.), K. Kandalla (Ph.D.), M. Li (Ph.D.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), S. Sur (Ph.D.), H. Subramoni (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.), J. Zhang (Ph.D.)

Past Research Scientists: K. Hamidouche, S. Sur
Past Post-Docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang
Past Programmers: D. Bureddy, J. Perkins
Past Research Specialist: M. Arnold

Page 50: Thank You!

Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
[email protected]

The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/

The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/

The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/