software libraries and middleware for exascale systems · 2017-07-19 · (hadoop, spark, hbase,...
TRANSCRIPT
Designing HPC, Big Data and Deep Learning Middleware for Exascale Systems: Challenges and Opportunities
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda
Talk at US-China Workshop (PEARC ‘17)
by
PEARC ‘17 2Network Based Computing Laboratory
Big Data (Hadoop, Spark, HBase, Memcached, etc.)
Deep Learning (Caffe, TensorFlow, BigDL, etc.)
HPC (MPI, RDMA, Lustre, etc.)
Increasing Usage of HPC, Big Data and Deep Learning
Convergence of HPC, Big Data, and Deep Learning!
Increasing Need to Run these applications on the Cloud!!
PEARC ‘17 3Network Based Computing Laboratory
Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?
Physical Compute
Spark Job
Hadoop Job  Deep Learning Job
PEARC ‘17 7Network Based Computing Laboratory
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
– Virtualization with SR-IOV and Containers
HPC, Big Data, Deep Learning, and Cloud
PEARC ‘17 8Network Based Computing Laboratory
Designing Communication Middleware for Multi-Petaflop and Exaflop Systems: Challenges
Application Kernels/Applications
Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path) | Multi-/Many-core Architectures | Accelerators (NVIDIA and MIC)
Middleware co-design opportunities and challenges across the various layers: performance, scalability, fault-resilience
PEARC ‘17 9Network Based Computing Laboratory
Overview of the MVAPICH2 Project
• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,795 organizations in 85 countries
– More than 421,000 (> 0.4 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (June ‘17 ranking)
• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 15th, 241,108-core (Pleiades) at NASA
• 20th, 462,462-core (Stampede) at TACC
• 44th, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->
– Stampede at TACC (12th in Jun’16, 462,462 cores, 5.168 PFlops)
PEARC ‘17 10Network Based Computing Laboratory
[Figure: cumulative number of downloads from the OSU site, Sep 2004 – Mar 2017, annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 1.9, MV2 2.1, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.1rc2, MV2-GDR 2.2rc1, MV2-X 2.2, and MV2 2.3a.]
MVAPICH2 Release Timeline and Downloads
PEARC ‘17 11Network Based Computing Laboratory
Architecture of MVAPICH2 Software Family
High Performance Parallel Programming Models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPower, Xeon Phi (MIC, KNL), NVIDIA GPGPU)
Transport protocols: RC, XRC, UD, DC; transport mechanisms: shared memory, CMA, IVSHMEM; multi-rail; modern features: UMR, ODP, SR-IOV, MCDRAM*, NVLink*, CAPI* (* upcoming)
PEARC ‘17 12Network Based Computing Laboratory
MVAPICH2 Software Family
High-Performance Parallel Programming Libraries
• MVAPICH2 – Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
• MVAPICH2-X – Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs
• MVAPICH2-Virt – High-performance and scalable MPI for hypervisor- and container-based HPC cloud
• MVAPICH2-EA – Energy-aware and high-performance MPI
• MVAPICH2-MIC – Optimized MPI for clusters with Intel KNC
Microbenchmarks
• OMB – Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
Tools
• OSU INAM – Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
• OEMT – Utility to measure the energy consumption of MPI applications
PEARC ‘17 13Network Based Computing Laboratory
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication
– Scalable start-up
– Dynamic and adaptive communication protocols and tag matching
– Optimized collectives using SHArP
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for KNL/Omni-Path, OpenPower, and ARM
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
PEARC ‘17 14Network Based Computing Laboratory
One-way Latency: MPI over IB with MVAPICH2
[Figures: Small Message Latency and Large Message Latency, latency (us) vs. message size (bytes); small-message latencies of roughly 1.01–1.19 us across the five platforms.]
Platforms: TrueScale-QDR (3.1 GHz deca-core Haswell, PCIe Gen3, IB switch); ConnectX-3-FDR (2.8 GHz deca-core IvyBridge, PCIe Gen3, IB switch); ConnectIB-Dual FDR (3.1 GHz deca-core Haswell, PCIe Gen3, IB switch); ConnectX-4-EDR (3.1 GHz deca-core Haswell, PCIe Gen3, IB switch); Omni-Path (3.1 GHz deca-core Haswell, PCIe Gen3, Omni-Path switch)
PEARC ‘17 15Network Based Computing Laboratory
Bandwidth: MPI over IB with MVAPICH2
[Figures: Unidirectional and Bidirectional Bandwidth, bandwidth (MBytes/sec) vs. message size (bytes); annotated peak unidirectional bandwidths of 3,373, 6,356, 12,083, 12,366, and 12,590 MBytes/sec and peak bidirectional bandwidths of 6,228, 12,161, 21,227, 21,983, and 24,136 MBytes/sec across the five platforms.]
Platforms: the same five configurations as on the previous slide (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path).
PEARC ‘17 16Network Based Computing Laboratory
Startup Performance on KNL + Omni-Path
[Figure: MPI_Init and Hello World time (seconds) vs. number of processes (64 to 64K) on Oakforest-PACS, comparing Hello World (MVAPICH2-2.3a) and MPI_Init (MVAPICH2-2.3a).]
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s, respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
New designs available in MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
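To make concrete what the startup benchmark above measures, here is a minimal sketch of an MPI "Hello World" that times MPI_Init and the time to a completed Hello World; the timer placement and output format are illustrative assumptions, not the benchmark used on the slide.

/* Minimal sketch: time MPI_Init and "Hello World" (assumed timer placement). */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now_sec();
    MPI_Init(&argc, &argv);
    double t_init = now_sec() - t0;          /* "MPI_Init" time */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello World from rank %d of %d\n", rank, size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t_hello = now_sec() - t0;         /* "Hello World" time */

    if (rank == 0)
        printf("MPI_Init: %.2f s, Hello World: %.2f s\n", t_init, t_hello);

    MPI_Finalize();
    return 0;
}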
PEARC ‘17 17Network Based Computing Laboratory
Dynamic and Adaptive MPI Point-to-point Communication Protocols
Eager Threshold for Example Communication Pattern with Different Designs (process pairs 0–4, 1–5, 2–6, 3–7 between Node 1 and Node 2):
– Default: 16 KB, 16 KB, 16 KB, 16 KB
– Manually Tuned: 128 KB, 128 KB, 128 KB, 128 KB
– Dynamic + Adaptive: 32 KB, 64 KB, 128 KB, 32 KB
H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper
[Figures: Execution Time (wall clock, seconds) and Relative Memory Consumption of Amber vs. number of processes (128–1K), comparing Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold.]
Design trade-offs:
– Default: poor overlap, low memory requirement; low performance, high productivity
– Manually Tuned: good overlap, high memory requirement; high performance, low productivity
– Dynamic + Adaptive: good overlap, optimal memory requirement; high performance, high productivity
Desired eager threshold per process pair: 0–4: 32 KB; 1–5: 64 KB; 2–6: 128 KB; 3–7: 32 KB
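For context, below is a hedged sketch of the sender-side pattern whose computation/communication overlap depends on whether a message falls under the eager or the rendezvous protocol, i.e., on the eager threshold; in MVAPICH2 this threshold can be set manually (e.g., via MV2_IBA_EAGER_THRESHOLD) or chosen per process pair by the dynamic + adaptive design. The buffer size and the two-rank pattern are illustrative assumptions.

/* Hedged sketch: nonblocking send with an overlap window; effectiveness
 * of the overlap depends on the eager threshold in use. */
#include <mpi.h>
#include <stdlib.h>

void compute(double *work, int n)            /* placeholder computation */
{
    for (int i = 0; i < n; i++)
        work[i] = work[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 16 * 1024;             /* 128 KB of doubles */
    double *buf  = malloc(count * sizeof(double));
    double *work = malloc(count * sizeof(double));

    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        compute(work, count);                /* overlap window */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* rendezvous may progress here */
    } else if (rank == 1) {
        MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(buf);
    free(work);
    MPI_Finalize();
    return 0;
}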
PEARC ‘17 18Network Based Computing Laboratory
Dynamic and Adaptive Tag Matching
Normalized Total Tag Matching Time at 512 Processes, normalized to Default (lower is better)
Normalized Memory Overhead per Process at 512 Processes, compared to Default (lower is better)
Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]
Challenge: Tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.
Solution: A new tag-matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one.
Results: Better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption.
Will be available in future MVAPICH2 releases
PEARC ‘17 19Network Based Computing Laboratory
Advanced Allreduce Collective Designs Using SHArP
[Figures: Avg DDOT Allreduce time of HPCG and Mesh Refinement Time of MiniAMR, latency (seconds) vs. (number of nodes, PPN) for (4,28), (8,28), and (16,28), comparing MVAPICH2 and MVAPICH2-SHArP; annotated improvements of 12% (HPCG) and 13% (MiniAMR).]
SHArP Support is available in MVAPICH2 2.3a
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
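For reference, the application-level call that SHArP-style designs accelerate is an ordinary MPI_Allreduce; the in-network offload happens inside the library, with no change to user code. Below is a hedged sketch of a DDOT-style dot product followed by a global sum; the vector length and values are illustrative assumptions.

/* Hedged sketch: local dot product + MPI_Allreduce (the operation that
 * MVAPICH2-SHArP accelerates transparently). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1000;                       /* local vector length */
    double *x = malloc(n * sizeof(double));
    double *y = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Local partial dot product, then a global sum across all ranks */
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n; i++)
        local += x[i] * y[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ddot = %f\n", global);

    free(x); free(y);
    MPI_Finalize();
    return 0;
}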
PEARC ‘17 20Network Based Computing Laboratory
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for KNL/Omni-Path, OpenPower, and ARM
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
PEARC ‘17 21Network Based Computing Laboratory
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current Model – Separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF (see the sketch below)
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
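As a sketch of what a hybrid MPI + PGAS program looks like, here is a minimal MPI + OpenSHMEM example that mixes one-sided OpenSHMEM operations with an MPI collective. It assumes an OpenSHMEM library that can be initialized and used alongside MPI, as with a unified runtime such as MVAPICH2-X; exact initialization order and interoperability rules are implementation-specific, and the counter/reduction pattern is purely illustrative.

/* Hedged sketch: hybrid MPI + OpenSHMEM under one runtime (assumed). */
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();                       /* OpenSHMEM initialization */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Symmetric heap allocation, visible to one-sided OpenSHMEM operations */
    long *counter = (long *) shmem_malloc(sizeof(long));
    *counter = 0;
    shmem_barrier_all();

    /* One-sided: every PE increments a counter located on PE 0 */
    shmem_long_inc(counter, 0);
    shmem_barrier_all();

    /* Two-sided: reduce a value with MPI over the same set of processes */
    long local = rank, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("counter = %ld, MPI sum = %ld\n", *counter, sum);

    shmem_free(counter);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}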
PEARC ‘17 22Network Based Computing Laboratory
Application-Level Performance with Graph500 and Sort
Graph500 Execution Time
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
[Figure: Graph500 execution time (s) vs. number of processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); the Hybrid design is 7.6X and 13X faster than MPI-Simple at 8K and 16K processes, respectively.]
• Performance of Hybrid (MPI+OpenSHMEM) Graph500 design
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
Sort Execution Time
[Figure: Sort execution time (seconds) vs. input data size and number of processes (500GB-512, 1TB-1K, 2TB-2K, 4TB-4K) for MPI and Hybrid; 51% improvement at 4TB-4K.]
• Performance of Hybrid (MPI+OpenSHMEM) Sort application
• 4,096 processes, 4 TB input size:
– MPI – 2408 sec; 0.16 TB/min
– Hybrid – 1172 sec; 0.36 TB/min
– 51% improvement over the MPI design
PEARC ‘17 23Network Based Computing Laboratory
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
– CUDA-aware MPI
– GPUDirect RDMA (GDR) support
– Control flow decoupling through GPUDirect Async
• Optimized MVAPICH2 for KNL/Omni-Path, OpenPower, and ARM
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
PEARC ‘17 24Network Based Computing Laboratory
At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(CUDA data movement handled inside MVAPICH2)
• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
High Performance and High Productivity
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
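Expanding the send/receive fragment above into a complete program, here is a minimal sketch of CUDA-aware MPI point-to-point communication: device buffers are passed directly to MPI_Send/MPI_Recv and the library (e.g., MVAPICH2-GDR) performs the data movement. The buffer size and the two-rank pattern are illustrative assumptions.

/* Hedged sketch: CUDA-aware MPI send/recv on GPU device buffers. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                 /* 1M floats (~4 MB) */
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));

    if (rank == 0) {
        /* Sender passes the GPU pointer directly to MPI */
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receiver also receives straight into GPU memory */
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}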
PEARC ‘17 25Network Based Computing Laboratory
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory
PEARC ‘17 26Network Based Computing Laboratory
Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)
[Figures: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth vs. message size (bytes), comparing MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; 2.18 us small-message latency, with annotated gains of roughly 2x–3x and 10x–11x over MV2 without GDR.]
Platform: MVAPICH2-GDR 2.2 on an Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, and Mellanox OFED 3.0 with GPU-Direct RDMA.
PEARC ‘17 27Network Based Computing Laboratory
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (HOOMD-blue)
[Figures: average time steps per second (TPS) vs. number of processes (4–32) for 64K and 256K particles, comparing MV2 and MV2+GDR; roughly 2X improvement with GDR.]
PEARC ‘17 28Network Based Computing Laboratory
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
[Figures: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4–32 GPUs) and the CSCS GPU cluster (16–96 GPUs), comparing Default, Callback-based, and Event-based designs.]
• 2X improvement on 32 GPUs
• 30% improvement on 96 GPUs (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16
On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application
Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
PEARC ‘17 29Network Based Computing Laboratory
Latency oriented: Send+kernel and Recv+kernel
MVAPICH2-GDS: Preliminary Results
[Figures: latency (us) vs. message size (bytes) for Default and Enhanced-GDS, for the latency-oriented (8–512 bytes) and throughput-oriented (16–1024 bytes) patterns.]
Throughput Oriented: back-to-back
• Latency Oriented: able to hide the kernel launch overhead
– 25% improvement at 256 bytes compared to the default behavior
• Throughput Oriented: asynchronously offloads the communication and computation tasks to the queue
– 14% improvement at 1KB message size
Platform: Intel Sandy Bridge, NVIDIA K20, and Mellanox FDR HCA. Will be available in a public release soon.
PEARC ‘17 30Network Based Computing Laboratory
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for KNL/Omni-Path, OpenPower, and ARM
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
PEARC ‘17 31Network Based Computing Laboratory
Enhanced Collectives to Exploit MCDRAM
• New designs to exploit high concurrency and MCDRAM of KNL
• Significant improvements for large message sizes
• Benefits seen in varying message size as well as varying MPI processes
[Figures: (1) intra-node broadcast with 64 MB message, latency (us) vs. number of processes (4–16); (2) very large message bi-directional bandwidth (MB/s) vs. message size (2M–64M); (3) 16-process intra-node All-to-All latency (us) vs. message size (1M–32M); comparing MVAPICH2 and MVAPICH2-Optimized, with annotated improvements of 27%, 17.2%, and 52%.]
PEARC ‘17 32Network Based Computing Laboratory
Performance Benefits of Enhanced Designs
[Figures: (1) multi-bandwidth (MB/s) vs. message size (1M–64M bytes) using 32 MPI processes, comparing MV2_Def_DRAM, MV2_Opt_DRAM, MV2_Def_MCDRAM, and MV2_Opt_MCDRAM, up to 30% improvement; (2) CNTK MLP training time (s) using MNIST (batch size 64) for MPI processes : OMP threads configurations 4:268, 4:204, and 4:64, comparing MV2_Def_DRAM and MV2_Opt_DRAM, 15% improvement.]
• Benefits observed on the training time of a multi-layer perceptron (MLP) model on the MNIST dataset using the CNTK Deep Learning framework
Enhanced Designs will be available in upcoming MVAPICH2 releases
PEARC ‘17 33Network Based Computing Laboratory
• Kernel-assisted transfers (CMA, LiMIC, KNEM) offer single-copy transfers for large messages (see the sketch below)
• Scatter/Gather/Bcast, etc. can be optimized with direct kernel-assisted transfers
• Applicable to Broadwell and OpenPOWER architectures as well
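As a point of reference for the MPI_Gather results that follow, here is a minimal sketch of the gather call being measured; with kernel-assisted (e.g., CMA-based) designs, the large per-process contributions can be moved with a single copy inside the library. The message size and root rank are illustrative assumptions.

/* Hedged sketch: a large-message MPI_Gather, the collective measured below. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 64 * 1024;                     /* 64 KB per process */
    char *sendbuf = malloc(count);
    char *recvbuf = (rank == 0) ? malloc((size_t)count * size) : NULL;

    /* Each process contributes `count` bytes; root 0 gathers them all. */
    MPI_Gather(sendbuf, count, MPI_CHAR,
               recvbuf, count, MPI_CHAR, 0, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}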
Optimized Kernel-Assisted Collectives for Multi-/Many-Core
Performance of MPI_Gather
[Figures: MPI_Gather latency (us, log scale) vs. message size (1K–8M bytes) on 1 node/64 processes, 2 nodes/128 processes, 4 nodes/256 processes, and 8 nodes/512 processes, comparing MV2-2.3a, IMPI-2017, OMPI-2.1, and Native CMA.]
Platform: TACC Stampede2 – 68-core Intel Knights Landing (KNL) 7250 @ 1.4 GHz, Intel Omni-Path HCA (100 Gbps), 16 GB MCDRAM (cache mode).
S. Chakraborty, H. Subramoni, and D. K. Panda, IEEE Cluster ’17 (Accepted to be presented)
Enhanced Designs will be available in upcoming MVAPICH2 releases
PEARC ‘17 34Network Based Computing Laboratory
Intra-node Point-to-Point Performance on OpenPower
[Figures: MVAPICH2 intra-node small-message latency (0–4K bytes), large-message latency (8K–2M bytes), bandwidth, and bi-directional bandwidth (up to 1M bytes).]
Platform: OpenPOWER (Power8-ppc64le) dual-socket CPU with 160 cores (80 cores per socket).
PEARC ‘17 35Network Based Computing Laboratory
Inter-node Point-to-Point Performance on OpenPower
[Figures: MVAPICH2 inter-node small-message latency (0–4K bytes), large-message latency (8K–2M bytes), bandwidth, and bi-directional bandwidth (up to 1M bytes).]
Platform: two OpenPOWER (Power8-ppc64le) nodes connected with Mellanox EDR (MT4115) HCAs.
PEARC ‘17 36Network Based Computing Laboratory
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
[Figures: intra-node latency (small and large messages) and bandwidth for intra-socket (NVLink) and inter-socket paths, and inter-node latency (small and large messages) and bandwidth, vs. message size (bytes).]
Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and 4X-FDR InfiniBand interconnect.
• Intra-node bandwidth: 32 GB/sec (NVLink); intra-node latency: 16.8 us (without GPUDirect RDMA)
• Inter-node bandwidth: 6 GB/sec (FDR); inter-node latency: 22 us (without GPUDirect RDMA)
Will be available in upcoming MVAPICH2-GDR
PEARC ‘17 37Network Based Computing Laboratory
Intra-node Point-to-Point Performance on ARMv8
[Figures: MVAPICH2 intra-node small-message latency (0–4K bytes; 0.74 microsec for 4 bytes), large-message latency (8K–2M bytes), bandwidth, and bi-directional bandwidth.]
Platform: ARMv8 (aarch64) dual-socket CPU with 96 cores (48 cores per socket).
Will be available in upcoming MVAPICH2
PEARC ‘17 38Network Based Computing Laboratory
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
– Virtualization with SR-IOV and Containers
HPC, Big Data, Deep Learning, and Cloud
PEARC ‘17 39Network Based Computing Laboratory
How Can HPC Clusters with High-Performance Interconnect and Storage Architectures Benefit Big Data Applications?
Bring HPC and Big Data processing into a “convergent trajectory”!
• What are the major bottlenecks in current Big Data processing middleware (e.g., Hadoop, Spark, and Memcached)?
• Can the bottlenecks be alleviated with new designs by taking advantage of HPC technologies?
• Can RDMA-enabled high-performance interconnects benefit Big Data processing?
• Can HPC clusters with high-performance storage systems (e.g., SSD, parallel file systems) benefit Big Data applications?
• How much performance benefit can be achieved through enhanced designs?
• How should benchmarks be designed for evaluating the performance of Big Data middleware on HPC clusters?
PEARC ‘17 40Network Based Computing Laboratory
Designing Communication and I/O Libraries for Big Data Systems: Challenges
Applications
Big Data Middleware (HDFS, MapReduce, HBase, Spark and Memcached) – upper-level changes?
Programming Models (Sockets) – RDMA?
Communication and I/O Library: point-to-point communication, threaded models and synchronization, QoS & fault tolerance, performance tuning, I/O and file systems, virtualization (SR-IOV), benchmarks
Networking Technologies (InfiniBand, 1/10/40/100 GigE and intelligent NICs); Storage Technologies (HDD, SSD, NVM, and NVMe-SSD); Commodity Computing System Architectures (multi- and many-core architectures and accelerators)
PEARC ‘17 41Network Based Computing Laboratory
• RDMA for Apache Spark
• RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
– Plugins for Apache, Hortonworks (HDP) and Cloudera (CDH) Hadoop distributions
• RDMA for Apache HBase
• RDMA for Memcached (RDMA-Memcached)
• RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
• OSU HiBD-Benchmarks (OHB)
– HDFS, Memcached, HBase, and Spark Micro-benchmarks
• http://hibd.cse.ohio-state.edu
• Users Base: 240 organizations from 30 countries
• More than 22,400 downloads from the project site
The High-Performance Big Data (HiBD) Project
Available for InfiniBand and RoCE; also runs on Ethernet
PEARC ‘17 42Network Based Computing Laboratory
Performance Numbers of RDMA for Apache Hadoop 2.x – RandomWriter & TeraGen on OSU-RI2 (EDR)
[Figures: RandomWriter and TeraGen execution time (s) vs. data size (80–160 GB and 80–240 GB), comparing IPoIB (EDR) and OSU-IB (EDR); execution time reduced by 3x (RandomWriter) and 4x (TeraGen).]
Cluster with 8 nodes and a total of 64 maps
• RandomWriter – 3x improvement over IPoIB for 80-160 GB file size
• TeraGen – 4x improvement over IPoIB for 80-240 GB file size
PEARC ‘17 43Network Based Computing Laboratory
Performance Evaluation of RDMA-Spark on SDSC Comet – HiBench PageRank
• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores, (768/1536M 768/1536R)
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node:
– 32 nodes/768 cores: total time reduced by 37% over IPoIB (56 Gbps)
– 64 nodes/1536 cores: total time reduced by 43% over IPoIB (56 Gbps)
[Figures: PageRank total time (sec) for Huge, BigData, and Gigantic data sizes on 32 worker nodes (768 cores) and 64 worker nodes (1536 cores), comparing IPoIB and RDMA.]
PEARC ‘17 44Network Based Computing Laboratory
Performance Evaluation on SDSC Comet: Astronomy Application
• Kira Toolkit1: Distributed astronomy image processing toolkit implemented using Apache Spark.
• Source extractor application, using a 65GB dataset from the SDSS DR2 survey that comprises 11,150 image files.
• Compare RDMA Spark performance with the standard apache implementation using IPoIB.
1. Z. Zhang, K. Barbary, F. A. Nothaft, E.R. Sparks, M.J. Franklin, D.A. Patterson, S. Perlmutter. Scientific Computing meets Big Data Technology: An Astronomy Use Case. CoRR, vol: abs/1507.03325, Aug 2015.
[Figure: RDMA Spark vs. Apache Spark (IPoIB); 21% improvement.]
Execution times (sec) for Kira SE benchmark using 65 GB dataset, 48 cores.
M. Tatineni, X. Lu, D. J. Choi, A. Majumdar, and D. K. Panda, Experiences and Benefits of Running RDMA Hadoop and Spark on SDSC Comet, XSEDE’16, July 2016
PEARC ‘17 45Network Based Computing Laboratory
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
– Virtualization with SR-IOV and Containers
HPC, Big Data, Deep Learning, and Cloud
PEARC ‘17 46Network Based Computing Laboratory
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• How to address these newer requirements?
– GPU-specific communication libraries (NCCL): NVIDIA's NCCL library provides inter-GPU communication
– CUDA-Aware MPI (MVAPICH2-GDR): provides support for GPU-based communication
• Can we exploit CUDA-Aware MPI and NCCL to support Deep Learning applications?
Deep Learning: New Challenges for MPI Runtimes
Hierarchical Communication (Knomial + NCCL ring)
PEARC ‘17 47Network Based Computing Laboratory
• NCCL has some limitations
– Only works within a single node, thus no scale-out on multiple nodes
– Degradation across IOH (socket) for scale-up (within a node)
• We propose an optimized MPI_Bcast (a minimal usage sketch follows at the end of this slide)
– Communication of very large GPU buffers (order of megabytes)
– Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits:
– CUDA-Aware MPI_Bcast in MV2-GDR
– the NCCL broadcast primitive
Efficient Broadcast: MVAPICH2-GDR and NCCL
[Figures: (1) OSU micro-benchmark broadcast latency (us, log scale) vs. message size (1 byte to 128 MB), MV2-GDR vs. MV2-GDR-Opt, up to 100x improvement; (2) Microsoft CNTK DL framework training time (seconds) vs. number of GPUs (2–64), MV2-GDR vs. MV2-GDR-Opt, 25% average improvement.]
Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning, A. Awan , K. Hamidouche , A. Venkatesh , and D. K. Panda, The 23rd European MPI Users' Group Meeting (EuroMPI 16), Sep 2016 [Best Paper Runner-Up]
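Below is the usage sketch referenced above: broadcasting a GPU-resident buffer with a CUDA-aware MPI_Bcast, which is the application-level call the hierarchical MV2-GDR/NCCL design accelerates internally. The buffer size and root rank are illustrative assumptions.

/* Hedged sketch: CUDA-aware MPI_Bcast of a large GPU buffer. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t bytes = 64UL << 20;        /* 64 MB parameter buffer */
    void *d_params;
    cudaMalloc(&d_params, bytes);

    /* Root 0 holds the parameters; all ranks receive into GPU memory. */
    MPI_Bcast(d_params, (int)bytes, MPI_BYTE, 0, MPI_COMM_WORLD);

    cudaFree(d_params);
    MPI_Finalize();
    return 0;
}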
PEARC ‘17 48Network Based Computing Laboratory
Large Message Optimized Collectives for Deep Learning
[Figures: latency (ms) vs. message size (2–128 MB) for Reduce on 192 GPUs, Allreduce on 64 GPUs, and Bcast on 64 GPUs; and latency (ms) vs. number of GPUs for Reduce (64 MB, 128–192 GPUs), Allreduce (128 MB, 16–64 GPUs), and Bcast (128 MB, 16–64 GPUs).]
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large number of GPUs
• Available in MVAPICH2-GDR 2.2GA
PEARC ‘17 49Network Based Computing Laboratory
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based parallel training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
– Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
OSU-Caffe: Scalable Deep Learning
[Figure: GoogLeNet (ImageNet) training time (seconds) vs. number of GPUs (8–128), comparing Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); configurations beyond Caffe's single-node limit are marked as an invalid use case.]
OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
PEARC ‘17 50Network Based Computing Laboratory
X. Lu, H. Shi, M. H. Javed, R. Biswas, and D. K. Panda, Characterizing Deep Learning over Big Data (DLoBD) Stacks on RDMA-capable Networks, HotI 2017.
High-Performance Deep Learning over Big Data (DLoBD) Stacks
• Benefits of Deep Learning over Big Data (DLoBD)
– Easily integrate deep learning components into Big Data processing workflows
– Easily access the data stored in Big Data systems
– No need to set up new dedicated deep learning clusters; reuse existing big data analytics clusters
• Challenges
– Can RDMA-based designs in DLoBD stacks improve performance, scalability, and resource utilization on high-performance interconnects, GPUs, and multi-core CPUs?
– What are the performance characteristics of representative DLoBD stacks on RDMA networks?
• Characterization of DLoBD stacks
– CaffeOnSpark, TensorFlowOnSpark, and BigDL
– IPoIB vs. RDMA; in-band vs. out-of-band communication; CPU vs. GPU; etc.
– Performance, accuracy, scalability, and resource utilization
– RDMA-based DLoBD stacks (e.g., BigDL over RDMA-Spark) can achieve 2.6x speedup compared to the IPoIB-based scheme, while maintaining similar accuracy
[Figure: epoch time (secs) and accuracy (%) vs. epoch number (1–18), comparing IPoIB and RDMA; 2.6X faster epochs with RDMA at similar accuracy.]
PEARC ‘17 51Network Based Computing Laboratory
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
– Virtualization with SR-IOV and Containers
HPC, Big Data, Deep Learning, and Cloud
PEARC ‘17 52Network Based Computing Laboratory
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• Has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
– OpenStack, Docker, and Singularity
Can HPC and Virtualization be Combined?
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC'14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid'15
PEARC ‘17 53Network Based Computing Laboratory
[Figures: (1) SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu; (2) Graph500 execution time (ms) for problem sizes (scale, edgefactor) from (22,20) to (26,16); comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native, with annotated overheads of 1–9.5% (SPEC MPI2007) and 2–5% (Graph500) versus native.]
• 32 VMs, 6 Core/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 Procs
• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs
Application-Level Performance on Chameleon
PEARC ‘17 54Network Based Computing Laboratory
[Figure: NAS Class D (MG.D, FT.D, EP.D, LU.D, CG.D) execution time (s), comparing Container-Def, Container-Opt, and Native.]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 73% execution time reduction for NAS and Graph 500
• Compared to Native, less than 9% and 5% overhead for NAS and Graph 500
Application-Level Performance on Docker with MVAPICH2
[Figure: Graph 500 BFS execution time (ms) at (scale, edgefactor) = (20,16) for 1 container × 16 processes, 2 containers × 8 processes, and 4 containers × 4 processes, comparing Container-Def, Container-Opt, and Native; up to 73% reduction with Container-Opt.]
PEARC ‘17 55Network Based Computing Laboratory
[Figures: (1) Graph500 BFS execution time (ms) for problem sizes (scale, edgefactor) from (22,16) to (26,20), Singularity vs. Native; (2) NPB Class D (CG, EP, FT, IS, LU, MG) execution time (s), Singularity vs. Native.]
• 512 Processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
Application-Level Performance on Singularity with MVAPICH2
PEARC ‘17 56Network Based Computing Laboratory
RDMA-Hadoop with Virtualization
– 14% and 24% improvement with Default Mode for CloudBurst and Self-Join
– 30% and 55% improvement with Distributed Mode for CloudBurst and Self-Join
[Figures: CloudBurst and Self-Join execution time in Default Mode and Distributed Mode, comparing RDMA-Hadoop and RDMA-Hadoop-Virt; 30% and 55% reductions in Distributed Mode.]
S. Gugnani, X. Lu, D. K. Panda. Designing Virtualization-aware and Automatic Topology Detection Schemes for Accelerating Hadoop on SR-IOV-enabled Clouds. CloudCom, 2016.
PEARC ‘17 57Network Based Computing Laboratory
• Next generation HPC systems need to be designed with a holistic view of Big Data, Deep Learning and Cloud
• Presented some of the approaches and results along these directions
• Enable Big Data, Deep Learning and Cloud community to take advantage of modern HPC technologies
• Many other open issues need to be solved
• Next generation HPC professionals need to be trained along these directions
• Looking forward to collaboration with supercomputing centers in US and China
Concluding Remarks
PEARC ‘17 58Network Based Computing Laboratory
Funding Acknowledgments
Funding Support by
Equipment Support by
PEARC ‘17 59Network Based Computing Laboratory
Personnel Acknowledgments
Current Students
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.-H. Chu (Ph.D.)
Past Students – A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist– K. Hamidouche
– S. Sur
Past Post-Docs– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– S. Guganani (Ph.D.)
– J. Hashmi (Ph.D.)
– N. Islam (Ph.D.)
– M. Li (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists– X. Lu
– H. Subramoni
Past Programmers– D. Bureddy
– J. Perkins
Current Research Specialist– J. Smith
– M. Arnold
– M. Rahman (Ph.D.)
– D. Shankar (Ph.D.)
– A. Venkatesh (Ph.D.)
– J. Zhang (Ph.D.)
– S. Marcarelli
– J. Vienne
– H. Wang
Current Post-doc– A. Ruhela
PEARC ‘17 60Network Based Computing Laboratory
Upcoming 5th Annual MVAPICH User Group (MUG) Meeting
• August 14-16, 2017; Columbus, Ohio, USA
• Speakers (confirmed so far):
– Keynote: Dan Stanzione (TACC)
– Keynote: Ron Brightwell (Sandia National Lab)
– Gene Cooperman (Northeastern University)
– Hyun-Wook Jin (Konkuk University, South Korea)
– Allen Malony (University of Oregon)
– Adam Moody (Lawrence Livermore National Laboratory)
– Hoon Ryu (Korea Institute of Science and Technology Information - KISTI, South Korea)
– Karl Schulz (Intel)
– Gilad Shainer (Mellanox)
– Pavel Shamis (ARM)
– Filippo Spiga (University of Cambridge, UK)
– Sayantan Sur (Intel)
– Mahidhar Tatineni (San Diego Supercomputing Center)
– Craig Tierney (NVIDIA)
– Abhinav Vishnu (Pacific Northwest National Laboratory)
– Hao Wang (Virginia Tech)
– Carsten Weinhold (TU Dresden, Germany)
• Tutorials:
– Intel
– Mellanox
– NVIDIA
– ARM
• More details at: http://mug.mvapich.cse.ohio-state.edu
PEARC ‘17 61Network Based Computing Laboratory
Thank You!
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/