Designing Efficient HPC, Big Data, Deep Learning, and Cloud Computing Middleware for Exascale Systems
Dhabaleswar K. (DK) Panda
The Ohio State University
E‐mail: [email protected]‐state.edu
http://www.cse.ohio‐state.edu/~panda
Keynote Talk at the HPCAC-China Conference
Increasing Usage of HPC, Big Data, and Deep Learning
• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)
Convergence of HPC, Big Data, and Deep Learning!
Increasing need to run these applications on the Cloud!!
Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?
[Figure, built up over four slides: Spark, Hadoop, and Deep Learning jobs scheduled side by side on the same physical compute infrastructure]
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and Containers
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
[Figure: layered co-design stack; middleware co-design opportunities and challenges (performance, scalability, resilience) span all layers]
• Application kernels/applications
• Programming models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication library or runtime for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path)
• Multi-/many-core architectures
• Accelerators (GPU and FPGA)
Overview of the MVAPICH2 Project
• High performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.0); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM), available since 2015
  – Used by more than 2,825 organizations in 85 countries
  – More than 429,000 (> 0.4 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (June '17 ranking):
    • 1st: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 15th: 241,108-core Pleiades at NASA
    • 20th: 462,462-core Stampede at TACC
    • 44th: 74,520-core Tsubame 2.5 at Tokyo Institute of Technology
  – Available with the software stacks of many vendors and Linux distros (RedHat and SuSE)
  – http://mvapich.cse.ohio-state.edu
• Empowering TOP500 systems for over a decade
  – From System-X at Virginia Tech (3rd in Nov 2003; 2,200 processors, 12.25 TFlops) to Sunway TaihuLight (1st in Jun 2017; 10M cores, 100 PFlops)
MVAPICH2 Release Timeline and Downloads
[Figure: cumulative number of downloads vs. timeline (Sep 2004 to Aug 2017), annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 2.1, MV2-GDR 2.2rc1, MV2-MIC 2.0, MV2-Virt 2.2, and MV2-X 2.2]
Architecture of the MVAPICH2 Software Family
• High performance parallel programming models
  – Message Passing Interface (MPI)
  – PGAS (UPC, OpenSHMEM, CAF, UPC++)
  – Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High performance and scalable communication runtime with diverse APIs and mechanisms
  – Point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path)
  – Transport protocols: RC, XRC, UD, DC
  – Modern features: UMR, ODP, SR-IOV, multi-rail
• Support for modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
  – Transport mechanisms: shared memory, CMA, IVSHMEM
  – Modern features: MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication
  – Scalable start-up
  – Optimized collectives using SHArP and multi-leaders
  – Optimized CMA-based collectives
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
One-way Latency: MPI over IB with MVAPICH2
[Figure: small-message latency (between 1.01 and 1.19 us across the five fabrics) and large-message latency vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path]
Test beds: TrueScale-QDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path on 3.1 GHz deca-core (Haswell) Intel nodes; ConnectX-3-FDR on 2.8 GHz deca-core (IvyBridge) Intel nodes; all PCIe Gen3, with the corresponding IB or Omni-Path switch.
Bandwidth: MPI over IB with MVAPICH2
[Figure: unidirectional bandwidth (up to 12,590 MB/s) and bidirectional bandwidth (up to 24,136 MB/s) vs. message size for the same five configurations; test beds as on the previous slide]
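The latency and bandwidth numbers above come from micro-benchmarks. As a rough illustration of how a one-way latency figure is obtained, here is a minimal ping-pong sketch in C in the style of the OSU Micro-Benchmarks (the loop counts and reporting are illustrative, not the OMB source):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int size = 8;        /* message size in bytes (illustrative) */
    const int iters = 10000;
    char *buf = malloc(size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* one-way latency = half the average round trip */
        printf("%d bytes: %.2f us\n", size,
               (t1 - t0) * 1e6 / (2.0 * iters));

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Rank 0 and rank 1 bounce the same message back and forth; halving the average round-trip time gives the reported one-way latency.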
Startup Performance on KNL + Omni-Path
[Figure: MPI_Init and Hello World time vs. number of processes (64 to 64K) on Oakforest-PACS with MVAPICH2-2.3a]
• MPI_Init takes 22 seconds on 229,376 processes across 3,584 KNL nodes (Stampede2 at full scale)
• 8.8 times faster than Intel MPI at 128K processes (courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s, respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
• New designs available in MVAPICH2-2.3a and as a patch for SLURM-15.08.8 and SLURM-16.05.1
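A hedged sketch of how such startup numbers can be measured: MPI_Wtime is unavailable before MPI_Init, so wall-clock time is taken with gettimeofday around MPI_Init, and the "Hello World" point is approximated with a barrier once all ranks are up (the exact methodology behind the numbers above may differ):

```c
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

/* Wall-clock time in seconds (usable before MPI_Init). */
static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);
    double t_init = now() - t0;     /* per-process MPI_Init time */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);    /* everyone is up: "Hello World" point */
    double t_hello = now() - t0;

    if (rank == 0)
        printf("MPI_Init: %.2f s, Hello World: %.2f s\n", t_init, t_hello);

    MPI_Finalize();
    return 0;
}
```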
Advanced Allreduce Collective Designs Using SHArP
[Figure: average DDOT Allreduce time of HPCG and mesh refinement time of MiniAMR at (nodes, PPN) = (4,28), (8,28), (16,28), MVAPICH2 vs. MVAPICH2-SHArP; up to 12% improvement for HPCG and up to 13% for MiniAMR]
SHArP support is available in MVAPICH2 2.3a.
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
Performance of MPI_Allreduce on Stampede2 (10,240 Processes)
[Figure: MPI_Allreduce latency for 4 B to 256 KB messages with the OSU Micro Benchmark at 64 PPN, comparing MVAPICH2, MVAPICH2-OPT, and Intel MPI (IMPI)]
• For 32 KB messages, MVAPICH2-OPT reduces MPI_Allreduce latency by 2.4X
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
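For reference, a minimal sketch of how a collective micro-benchmark measures MPI_Allreduce latency at a fixed message size (warm-up plus timed iterations; illustrative, not the OMB source):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 32 * 1024 / sizeof(float);  /* 32 KB payload */
    const int warmup = 100, iters = 1000;
    float *sbuf = malloc(count * sizeof(float));
    float *rbuf = malloc(count * sizeof(float));
    for (int i = 0; i < count; i++) sbuf[i] = 1.0f;

    /* Warm-up rounds so connection setup does not skew the timing. */
    for (int i = 0; i < warmup; i++)
        MPI_Allreduce(sbuf, rbuf, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(sbuf, rbuf, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    double avg_us = (MPI_Wtime() - t0) * 1e6 / iters;

    if (rank == 0)
        printf("MPI_Allreduce, 32 KB: %.2f us\n", avg_us);

    free(sbuf); free(rbuf);
    MPI_Finalize();
    return 0;
}
```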
Enhanced MPI_Bcast with Optimized CMA-based Design
[Figure: MPI_Bcast latency for 1 KB to 4 MB messages on KNL (64 processes), Broadwell (28 processes), and Power8 (160 processes), comparing MVAPICH2-2.3a, Intel MPI 2017, Open MPI 2.1.0, and the proposed design; CMA is used for large messages, with a fallback to shared memory (SHMEM) for small messages]
• Up to 2x-4x improvement over the existing implementation for 1 MB messages
• Up to 1.5x-2x faster than Intel MPI and Open MPI for 1 MB messages
• Improvements are obtained for large messages only
• p-1 copies with CMA vs. p copies with shared memory
• Falls back to SHMEM for small messages
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention-Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17 (Best Paper finalist).
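The "p-1 copies with CMA" point refers to Cross Memory Attach, the Linux kernel-assisted single-copy mechanism exposed through process_vm_readv. A hedged sketch of the primitive (error handling, permission checks, and the exchange of the remote PID and address, which a runtime would do once over shared memory, are omitted):

```c
#define _GNU_SOURCE
#include <sys/uio.h>
#include <unistd.h>

/* Read 'len' bytes from 'remote_addr' inside process 'pid' into 'local'.
 * One kernel-assisted copy, no intermediate shared buffer; requires the
 * processes to be mutually accessible (e.g., same user and job). */
ssize_t cma_read(pid_t pid, void *local, void *remote_addr, size_t len)
{
    struct iovec local_iov  = { .iov_base = local,       .iov_len = len };
    struct iovec remote_iov = { .iov_base = remote_addr, .iov_len = len };

    return process_vm_readv(pid, &local_iov, 1, &remote_iov, 1, 0);
}
```

In a CMA-based broadcast, each of the p-1 non-root ranks performs one such read from the root's buffer, versus p copies when every rank stages data through a shared-memory segment.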
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current model: separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
  – Possible deadlock if both runtimes are not progressed
  – Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, and CAF
  – Available since 2012 (starting with MVAPICH2-X 1.9)
  – http://mvapich.cse.ohio-state.edu
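To make the hybrid model concrete, here is a minimal sketch of an MPI + OpenSHMEM program of the kind a unified runtime serves with a single set of network endpoints; initialization ordering requirements vary by implementation, so treat this as one common pattern rather than MVAPICH2-X-specific code:

```c
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();                       /* same processes, both models */

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Symmetric heap allocation, addressable by one-sided SHMEM ops. */
    long *counter = shmem_malloc(sizeof(long));
    *counter = 0;
    shmem_barrier_all();

    shmem_long_inc(counter, 0);         /* one-sided increment on PE 0 */
    shmem_barrier_all();                /* completes all remote updates */

    long local = rank, sum = 0;         /* MPI collective in the same job */
    MPI_Allreduce(&local, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("counter = %ld (expect %d), MPI sum of ranks = %ld\n",
               *counter, nprocs, sum);

    shmem_free(counter);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```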
Application-Level Performance with Graph500 and Sort
Graph500 execution time:
[Figure: execution time at 4K, 8K, and 16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM)]
• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
  – 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X over MPI-Simple
  – 16,384 processes: 1.5X improvement over MPI-CSR, 13X over MPI-Simple
Sort execution time:
[Figure: execution time from 500 GB at 512 processes up to 4 TB at 4K processes, MPI vs. Hybrid]
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
  – 4,096 processes, 4 TB input: MPI 2408 sec (0.16 TB/min) vs. Hybrid 1172 sec (0.36 TB/min), a 51% improvement over the MPI design
J. Jose, S. Potluri, K. Tomko, and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, ISC '13, June 2013.
J. Jose, K. Kandalla, M. Luo, and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, ICPP '12, September 2012.
J. Jose, K. Kandalla, S. Potluri, J. Zhang, and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, PGAS '13, October 2013.
Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation
[Figure: execution time of the Heat Image kernel and the Heat-2D (Jacobi) kernel at 16-128 processes for KNL (default), KNL (AVX-512), KNL (AVX-512 + MCDRAM), and Broadwell]
• On heat-diffusion-based kernels, AVX-512 vectorization showed better performance
• MCDRAM showed significant benefits on the Heat Image kernel at all process counts; combined with AVX-512 vectorization, it delivered up to 4X better performance
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
  – CUDA-aware MPI
  – GPUDirect RDMA (GDR) support
  – Control flow decoupling through GPUDirect Async
  – Optimized GDR support
• Optimized MVAPICH2 for OpenPOWER and ARM
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
At the sender:   MPI_Send(s_devbuf, size, …);
At the receiver: MPI_Recv(r_devbuf, size, …);
All the heavy lifting happens inside MVAPICH2.
• Standard MPI interfaces are used for unified data movement
• Takes advantage of Unified Virtual Addressing (CUDA 4.0 and later)
• Overlaps data movement from the GPU with RDMA transfers
High performance and high productivity.
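A hedged sketch of what this looks like in user code: with a CUDA-aware MPI library, the device pointer goes straight into MPI_Send/MPI_Recv (with MVAPICH2-GDR this typically also requires running with MV2_USE_CUDA=1, as in the application runs later in this talk):

```c
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));

    if (rank == 0) {
        /* ... fill devbuf with a kernel or cudaMemcpy ... */
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  /* device pointer */
    } else if (rank == 1) {
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```

The library decides internally whether to stage through host memory, use CUDA IPC, or use GPUDirect RDMA; the application code does not change.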
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU-adapter nodes (GPU-GPU, GPU-Host, and Host-GPU)
• Takes advantage of CUDA IPC (available since CUDA 4.1) for intra-node communication with multiple GPU adapters per node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory
Performance of MVAPICH2-GPU with GPUDirect RDMA (GDR)
[Figure: GPU-GPU inter-node latency (2.18 us small-message), bandwidth, and bi-bandwidth for MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; up to roughly 10-11x improvement over MV2 without GDR and 2-3x over MV2-GDR 2.0b]
Test bed: MVAPICH2-GDR 2.2 on an Intel Ivy Bridge (E5-2680 v2) node with 20 cores, an NVIDIA Tesla K40c GPU, and a Mellanox ConnectX-4 EDR HCA; CUDA 8.0 and Mellanox OFED 3.0 with GPUDirect RDMA.
Application-Level Evaluation (HOOMD-blue)
[Figure: average timesteps per second (TPS) at 4-32 processes for 64K- and 256K-particle systems, MV2 vs. MV2+GDR; roughly 2X improvement in both cases]
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
[Figure: normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for default, callback-based, and event-based designs]
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) on co-designing MV2-GDR and the COSMO application
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16.
COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
MVAPICH2-GDS: Preliminary Results
[Figure: latency-oriented results (Kernel+Send and Recv+Kernel) and overlap with host computation/communication for default MPI vs. enhanced MPI+GDS, message sizes 1 B to 4 KB]
• Latency-oriented: able to hide the kernel launch overhead
  – 8-15% improvement compared to the default behavior
• Overlap: communication and computation tasks are asynchronously offloaded to the queue
  – 89% overlap with host computation at 128-byte message size
Test bed: Intel Sandy Bridge, NVIDIA Tesla K40c, and Mellanox FDR HCA; CUDA 8.0, OFED 3.4; each kernel is ~50 us.
Will be available in a public release soon.
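For context, the following CUDA C fragment shows the default (non-GDS) "Kernel+Send" pattern, in which the host must synchronize with the GPU before communicating; GDS-style designs instead enqueue the send on the same CUDA stream so the host is free. The kernel and function names are illustrative, and the GDS MPI extensions themselves are not standard MPI, so they are not shown:

```c
#include <mpi.h>
#include <cuda_runtime.h>

/* Stand-in compute kernel (~50 us of work in the experiments above). */
__global__ void compute(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= 2.0f;
}

/* Default "Kernel+Send": launch, synchronize, then send. The
 * cudaStreamSynchronize() is the serialization point that GPUDirect
 * Async removes by queuing the communication on the stream itself. */
void kernel_then_send(float *d_buf, int n, int peer, cudaStream_t stream)
{
    compute<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);
    cudaStreamSynchronize(stream);  /* host blocks until the kernel completes */
    MPI_Send(d_buf, n, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
}
```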
Optimized MVAPICH2-GDR Design
[Figure: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth for MV2-GDR 2.3a vs. MV2 without GDR; 1.91 us small-message latency and roughly 9-11x improvements across the three metrics]
Test bed: MVAPICH2-GDR 2.3a (upcoming) on an Intel Haswell (E5-2687W) node with 20 cores, an NVIDIA Pascal P100 GPU, and a Mellanox ConnectX-5 EDR HCA; CUDA 8.0 and Mellanox OFED 4.0 with GPUDirect RDMA.
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
Intra-node Point-to-Point Performance on OpenPOWER
[Figure: MVAPICH2 small-message latency (0 B to 4 KB), large-message latency (8 KB to 2 MB), bandwidth, and bi-directional bandwidth (1 B to 1 MB) for intra-node transfers]
Platform: OpenPOWER (Power8, ppc64le) dual-socket node with 160 cores (80 per socket).
Inter-node Point-to-Point Performance on OpenPOWER
[Figure: MVAPICH2 small-message latency (0 B to 4 KB), large-message latency (8 KB to 2 MB), bandwidth, and bi-directional bandwidth (1 B to 1 MB) for inter-node transfers]
Platform: two OpenPOWER (Power8, ppc64le) nodes connected with Mellanox EDR (MT4115) HCAs.
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
[Figure: intra-node latency and bandwidth for intra-socket (NVLink) and inter-socket paths, and inter-node latency and bandwidth]
• Intra-node bandwidth: 32 GB/sec (NVLink); intra-node latency: 16.8 us (without GPUDirect RDMA)
• Inter-node bandwidth: 6 GB/sec (FDR); inter-node latency: 22 us (without GPUDirect RDMA)
• Will be available in the upcoming MVAPICH2-GDR
Platform: OpenPOWER (ppc64le) nodes with dual-socket CPUs, 4 Pascal P100-SXM GPUs, and a 4X-FDR InfiniBand interconnect.
Intra-node Point-to-Point Performance on ARMv8
[Figure: MVAPICH2 small-message latency (0.74 microseconds for 4-byte messages), large-message latency, bandwidth, and bi-directional bandwidth for intra-node transfers]
Platform: ARMv8 (aarch64) dual-socket node with 96 cores (48 per socket).
Will be available in the upcoming MVAPICH2.
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and Containers
How Can HPC Clusters with High-Performance Interconnect and Storage Architectures Benefit Big Data Applications?
• What are the major bottlenecks in current Big Data processing middleware (e.g., Hadoop, Spark, and Memcached)?
• Can these bottlenecks be alleviated with new designs that take advantage of HPC technologies?
• Can RDMA-enabled high-performance interconnects benefit Big Data processing?
• Can HPC clusters with high-performance storage systems (e.g., SSDs, parallel file systems) benefit Big Data applications?
• How much performance benefit can be achieved through enhanced designs?
• How do we design benchmarks for evaluating the performance of Big Data middleware on HPC clusters?
Bring HPC and Big Data processing onto a "convergent trajectory"!
Designing Communication and I/O Libraries for Big Data Systems: Challenges
[Figure: layered co-design stack for Big Data systems]
• Applications
• Big Data middleware (HDFS, MapReduce, HBase, Spark, and Memcached)
• Programming models (sockets today; RDMA? upper-level changes?)
• Communication and I/O library: point-to-point communication, threaded models and synchronization, virtualization (SR-IOV), I/O and file systems, QoS & fault tolerance, performance tuning, benchmarks
• Networking technologies (InfiniBand, 1/10/40/100 GigE, and intelligent NICs)
• Commodity computing system architectures (multi- and many-core architectures and accelerators)
• Storage technologies (HDD, SSD, NVM, and NVMe-SSD)
The High-Performance Big Data (HiBD) Project
• RDMA for Apache Spark
• RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
  – Plugins for Apache, Hortonworks (HDP), and Cloudera (CDH) Hadoop distributions
• RDMA for Apache HBase
• RDMA for Memcached (RDMA-Memcached)
• RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
• OSU HiBD Benchmarks (OHB)
  – HDFS, Memcached, HBase, and Spark micro-benchmarks
• http://hibd.cse.ohio-state.edu
• User base: 250 organizations from 31 countries
• More than 23,500 downloads from the project site
• Available for InfiniBand and RoCE; also runs on Ethernet
Performance Numbers of RDMA for Apache Hadoop 2.x: RandomWriter & TeraGen on OSU-RI2 (EDR)
[Figure: execution time vs. data size for IPoIB (EDR) vs. OSU-IB (EDR) on a cluster of 8 nodes with a total of 64 maps]
• RandomWriter: 3x improvement over IPoIB for 80-160 GB file sizes
• TeraGen: 4x improvement over IPoIB for 80-240 GB file sizes
Performance Evaluation of RDMA-Spark on SDSC Comet: HiBench PageRank
[Figure: PageRank total time for Huge, BigData, and Gigantic data sizes, IPoIB vs. RDMA, at 32 worker nodes (768 cores) and 64 worker nodes (1536 cores)]
• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores (768/1536 maps and reducers)
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node
  – 32 nodes/768 cores: total time reduced by 37% over IPoIB (56 Gbps)
  – 64 nodes/1536 cores: total time reduced by 43% over IPoIB (56 Gbps)
Performance Evaluation on SDSC Comet: Astronomy Application
• Kira toolkit [1]: a distributed astronomy image-processing toolkit implemented using Apache Spark
• Source-extractor application using a 65 GB dataset from the SDSS DR2 survey comprising 11,150 image files
• Compares RDMA Spark performance with the standard Apache implementation over IPoIB
[Figure: execution time (sec) for the Kira SE benchmark with the 65 GB dataset on 48 cores; RDMA Spark is 21% faster than Apache Spark (IPoIB)]
1. Z. Zhang, K. Barbary, F. A. Nothaft, E. R. Sparks, M. J. Franklin, D. A. Patterson, and S. Perlmutter, Scientific Computing Meets Big Data Technology: An Astronomy Use Case, CoRR abs/1507.03325, Aug 2015.
M. Tatineni, X. Lu, D. J. Choi, A. Majumdar, and D. K. Panda, Experiences and Benefits of Running RDMA Hadoop and Spark on SDSC Comet, XSEDE '16, July 2016.
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and Containers
Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
  – Unusually large message sizes (on the order of megabytes)
  – Most communication based on GPU buffers
• Existing state of the art
  – cuDNN, cuBLAS, NCCL: scale-up performance
  – CUDA-aware MPI: scale-out performance, but for small and medium message sizes only!
• Proposed: can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
  – Efficient overlap of computation and communication
  – Efficient large-message communication (reductions)
  – What application co-designs are needed to exploit communication-runtime co-designs?
[Figure: scale-up vs. scale-out performance landscape placing cuDNN, cuBLAS, and NCCL (scale-up), gRPC and Hadoop (scale-out), MPI, and the proposed co-designs]
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, PPoPP '17.
Efficient Broadcast: MVAPICH2-GDR and NCCL
• NCCL 1.x had some limitations
  – Only worked within a single node; no scale-out across multiple nodes
  – Degradation across the IOH (socket) for scale-up within a node
• We propose an optimized MPI_Bcast design that exploits NCCL [1]
  – Communication of very large GPU buffers
  – Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits:
  – CUDA-aware MPI_Bcast in MV2-GDR for inter-node transfers (k-nomial tree)
  – NCCL broadcast (ring) for intra-node transfers
• Can pure MPI-level designs achieve similar or better performance than the NCCL-based approach? [2] (See the sketch after this slide.)
[Figure: VGG training time with CNTK on 2-128 GPUs, comparing MV2-GDR, MV2-GDR-NCCL, and MV2-GDR-Opt]
1. A. A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning, EuroMPI 2016. [Best Paper nominee]
2. A. A. Awan, C.-H. Chu, H. Subramoni, and D. K. Panda, Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, arXiv '17 (https://arxiv.org/abs/1707.09414).
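A hedged sketch of the hierarchical idea at the MPI level: split ranks into a per-node communicator and a communicator of node leaders, broadcast across leaders first, then within each node (where the co-design hands intra-node transfers to NCCL for GPU buffers). The helper below assumes the root is global rank 0 and is purely illustrative, not MVAPICH2-GDR internals:

```c
#include <mpi.h>

/* Two-level broadcast, assuming the data originates at global rank 0. */
void hierarchical_bcast(void *buf, int count, MPI_Datatype dtype,
                        MPI_Comm comm)
{
    MPI_Comm node_comm, leader_comm;
    int rank, node_rank;

    MPI_Comm_rank(comm, &rank);

    /* Ranks sharing a node land in the same node_comm; global rank 0
     * gets node_rank 0 on its node. */
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, rank,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader (node_rank == 0) per node forms leader_comm. */
    MPI_Comm_split(comm, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leader_comm);

    /* Phase 1: inter-node broadcast among the node leaders. */
    if (leader_comm != MPI_COMM_NULL) {
        MPI_Bcast(buf, count, dtype, 0, leader_comm);
        MPI_Comm_free(&leader_comm);
    }

    /* Phase 2: intra-node broadcast from each leader (the step the
     * co-design replaces with an NCCL ring for GPU buffers). */
    MPI_Bcast(buf, count, dtype, 0, node_comm);
    MPI_Comm_free(&node_comm);
}
```

A real implementation would create and cache these communicators once rather than per call, and would handle arbitrary roots.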
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
  – Multi-GPU training within a single node
  – Performance degradation for GPUs across different sockets
  – Limited scale-out
• OSU-Caffe: MPI-based parallel training
  – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
  – Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
  – Scale-out on 128 GPUs for training GoogLeNet on the ImageNet dataset
[Figure: GoogLeNet (ImageNet) training time on 8-128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); default Caffe is an invalid use case beyond a single node]
OSU-Caffe is publicly available from http://hidl.cse.ohio-state.edu/
Large-Message Optimized Collectives for Deep Learning
[Figure: latency of Reduce, Allreduce, and Bcast as a function of message size (2-128 MB at 64-192 GPUs) and as a function of GPU count (16-192 GPUs at 64-128 MB)]
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with a large number of GPUs
• Available in MVAPICH2-GDR 2.2 GA
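These large-message collectives map directly onto data-parallel training: after each iteration, every rank holds a local gradient buffer in GPU memory, and an allreduce produces the global sum. A minimal sketch of that pattern with a CUDA-aware MPI (the function name and the omitted scaling step are illustrative):

```c
#include <mpi.h>
#include <cuda_runtime.h>

/* Sum one layer's gradients across all ranks (data parallelism).
 * With a CUDA-aware MPI such as MVAPICH2-GDR, the device pointer is
 * passed straight to MPI_Allreduce. */
void allreduce_gradients(float *d_grad, int count, int world_size)
{
    /* In-place sum across ranks, directly on the device buffer. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, count, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);

    /* Scaling by 1/world_size would follow, typically in a small
     * CUDA kernel; omitted here. */
    (void)world_size;
}
```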
MVAPICH2-GDR vs. Baidu-allreduce
• Initial evaluation shows promising performance gains for MVAPICH2-GDR 2.3a* compared to Baidu-allreduce
[Figure: Allreduce latency (log scale) vs. message size on 8 GPUs across 4 nodes, Baidu-allreduce vs. MVAPICH2-GDR (MPI_Allreduce); ~11% improvement at small-to-medium sizes and up to ~30X at large sizes]
*MVAPICH2-GDR 2.3a will be available soon.
High-Performance Deep Learning over Big Data (DLoBD) Stacks
• Benefits of Deep Learning over Big Data (DLoBD)
  – Easily integrate deep learning components into Big Data processing workflows
  – Easily access data stored in Big Data systems
  – No need to set up new, dedicated deep learning clusters; reuse existing big data analytics clusters
• Challenges
  – Can RDMA-based designs in DLoBD stacks improve performance, scalability, and resource utilization on high-performance interconnects, GPUs, and multi-core CPUs?
  – What are the performance characteristics of representative DLoBD stacks on RDMA networks?
• Characterization of DLoBD stacks
  – CaffeOnSpark, TensorFlowOnSpark, and BigDL
  – IPoIB vs. RDMA; in-band vs. out-of-band communication; CPU vs. GPU; etc.
  – Performance, accuracy, scalability, and resource utilization
  – RDMA-based DLoBD stacks (e.g., BigDL over RDMA-Spark) can achieve a 2.6x speedup compared to the IPoIB-based scheme while maintaining similar accuracy
[Figure: epoch time and accuracy over 18 epochs, IPoIB vs. RDMA; RDMA reaches the same accuracy 2.6X faster]
X. Lu, H. Shi, M. H. Javed, R. Biswas, and D. K. Panda, Characterizing Deep Learning over Big Data (DLoBD) Stacks on RDMA-capable Networks, HotI 2017.
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and Containers
Can HPC and Virtualization be Combined?
• Virtualization has many benefits
  – Fault tolerance
  – Job migration
  – Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
  – OpenStack, Docker, and Singularity
J. Zhang, X. Lu, J. Jose, R. Shi, and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV-based Virtualized InfiniBand Clusters?, EuroPar '14.
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi, and D. K. Panda, High Performance MPI Library over SR-IOV-enabled InfiniBand Clusters, HiPC '14.
J. Zhang, X. Lu, M. Arnold, and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds, CCGrid '15.
Application-Level Performance on Chameleon
[Figure: SPEC MPI2007 execution time (milc, leslie3d, pop2, GAPgeofem, zeusmp2, lu) and Graph500 execution time at problem sizes (scale, edgefactor) from (22,20) to (26,16), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]
• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI2007 with 128 processes
Application-Level Performance on Docker with MVAPICH2
[Figure: NAS Class D (MG, FT, EP, LU, CG) execution time and Graph500 BFS execution time at scale 20, edgefactor 16 (1 container x 16 processes, 2 x 8, 4 x 4), comparing Container-Def, Container-Opt, and Native]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 73% execution time reduction for NAS and Graph500, respectively
• Compared to Native, less than 9% and 5% overhead for NAS and Graph500, respectively
Application-Level Performance on Singularity with MVAPICH2
[Figure: NPB Class D (CG, EP, FT, IS, LU, MG) execution time and Graph500 BFS execution time at problem sizes (scale, edgefactor) from (22,16) to (26,20), Singularity vs. Native]
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
J. Zhang, X. Lu, and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC '17.
Concluding Remarks
• Next-generation HPC systems need to be designed with a holistic view of Big Data, Deep Learning, and Cloud
• Presented some of the approaches and results along these directions
• Enable the Big Data, Deep Learning, and Cloud communities to take advantage of modern HPC technologies
• Many other open issues need to be solved
• Next-generation HPC professionals need to be trained along these directions
Funding Acknowledgments
[Slide: logos of funding agencies and equipment sponsors]
Personnel Acknowledgments
Current students: A. Awan (Ph.D.), M. Bayatpour (Ph.D.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), N. Islam (Ph.D.), M. Li (Ph.D.), M. Rahman (Ph.D.), D. Shankar (Ph.D.), A. Venkatesh (Ph.D.), J. Zhang (Ph.D.)
Past students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), K. Kulkarni (M.S.), R. Kumar (M.S.), S. Krishnamoorthy (M.S.), K. Kandalla (Ph.D.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), S. Sur (Ph.D.), H. Subramoni (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)
Current research scientists: X. Lu, H. Subramoni
Past research scientists: K. Hamidouche, S. Sur
Current post-doc: A. Ruhela
Past post-docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang
Current research specialist: J. Smith
Past programmers: D. Bureddy, M. Arnold, J. Perkins
Multiple Positions Available in My Group
• Looking for bright and enthusiastic personnel to join as
  – Post-Doctoral Researchers (博士后)
  – PhD Students (博士研究生)
  – MPI Programmer/Software Engineer (MPI软件开发工程师)
  – Hadoop/Big Data Programmer/Software Engineer (大数据软件开发工程师)
• If interested, please contact me at this conference and/or send an e-mail to [email protected]-state.edu or [email protected]-state.edu
Thank You!
[email protected]-state.edu
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/