
Page 1

Intel® Omni-Path Tutorial
Ravindra Babu Ganapathi, Product Owner / Technical Lead, Omni-Path Libraries
Sayantan Sur, Senior Software Engineer

Page 2

Legal Notices and Disclaimers

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life-saving, life-sustaining, critical control or safety systems, or in nuclear facility applications.

Intel products may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel may make changes to dates, specifications, product descriptions, and plans referenced in this document at any time, without notice.

This document may contain information on products in the design phase of development. The information herein is subject to change without notice. Do not finalize a design with this information.

Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.

Intel Corporation or its subsidiaries in the United States and other countries may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

Wireless connectivity and some features may require you to purchase additional software, services or external hardware.

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations

Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Other names and brands may be claimed as the property of others.

Copyright © 2017 Intel Corporation. All rights reserved.

Page 3

What’s new with Intel® Omni-Path Architecture

Page 4

Intel® OPA Hitting Its Stride in 100G HPC Fabrics

Top500 listings continue to grow

4 of the Top 15 systems, 13 of the Top 100 systems

36% more systems than on the Nov 2016 list

First Skylake systems

Almost 10% of total Top500 Rmax performance

New deployments all over the globe

All geos, and new deployments in HPC Cloud and AI

Expanding capabilities

NVMe over Fabric, Multi-modal Data Acceleration (MDA), and robust support for heterogeneous clusters


Source: Top500.org

Page 5

Intel® OPA in the HPC Cloud Today

https://www.hpcwire.com/2017/04/18/knights-landing-processor-omni-path-makes-cloud-debut/

Penguin Computing, Rescale

http://insidehpc.com/2017/03/penguin-computing-adds-omni-path-lustre-hpc-cloud/

Cloud providers are deploying Intel® Omni-Path in single-tenant, non-virtualized HPC cloud environments

Page 6

Next Up for Intel® OPA: Artificial Intelligence

Intel offers a complete AI Portfolio

From CPUs to software to computer vision to libraries and tools

Intel® OPA advantages: Breakthrough performance on scale-out apps

Low latency

High bandwidth

High message rate

GPU Direct RDMA support

Xeon Phi Integration

[Graphic: Intel AI portfolio spanning things & devices, the cloud and data center, and accelerant technologies]

World-class interconnect solution for shorter time to train

Page 7

NVMe* over OPA: Ready for Prime Time

OPA + Intel® Optane™ Technology

High Endurance

Low latency

High Efficiency

Total NVMe over Fabric Solution!

NVMe-over-OPA status

Supported in 10.4.1 release

Compliant with the NVMe over Fabrics 1.0 specification

Target and host system configuration: 2 x Intel® Xeon® CPU E5-2699 v3 @ 2.30 GHz, Intel® Server Board S2600WT, 128 GB DDR4, CentOS 7.3.1611, kernel 4.10.12, IFS 10.4.1, NULL-BLK, FIO 2.19; hfi1 module options: krcvqs=8 sge_copy_mode=2 wss_threshold=70

*Other names and brands may be claimed as the property of others.

~1.5M 4K random IOPS! 99% bandwidth efficiency!

Only Intel is delivering a total NVMe over Fabric solution!
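For context, a host typically attaches to an NVMe-oF target over an RDMA-capable fabric with nvme-cli; the address and NQN below are placeholders, and this exact command is an assumption rather than something shown on the slide: nvme connect -t rdma -a 192.168.100.10 -s 4420 -n nqn.2017-07.com.example:nvme-target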

Page 8

Intel® Omni-Path: Maximizing Support for Heterogeneous Clusters

[Diagram: example node types on one fabric - Intel® Xeon Phi™ processor-F (Knights Landing) with integrated Omni-Path, Xeon Phi™ (Knights Landing) with a discrete HFI (WFR) PCI card, Intel® Xeon® processors (Haswell and Broadwell) with an HFI PCI card, Intel® Xeon® processor-F (Skylake-F) nodes with HFI, and Skylake-F nodes with GPUs and GPU memory attached over the PCIe bus]

Greater flexibility for creating compute islands depending on user requirements

GPU Direct v3 support in the 10.3 release

Page 9

Multi-Modal Data Acceleration (MDA): Optimizing Data Movement through the Fabric

[Diagram: software stack - MPI libraries (Intel® MPI, Open MPI, MVAPICH2, IBM Spectrum & Platform MPI) and SHMEM run over Intel® Omni-Path PSM2, while I/O-focused Upper Layer Protocols (ULPs) run over the verbs provider/driver; both paths use the Intel® Omni-Path Host Fabric Interface (HFI), the enhanced switching fabric, and the Omni-Path wire transport]

Designed for Performance at Extreme Scale

Accelerated RDMA

Multi-Modal Data Acceleration: automatically chooses the most efficient data transfer path

VERBS: storage traffic (large data packets, bandwidth sensitive)

MPI: HPC application traffic (small/medium data packets, latency and bandwidth sensitive)

Accelerated RDMA: performance enhancements for large-message reads and writes

Performance Scaled Messaging 2 (PSM2): efficient support for MPI, 1/10 the code path, high bandwidth and message rate, consistent latency independent of cluster scale

Page 10

PSM2 Enhancements

Page 11

Intel® Omni-Path Architecture HPC Design Focus: Architected for Your MPI Application

[Diagram: the same software stack as the previous slide - MPI libraries and SHMEM over Intel® Omni-Path PSM2, and I/O-focused Upper Layer Protocols (ULPs) over the verbs provider and driver - with an additional path: emerging OpenFabrics Interface (OFI) enabled HPC middleware (Intel® MPI, Open MPI, MPICH/CH4, SHMEM) running over Libfabric. All paths use the Intel® Omni-Path Host Fabric Interface (HFI), the enhanced switching fabric, and the wire transport. Designed for performance at extreme scale.]

Page 12

PSM2 – Performance Scaled Messaging 2

• The PSM2 API is a low-level, high-performance communication interface

• Its semantics match those of compute middleware such as MPI and OFI

• A PSM2 endpoint (EP) maps to a hardware context; each EP is associated with a matched queue (MQ)

• APIs are optimized for both latency and bandwidth

• PIO/eager path for small-message latency

• DMA/expected path for optimal bandwidth at large message sizes

• Intel MPI, Open MPI, and MVAPICH2 use the PSM2 transport for the Omni-Path fabric (see the example command below)

• PSM2 Programmer’s Guide available @ https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_PSM2_PG_H76473_v6_0.pdf
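For reference, a common way to request the PSM2 path explicitly from Open MPI is through its MTL layer (assumed invocation, not shown on the slide): mpirun -np 2 --mca pml cm --mca mtl psm2 ./osu_latency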

Page 13

PSM2 Multi Endpoint / OFI Scalable EP

[Diagram: one MPI rank with user application threads (independent of PSM threads); each MPI thread drives its own PSM endpoint (EP0-EP3), each endpoint has its own matched queue (MQ0-MQ3) and maps to its own HFI hardware context (CX0-CX3). EP and MQ creation are not thread safe: one EP per thread, a single thread per EP.]

Page 14

PSM2 Multiple Endpoint

• OFI PSM2 provider now supports Scalable Endpoints

• A single process can open and concurrently use multiple PSM2 endpoints

• Endpoints are directly mapped to hardware contexts, giving full parallelism

• Enables hybrid programming models such as MPI + OpenMP

• Performance scales similarly to multiple MPI processes

• One MPI process with multiple threads can saturate multiple HFIs

• Multiple threads using a single endpoint are not supported

• Higher-level software is expected to use locks (see the libfabric sketch below)
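To make the OFI side of this concrete, below is a minimal libfabric sketch that requests multiple transmit contexts from the psm2 provider via a scalable endpoint. It is illustrative only: completion-queue and address-vector binding, fi_enable(), and error handling are omitted, and the requested context count is an assumed value, not something specified on the slide.

#include <string.h>
#include <rdma/fabric.h>
#include <rdma/fi_domain.h>
#include <rdma/fi_endpoint.h>

#define NUM_CTX 4                                  /* assumed: one TX/RX context per thread */

int main(void) {
    struct fi_info *hints = fi_allocinfo();
    hints->ep_attr->type = FI_EP_RDM;              /* reliable, unconnected endpoint */
    hints->caps = FI_TAGGED;                       /* tag matching, as MPI-style middleware needs */
    hints->fabric_attr->prov_name = strdup("psm2");
    hints->ep_attr->tx_ctx_cnt = NUM_CTX;          /* ask for independent contexts */
    hints->ep_attr->rx_ctx_cnt = NUM_CTX;

    struct fi_info *info;
    if (fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info))
        return 1;

    struct fid_fabric *fabric;
    struct fid_domain *domain;
    fi_fabric(info->fabric_attr, &fabric, NULL);
    fi_domain(fabric, info, &domain, NULL);

    /* One scalable endpoint; each thread then drives its own transmit context. */
    struct fid_ep *sep, *tx[NUM_CTX];
    fi_scalable_ep(domain, info, &sep, NULL);
    for (int i = 0; i < NUM_CTX; i++)
        fi_tx_context(sep, i, info->tx_attr, &tx[i], NULL);

    /* ... bind CQs and an address vector, fi_enable(sep), then each thread posts on its own tx[i] ... */
    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}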


Page 15

Inter and Intra-Node GPUDirect

GPUDirect RDMA v3 Support

GPUDirect RDMA support

Optimized Transfer through Host Buffer

GPUDirect P2P Intra-Node Support

Ranks/processes within the same node

Both single- and multi-GPU support

Direct GPU to GPU acceleration

[Diagram: CPU with host memory connected over PCIe to two GPUs, each with its own GPU memory]

Page 16

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. Copyright © 2017, Intel Corporation.*Other names and brands may be claimed as the property of others.

GPU Buffer Transfer Latency & Bandwidth - Ohio State MicroBenchmarks v5.3.2
Intel® Omni-Path Architecture - Intel® Xeon® CPU E5-2697 v3
NVIDIA* Corporation GK110BGL [Tesla K40c]*

[Charts: osu_latency -d cuda D D, latency (microseconds) vs. message size (bytes); osu_bibw -d cuda D D, bandwidth (MB/s) vs. message size (bytes)]

Intel® Xeon® processor E5-2697 v3, Red Hat Enterprise Linux Server release 7.2 (Maipo), 3.10.0-327.el7.x86_64. 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] OSU Microbenchmarks version 5.3.2 Open MPI 1.10.4-cuda-hfi as packaged with IFS 10.5.0.0.119. GPU device connected to second socket. Benchmark processes pinned to second socket. Dual socket servers connected back to back with no switch hop. Intel® Turbo Boost Technology enabled, Intel® Hyper-Threading Technology enabled.

Example run command: mpirun -np 2 --map-by ppr:1:node -host host1,host2 -x PSM2_CUDA=1 -x PSM2_GPUDIRECT=1 -x HFI_UNIT=1 --allow-run-as-root taskset -c 14 ./osu_latency -d cuda D D

Last updated: June 29, 2017

Panels: Latency, Uni-dir Bandwidth, Bi-dir Bandwidth

PSM2_GPUDIRECT_RECV_THRESH: specifies a threshold value in bytes. If the threshold is exceeded, the GPUDirect* RDMA feature will not be used on the receive side of a connection. Default: PSM2_GPUDIRECT_RECV_THRESH=0 (bytes).

PSM2_GPUDIRECT_SEND_THRESH: specifies a threshold value in bytes. If the threshold is exceeded, the GPUDirect* RDMA feature will not be used on the send side of a connection. Default: PSM2_GPUDIRECT_SEND_THRESH=30000 (bytes).

[Chart: osu_bw -d cuda D D, bandwidth (MB/s) vs. message size (bytes)]

Callouts: ~7 µsec latency; ~10 GB/s and ~17.5 GB/s bandwidth

Page 17

GPU Buffer Transfer Latency & Bandwidth - Ohio State MicroBenchmarks v5.3.2
Intel® Omni-Path Architecture - Intel® Xeon® CPU E5-2697 v3
NVIDIA* Corporation GK110BGL [Tesla K40c]*

[Chart: osu_bw -d cuda D D, bandwidth (MB/s) vs. message size (bytes)]

Intel® Xeon® processor E5-2697 v3, Red Hat Enterprise Linux Server release 7.2 (Maipo), 3.10.0-327.el7.x86_64. 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] OSU Microbenchmarks version 5.3.2 Open MPI 1.10.4-cuda-hfi as packaged with IFS 10.5.0.0.119. GPU device connected to second socket. Benchmark processes pinned to second socket. Dual socket servers connected back to back with no switch hop. Intel® Turbo Boost Technology enabled, Intel® Hyper-Threading Technology enabled.

Example run command: mpirun -np 2 --map-by ppr:1:node -host host1,host2 -x PSM2_CUDA=1 -x PSM2_GPUDIRECT=1 -x HFI_UNIT=0 -x PSM2_GPUDIRECT_RECV_THRESH=8192 --allow-run-as-root taskset -c 14 ./bw -d cuda D D

Last updated: June 29, 2017

Uni-dir Bandwidth

PSM2_GPUDIRECT_RECV_THRESH: specifies a threshold value in bytes. If the threshold is exceeded, the GPUDirect* RDMA feature will not be used on the receive side of a connection. Default: PSM2_GPUDIRECT_RECV_THRESH=0 (bytes).

PSM2_GPUDIRECT_SEND_THRESH: specifies a threshold value in bytes. If the threshold is exceeded, the GPUDirect* RDMA feature will not be used on the send side of a connection. Default: PSM2_GPUDIRECT_SEND_THRESH=30000 (bytes).

GPUDirect* works best when both the GPU and HFI are in the same PCIe* root complex. Two environment variables are designed to improve performance:

Chart series: GPU and HFI on the same CPU socket; GPU and HFI on different CPU sockets; GPU and HFI on different CPU sockets with PSM2_GPUDIRECT_RECV_THRESH=8192 (bytes)

Page 18

PSM2 Multi-Rail and Device Selection Enhancements

• PSM2 environment variables for optimal application performance (example command below)

• PSM2_MULTIRAIL: takes the value 0 to 2
  • Value 0: multi-rail turned off
  • Value 1: multi-rail across all the HFIs within the node
  • Value 2: multi-rail within a NUMA domain (root complex)

• PSM2_MULTIRAIL_MAP: unit:port,unit:port,unit:port…

• HFI_SELECTION_ALG: “Round Robin” or “Packed”
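Example of enabling multi-rail explicitly on a hypothetical node with two HFIs (values are assumed for illustration, following the unit:port notation above; the mpirun -x form matches the Open MPI commands used elsewhere in this deck): mpirun -np 2 -host host1,host2 -x PSM2_MULTIRAIL=1 -x PSM2_MULTIRAIL_MAP=0:1,1:1 ./osu_bw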


Page 19

NUMA Configuration: Four Devices per Node

Dual-socket Intel® Xeon® CPU E5-2699 v4 (commonly labeled Broadwell), 2.2 GHz for all cores, Intel® Hyper-Threading Technology turned on. Each system has 4 Intel® OPA HFIs, one in each of the 4 x16 PCIe slots, and an Intel® QuickPath Interconnect (QPI) link between the sockets. Two such servers are connected over a switch fabric using an Omni-Path Edge switch.

Page 20

Platform Impact: OSU Latency, Local vs Remote HFI

Up to 32% latency impact from using the remote HFI; < 1 µsec latency even with a switch hop.

(Same system configuration as the previous slide: dual-socket Intel® Xeon® CPU E5-2699 v4, 4 Intel® OPA HFIs per node, two servers connected through an Omni-Path Edge switch.)

Page 21

Accelerated RDMA

Page 22

Intel® Omni-Path Architecture (Intel® OPA) RDMA Bandwidth Performance - Accelerated RDMA

Last Update: March 10 2017

Intel® Xeon® processor E5-2697a v4, Intel® Turbo Boost Technology enabled, Intel® Hyper-Threading Technology disabled. Memory: 2133 MHz DDR4 per node . 3.10.0-327.36.3.el7.x86_64, RHEL 7.2. Intel® OPA: ib_write_bw version 5.33. Average bandwidth is reported. Server side command: ib_write_bw -a (or -s Size) -F -R. Client side command: ib_write_bw -a (or -s Size) -F -R <server IPoIB address>. Intel® OPA HFI - Intel Corporation Device 24f0 – Series 100 HFI ASIC. OPA Switch: Series 100 Edge Switch – 48 port. IFS 10.3.1.0.22. Contents of /etc/modprobe.d/hfi1.conf: options hfi1 cap_mask=0x4c09a01cbba eager_buffer_size=8388608 krcvqs=4 pcie_caps=0x51 max_mtu=10240. 1. CPU statistics measured from /proc/stat immediately before and after the benchmark. Total CPU % is calculated as △(user + system)/ △(user + system + idle) for all cores. At 1 GB message size, ib_read_bw client : 6.24% CPU Acc. RDMA disabled, 3.70% CPU Acc. RDMA enabled. ib_write_bw server: 6.23% CPU Acc RDMA disabled, 3.70% CPU Acc. RDMA enabled

[Charts: ib_read_bw and ib_write_bw, 1 core per node; bandwidth (MB/s) vs. message size from 1 B to 10 GB; HIGHER is better]

Up to 40% lower CPU utilization with Accelerated RDMA enabled1

Supporting high bandwidth across message sizes

[Chart annotations: 64 MB, 256 MB, 1 GB. Key: Acc RDMA enabled]

Multiple data transmission methods: Small, Medium, Large

Page 23

SKL (Skylake) Performance

Page 24

Intel® QuickPath Interconnect vs UltraPath Interconnect

[Diagram: two dual-socket nodes (Node 1, Node 2), each with an Intel® OPA link attached to one socket over a PCIe* link]

Example: local MPI processes on dual-socket servers (no QPI or UPI involvement): “Local socket” latency

For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

• Previous generation Intel® Xeon® processors were interconnected with the Intel® QuickPath Interconnect (QPI)

• Intel® Xeon® Scalable Family processors (codename Skylake) are now connected with the Intel® UltraPath Interconnect

Page 25

Intel® QuickPath Interconnect vs UltraPath Interconnect

For systems with one Intel® OPA adapter installed on a single socket, any data communication involving cores on the other socket needs to traverse the QPI/UPI link

[Diagram: two dual-socket nodes; each Intel® OPA link attaches to one socket over a PCIe* link, so traffic from cores on the other socket crosses the QPI/UPI link between the sockets]

Example: remote MPI processes on dual-socket servers: “Remote socket” latency

For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

Page 26

Intel® QuickPath Interconnect vs UltraPath Interconnect

Benefits of the UPI vs QPI for processes on the remote socket:

                                     Intel® Xeon® E5-2697A v4 CPU1 (QPI)   Intel® Xeon® Platinum 8170 CPU2 (UPI)
MPI latency using local socket       1.13 µsec                             1.10 µsec
MPI latency using remote socket      1.38 µsec                             1.26 µsec
Difference = intra-socket latency    0.25 µsec                             0.16 µsec (up to 36% faster)
(x2 servers)

See configuration item #2. *CPU frequency fixed to 2.1 GHz for both processors.

For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

Page 27

MPI Collective Performance - Ohio State Microbenchmarks: MPI process local to HFI

See configuration item #3

[Chart: latency (µsec) of osu_barrier, osu_bcast, osu_allgather, osu_allreduce, and osu_alltoall at 8 B, 16 KB, and 1 MB message sizes, plotted as % time relative to the Intel® Xeon® E5-2697a v4 (lower is better); on the Intel® Xeon® Platinum 8170, most points are 1% to 14% lower, one is 33% lower, and two are higher (27% and 0.5%)]

Optimization underway

For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

Page 28

MPI Collective Performance - Ohio State Microbenchmarks: MPI process remote to HFI

[Chart: latency (µsec) of osu_barrier, osu_bcast, osu_allgather, osu_allreduce, and osu_alltoall at 8 B, 16 KB, and 1 MB message sizes, plotted as % time relative to the Intel® Xeon® E5-2697a v4 (lower is better); on the Intel® Xeon® Platinum 8170, most points are 6% to 39% lower, one is 21% higher]

See configuration item #3

Optimization underway


For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

Page 29

MPI Collective Performance - Ohio State Microbenchmarks: Intel® QuickPath vs UltraPath Interconnect

[Chart: latency increase (%) due to using the remote socket, scale 0 to 100%, for osu_barrier, osu_bcast, osu_allgather, osu_allreduce, and osu_alltoall at 8 B, 16 KB, and 1 MB (lower is better), comparing the Intel® Xeon® E5-2697a v4 processor2 with QPI against the Intel® Xeon® Platinum 8170 processor1 with UPI; significant improvement due to UPI]

See configuration item #3

For more detail on Intel® UPI see: https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview

Page 30

OFI over Omni-Path

Page 31

Libfabric on Intel® OPA

• Libfabric PSM2 provider uses the public PSM2 API

• Its semantics match those of compute middleware such as MPI

• A PSM2 endpoint (EP) maps to a hardware context; each EP is associated with a matched queue

• Regular endpoints share a single hardware context

• Scalable endpoints provide independent hardware contexts

• The PSM2 library uses PIO/eager for small messages and DMA/expected for bandwidth with large messages

• Current focus is full OFI functionality

• Performance optimizations are possible to reduce call overheads and provide a better semantic mapping of OFI directly onto the HFI (see the provider-selection sketch below)
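A minimal sketch of how an OFI consumer can ask libfabric specifically for the PSM2 provider (standard libfabric calls; the capability bits requested here are an assumption for illustration, not necessarily what any given MPI asks for):

#include <stdio.h>
#include <string.h>
#include <rdma/fabric.h>

int main(void) {
    struct fi_info *hints = fi_allocinfo();
    hints->ep_attr->type = FI_EP_RDM;                 /* reliable, unconnected endpoints */
    hints->caps = FI_TAGGED | FI_MSG;                 /* tag matching maps well to PSM2 matched queues */
    hints->fabric_attr->prov_name = strdup("psm2");   /* restrict discovery to the psm2 provider */

    struct fi_info *info = NULL;
    int ret = fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        return 1;
    }
    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider %s on fabric %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}

Setting FI_PROVIDER=psm2 in the environment filters provider discovery the same way.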

[Diagram: the Libfabric framework exposes fabric interfaces to applications; the PSM2 provider sits beneath it and uses the PSM2 library, which reaches the HFI through kernel interfaces]

Page 32

Intel® OPA/PSM Provider Performance

Performance is comparable for most message sizes:
• 6.7% overhead for 64 B latency, and 6.1% overhead for 16 B bandwidth
• Optimizations are ongoing

Intel Xeon E5-2697 (Ivy Bridge) 2.6GHz; Intel Omni-Path; RHEL 7.3; libfabric 1.5.0 alpha; IFS 10.5.0.0.115

[Charts: relative latency and relative bandwidth (Netpipe), OFI/PSM2 vs. native PSM2, ratio vs. message size from 1 byte to 4 MB]

Page 33

MPI/OFI Design Principles

Directly map MPI constructs to OFI features as much as possible

Focus on optimizing communication calls

• Avoid branches, function calls, cache misses, and memory allocation

MPI                                          OFI
MPI_Send, MPI_Recv, etc.                     fi_tsend, fi_trecv
Window                                       Memory region (fi_mr(3))
Comm. calls (MPI_Put, MPI_Get, etc.)         fi_write, fi_read, … (fi_rma(3))
Atomic comm. calls (MPI_Accumulate, etc.)    fi_atomic, … (fi_atomic(3))
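As a sketch of the first row of this mapping, here is roughly how a blocking MPI_Send-style operation can be expressed with OFI tagged messaging, assuming an already-initialized endpoint ep, a resolved peer address peer, and a bound completion queue cq (their setup is omitted; the tag encoding is a made-up example, not MPICH's actual scheme):

#include <stddef.h>
#include <stdint.h>
#include <rdma/fabric.h>
#include <rdma/fi_errno.h>
#include <rdma/fi_eq.h>
#include <rdma/fi_tagged.h>

/* Hypothetical helper: MPI_Send-like semantics built on fi_tsend plus a CQ poll. */
int send_like_mpi(struct fid_ep *ep, struct fid_cq *cq, fi_addr_t peer,
                  const void *buf, size_t len, int mpi_tag, int comm_id)
{
    /* Example encoding: communicator id in the upper bits, MPI tag in the lower bits. */
    uint64_t tag = ((uint64_t)comm_id << 32) | (uint32_t)mpi_tag;

    ssize_t ret = fi_tsend(ep, buf, len, NULL, peer, tag, NULL);
    if (ret)
        return (int)ret;

    /* MPI_Send may return once the buffer is reusable; here we simply wait for
       the send completion to appear on the completion queue. */
    struct fi_cq_tagged_entry comp;
    do {
        ret = fi_cq_read(cq, &comp, 1);
    } while (ret == -FI_EAGAIN);
    return ret < 0 ? (int)ret : 0;
}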

Page 34

MPICH-OFI

Open-source implementation based on MPICH

• Uses the new CH4 infrastructure

• Co-designed with MPICH community

• Targets existing and new fabrics via Open Fabrics Interface (OFI)

• Ethernet/sockets, Intel® Omni-Path, Cray Aries*, IBM BG/Q*, InfiniBand*

• OFI is intended as the default communication interface for future versions of MPICH (see the configure example below)
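Example of building MPICH with the CH4/OFI path (assumed configure line, not from the slides; the libfabric install path is a placeholder): ./configure --with-device=ch4:ofi --with-libfabric=/path/to/libfabric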

Page 35

Improving on MPICH/CH3 Overheads

CH4 significantly reduces MPI instructions

MPICH/CH4:
- 35 instructions with static build
- 85 instructions with dynamic build
- Data collected using GCC v5.2.1 on Xeon E5-2697 (Haswell)

[Chart: MPI_Send instruction counts (dynamic link), scale 0 to 1500, for MPICH/CH4/OFI/PSM2 vs. MPICH/CH3/OFI/PSM2, split into MPI and OFI/PSM2 portions]

Page 36

MPICH Performance Comparison (KNL + OPA)

Around 20% reduction in latency, gains in bandwidth for small messages

Intel Xeon Phi 7250 (Knights Landing) 1.4GHz; Intel Omni-Path; RHEL 7.3; GCC v5.3.1; OSU Micro-benchmarks 5.3

[Charts: relative latency and relative bandwidth (OSU Microbenchmarks), CH4/OFI/PSM2 vs. CH3/OFI/PSM2, ratio vs. message size for small messages (0 to 128 bytes)]

Page 37

Thread-split usage model with Intel® MPI Library: OpenMP* example

#include <mpi.h>

#define n 8   /* number of threads / thread-split communicators (value not given on the slide) */

MPI_Comm split_comm[n];

int main() {
    int i, provided;
    MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);

    /* Create thread-split communicators: one duplicate of MPI_COMM_WORLD per thread */
    for (i = 0; i < n; i++)
        MPI_Comm_dup(MPI_COMM_WORLD, &split_comm[i]);

    /* Use a thread-split communicator per thread */
    #pragma omp parallel for num_threads(n)
    for (i = 0; i < n; i++) {
        int j = i;
        MPI_Allreduce(MPI_IN_PLACE, &j, 1, MPI_INT, MPI_SUM, split_comm[i]);
    }

    MPI_Finalize();
}
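Example build and run for the listing above (assumed command line, not from the slide): mpiicc -qopenmp thread_split.c -o thread_split && mpirun -n 2 ./thread_split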

Page 38

Uni-dir BW MT benchmark

omp parallel {
    Rank #0, thread #i:  n x Isend(msg, comm #i);  Waitall(n x sreq);  Recv(acc, comm #i)
    Rank #1, thread #i:  n x Irecv(msg, comm #i);  Waitall(n x rreq);  Send(acc, comm #i)
    Barrier(comm #i)
    omp barrier
}

Early results with Intel® MPI Library prototype utilizing multiple endpoints

[Chart: Multi-threads vs. Multi-ranks Uni-directional Bandwidth, 2 nodes; bandwidth (MB/s, scale 0 to ~25,000) vs. message size (bytes)]

Chart series:

1 rank, 1 thread per node

1 rank, 2 threads per node

2 ranks, 1 thread per node

1 rank, 8 threads per node

8 ranks, 1 thread per node

~25 GB/s peak; at high thread concurrency, bandwidth is similar to the multi-rank case.

Uni-dir BW MT benchmark (pseudocode above). Intel® Omni-Path Architecture, dual-rail; Intel® Xeon® CPU E5-2699 v4, 2.2 GHz. This feature may be included in future versions of Intel MPI.

Page 39

Legal Disclaimer & Optimization Notice

INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Copyright© 2017, Intel Corporation. All rights reserved. Intel, the Intel logo, Atom, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries.

Optimization Notice

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804
