Steve Thorne & Ananth Sankaranarayanan: Advancing AI Performance with Intel® Xeon® Scalable Systems

TRANSCRIPT

Page 1
Page 2

Steve Thorne & Ananth Sankaranarayanan · November 15, 2018

Advancing AI Performance with Intel® Xeon® Scalable Systems

Page 3

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. No computer system can be absolutely secure.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

Intel, the Intel logo, Pentium, Celeron, Atom, Core, Xeon, Movidius, Saffron and others are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

© Intel Corporation.

Notices & Disclaimers

Page 4

Advancing AI Performance with Intel® Xeon® Scalable

Time to solution for production AI
Maximize performance: use optimized SW
Workload flexibility & scalability

Journey to Production AI
Deep Learning in Data Centers
Intel AI Focus Pillars

Page 5

Journey to Production AI

Page 6

Intersection of Data and Compute Growth

Data generated daily, by 2020:
- 4 TB per autonomous vehicle
- 5 TB per connected airplane
- 1 PB per smart factory
- 1.5 GB per average internet user
- 750 PB per cloud video provider

Driving business insights, operational insights, and security insights.

Source: Amalgamation of analyst data and Intel analysis.

Page 7

AI with Intel: customer examples across industries

- Consumer: Insurance provider. Recommend products to customers; increased accuracy [1]
- Health: Kyoto University. Computational screening of drug candidates; chose Xeon over GPU, decreased time & cost [2]
- Finance: US market exchange. Time-based pattern detection; reduction in search costs [3]
- Retail: JD.com. Image recognition; gain by switching from GPUs to Xeon [4]
- Government: NASA. Furthered the search for lunar ice; automated lunar crater detection using a CNN [5]
- Energy: Leading oil company. Detect and classify corrosion levels; full automation [6]
- Transport: SERPRO. Issuing traffic tickets for violations recorded by cameras; full automation [7]
- Industrial: Solar farm. Automate increased inspection frequency; reduced time to train [8]
- Other: Intel projects. Silicon packaging inspection; significant speedup [9]

1. https://saffrontech.com/wp-content/uploads/2017/07/Personalization-Case-Study_Final-1.pdf
2. https://insidehpc.com/2016/07/superior-performance-commits-kyoto-university-to-cpus-over-gpus
3. Refer slide 64
4. https://software.intel.com/en-us/articles/building-large-scale-image-feature-extraction-with-bigdl-at-jdcom
5. Intel.ly/fdl2017
6. Refer slide 65
7. Refer slide 66
8. https://software.intel.com/en-us/articles/automatic-defect-inspection-using-deep-learning-for-solar-farm
9. https://builders.intel.com/docs/aibuilders/manufacturing-package-fault-detection-using-deep-learning.pdf

For more information refer to builders.intel.com/ai/solutionslibrary. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance Source: Refer builders.intel.com/ai/solutionslibrary

Page 8

AI with Intel: customer examples (continued)

Health:
- UCSF: Automated MRI classification system; increased accuracy [1]
- GE Healthcare: CT scan classification radiology workflow; higher inference throughput [2]
- Novartis: Accelerating image analysis for drug discovery; faster time to train [3]
- Aier Eye: Time-based pattern detection; faster image loading, faster time to classification [4]
- Zhejiang University: Thyroid cancer screening; augmented AI solution accuracy [5]
- Cybraics: Furthered healthcare security; $ savings per patient by thwarting attacks using AI [6]
- Accuhealth: Predictive analytics for preventative care; reduction in ER visits [7]

Finance:
- UnionPay: Detect fraudulent transactions with more accuracy; increased accuracy [8]
- Credit card provider: Predict consumer behavior; higher precision and better recall [9]

Other:
- Ziva: Computer-generated realistic characters; capacity increase & speedup [10]
- GigaSpaces: Lower call-center latency, automated prediction routing; increase in call-center performance [11]

1. https://cdn.oreillystatic.com/en/assets/1/event/269/Automatic%203D%20MRI%20knee%20damage%20classification%20with%203D%20CNN%20using%20BigDL%20on%20Spark%20Presentation.pdf
2. https://simplecore.intel.com/nervana/wp-content/uploads/sites/53/2018/04/Intel-Software-Development-Tools-Optimize-Deep-Learning-Performance-for-Healthcare-Imaging-paper-ForDistribution.pdf
3. https://newsroom.intel.com/news/using-deep-neural-network-acceleration-image-analysis-drug-discovery/
4. https://www.intel.com/content/www/us/en/healthcare-it/solutions/documents/deep-learning-to-blindness-prevention-case-study.html
5. https://builders.intel.com/docs/aibuilders/using-ai-in-medical-imaging-to-improve-accuracy-and-efficiency.pdf
6. https://www.intel.com/content/dam/www/public/us/en/documents/case-studies/strengthening-security-with-cybraics-ai-based-analytics-case-study.pdf
7. https://newsroom.intel.com/articles/solve-healthcare-intel-partners-demonstrate-real-uses-artificial-intelligence-healthcare/
8. https://www.intel.com/content/www/us/en/financial-services-it/union-pay-case-study.html
9. link
10. https://zivadynamics.com/sundance2018
11. https://blog.gigaspaces.com/gigaspaces-to-demo-with-intel-at-strata-data-conference-and-microsoft-ignite/

Page 9

The Journey to Production AI

1. Frame Opportunity: define the problem; assess data needs
2. Hypotheses: identify software model needs; categorize existing resources
3. Data Prep: collect data; store data; extract, transform & load (ETL); label data; process data; test data
4. Modeling: experiment with models; develop a proof of concept; train model; measure performance; evaluate system costs
5. Evaluation: increase robustness of the model with additional data; re-train and refine the model
6. Deployment: integrate into workflow; tie inference to workflow decision making; optimize for end-user experience; build organizational and system-level capabilities to scale
7. Iteration: measure utilization; explore the need for additional processing hardware; understand bottlenecks to scaling; assess additional revenue-generation models

TOTAL TIME TO SOLUTION

Source: Intel estimates based on customer data

Page 10

Deep Learning in Practice (defect detection example)

Share of total time to solution across the seven-step development cycle: Frame Opportunity 15%, Hypotheses 15%, Data Preparation 23%, Modeling 15%, Evaluation 15%, Deployment 8%, Iteration 8% (then ongoing, ∞).

Data preparation (label data, load data, augment data) and deployment (source data, inferencing, inference within the broader application; support inference inputs) are labor-intensive; model training (experiment with topologies, tune hyper-parameters, document results) is compute-intensive.

TOTAL TIME TO SOLUTION

Source: Intel estimates based on customer data

Page 11

† Formerly the Intel® Computer Vision SDK. *Other names and brands may be claimed as the property of others. Developer personas shown represent the primary user base for each row, but are not mutually exclusive. All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.

SOFTWARE: portfolio of software tools to accelerate time to solution

TOOLKITS (App Developers)

- Deep Learning Deployment: OpenVINO™† (Open Visual Inference & Neural Network Optimization toolkit for inference deployment on CPU/GPU/FPGA for TF, Caffe* & MXNet*) and Intel® Movidius™ SDK (optimized inference deployment on Intel VPUs for TensorFlow* & Caffe*)
- Deep Learning: Intel® Deep Learning Studio‡ (open-source tool to compress the deep learning development cycle)
- Reasoning: Intel® Saffron™ AI (cognitive solutions on CPU for anti-money laundering, predictive maintenance, and more)

LIBRARIES (Data Scientists)

- Deep Learning Frameworks, now optimized for CPU or with optimizations in progress: TensorFlow*, MXNet*, Caffe*, BigDL*, Caffe2*, PyTorch*, CNTK*, PaddlePaddle*
- Machine Learning Libraries: Python (Scikit-learn, Pandas, NumPy), R (Cart, RandomForest, e1071), Distributed (MLlib on Spark, Mahout)

FOUNDATION (Library Developers)

- Analytics, Machine & Deep Learning Primitives: Python (Intel distribution optimized for machine learning), DAAL (Intel® Data Analytics Acceleration Library, including machine learning), MKL-DNN and clDNN (open-source deep neural network functions for CPU and integrated graphics)
- Deep Learning Graph Compiler: Intel® nGraph™ Compiler (Alpha), an open-sourced compiler for deep learning model computations optimized for multiple devices (CPU, GPU, NNP) from multiple frameworks (TF, MXNet, ONNX)

Page 12

All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.

AI HARDWARE: multi-purpose to purpose-built AI compute for data center to edge, covering deep learning training and inference, from mainstream ("most other" AI workloads) to intensive use, and including the Intel GNA (IP) block.

Page 13

Deep Learning in Data Centers

Page 14

Data Centers: Flexibility and Scalability

Data center priorities:
- Efficient asset utilization: re-provision resources when AI developers do not need system access. (Chart: demand load across Facebook's fleet over a 24-hour period on 19 September 2017 [2])
- Performance at scale: provide access to multiple nodes through scalable performance when compute needs come in. (Chart: AWS on-demand pricing trends for the p2.16xlarge instance [4], spiking around the NIPS deadline)
- Achieving PUE close to 1

Source: Facebook, Inc. paper, "Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective"; The Register®, https://www.theregister.co.uk/2017/05/22/cloud_providers_ai_researchers/; AWS pricing analysis, screen capture 23 May 2017

Page 15

The abundance of readily-available CPU capacity makes it a useful platform for both training and inference. This is especially true during the off-peak portions of the diurnal cycle where CPU resources would otherwise sit idle.

CPU for Inference & Training at Facebook

Source: https://research.fb.com/wp-content/uploads/2017/12/hpca-2018-facebook.pdf

Service               | Models   | Training resource | Training frequency | Training duration | Inference resource
Ranking algorithm     | MLP      | CPU               | Daily              | Many hours        | CPU
Photo tagging         | SVM, CNN | GPU & CPU         | Every N photos     | Few seconds       | CPU
Photo text generation | CNN      | GPU               | Multi-monthly      | Many hours        | CPU
Search                | MLP      | Depends           | Hourly             | Few hours         | CPU
Language translation  | RNN      | GPU               | Weekly             | Days              | CPU
Spam flagging         | GBDT     | CPU               | Sub-daily          | Few hours         | CPU
Speech recognition    | RNN      | GPU               | Weekly             | Many hours        | CPU

Page 16

Deep Learning Scenario: Best-Effort Training

94% scaling efficiency on TensorFlow¹ distributed training, with high (74% top-1) accuracy

https://cdn.oreillystatic.com/en/assets/1/event/269/Automatic%203D%20MRI%20knee%20damage%20classification%20with%203D%20CNN%20using%20BigDL%20on%20Spark%20Presentation.pdf

1 - Configuration Details 68. Performance results are based on testing as of April 30th 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance

Page 17

Deep Learning Scenario: Dedicated Training

https://newsroom.intel.com/news/using-deep-neural-network-acceleration-image-analysis-drug-discovery/

43% higher Caffe ResNet-50 training throughput¹ since launch in July 2017

1 - Configuration Details 14, 69. Performance results are based on testing as of July 30th 2018 (1.43X) and May 25th 2018 (Novartis) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance

Page 18

Deep Learning Scenario: Batch Inference

14X higher inference results on NMT with the latest MKL-DNN, compared to without MKL¹

https://simplecore.intel.com/nervana/wp-content/uploads/sites/53/2018/04/Intel-Software-Development-Tools-Optimize-Deep-Learning-Performance-for-Healthcare-Imaging-paper-ForDistribution.pdf

1 - Configuration Details 34, 35. Performance results are based on testing as of April 18th 2018 (NMT) and June 15th 2018 (GE result remeasured) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance

Page 19

Deep Learning Scenario: Stream Inference

https://arxiv.org/pdf/1705.07538.pdf
https://dawn.cs.stanford.edu/

"For ImageNet inference, Intel submitted the best result in both cost and latency. Using an Intel-optimized version of Caffe on high-performance AWS instances, they reduced per-image latency to 9.96 milliseconds and processed 10,000 images for $0.02."

Top-3 best inference results from Intel on DAWNBench, measured in a few milliseconds¹

https://blog.gigaspaces.com/gigaspaces-to-demo-with-intel-at-strata-data-conference-and-microsoft-ignite/

1 - Configuration Details 38, 39. Performance results are based on testing as of April 20th 2018 (DAWNBench) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance

Page 20

Intel AI Focus Pillars

Page 21

Focus Pillars

- Maximize Performance: through continuous software optimizations to libraries and frameworks
- Innovate Hardware Solutions: by architecting innovative solutions to improve underlying hardware capabilities for AI
- Accelerate Deployments: speed up customer deployments through turnkey solutions for AI applications
- Ecosystem Enablement: partner with customers & developers on their AI journey to develop end-to-end solutions from edge to cloud

Page 22

Intel AI Focus Pillars: Maximize Performance

Page 23

Intel® Xeon® Processor AI Performance

Inference throughput in images/second on Intel Caffe ResNet-50 (higher is better):

Date              | Platform                                         | Precision | Batch              | Images/sec
July 11, 2017     | 2S Intel® Xeon® processor E5-2699 v4             | FP32      | BS=64              | 131
July 11, 2017     | 2S Intel® Xeon® Scalable Platinum 8180 processor | FP32      | BS=64              | 226
December 14, 2017 | 2S Intel® Xeon® Scalable Platinum 8180 processor | FP32      | BS=64              | 453
January 19, 2018  | 2S Intel® Xeon® Scalable Platinum 8180 processor | FP32      | BS=64              | 652
April 3, 2018     | 2S Intel® Xeon® Scalable Platinum 8180 processor | Int8      | BS=64              | 800
May 8, 2018       | 2S Intel® Xeon® Scalable Platinum 8180 processor | Int8      | BS=64              | 1110
May 8, 2018       | 2S Intel® Xeon® Scalable Platinum 8180 processor | Int8      | BS=64, 4 instances | 1225

5.4X(1) in the 13 months since the Intel® Xeon® Scalable processor launch

(1) Up to 5.4X performance improvement with software optimizations on Caffe ResNet-50 in 10 months with a 2-socket Intel® Xeon® Scalable processor. Configuration Details 1, 14, 15, 41. Performance results are based on testing from July 11th 2017 to May 8th 2018 as shown above and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance Source: Intel measured as of April 2018.

Page 24

Use optimized MKL-DNN library + use optimized frameworks + enable multiple streams + scale out to multiple nodes

Page 25

Optimized Software: MKL-DNN Library. Layer fusion example (an illustrative sketch follows).
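The idea behind layer fusion, in a minimal NumPy sketch (illustrative only, not MKL-DNN's actual kernels): fusing a convolution with its following ReLU avoids writing the intermediate tensor to memory and reading it back in a second pass.

import numpy as np

def conv1d(x, f):
    # Direct 1-D valid convolution (correlation form, as DL frameworks use).
    n, k = len(x), len(f)
    return np.array([np.dot(x[i:i + k], f) for i in range(n - k + 1)])

def conv_then_relu(x, f):
    y = conv1d(x, f)          # intermediate tensor written to memory
    return np.maximum(y, 0)   # second pass reads it back

def fused_conv_relu(x, f):
    # ReLU applied while each output element is still "in registers".
    n, k = len(x), len(f)
    return np.array([max(np.dot(x[i:i + k], f), 0.0)
                     for i in range(n - k + 1)])

x, f = np.random.randn(10), np.random.randn(3)
assert np.allclose(conv_then_relu(x, f), fused_conv_relu(x, f))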

Page 26

Optimized Software: MKL-DNN Library. Winograd algorithm to improve matrix-multiply performance.

A 1-D convolution of inputs [i0, i1, i2, i3] with filter [F0, F1, F2] produces:

Y0 = i0*F0 + i1*F1 + i2*F2
Y1 = i1*F0 + i2*F1 + i3*F2

Normal matrix multiply: 6 multiplications.

Winograd matrix multiply: 4 multiplications.

X0 = (i0 - i2) * F0
X1 = (i1 + i2) * (F0 + F1 + F2) / 2
X2 = (i2 - i1) * (F0 - F1 + F2) / 2
X3 = (i1 - i3) * F2

Y0 = X0 + X1 + X2
Y1 = X1 - X2 - X3

Compute operations decrease; memory accesses increase.
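A quick NumPy check of the F(2,3) Winograd identity above (a sketch under the slide's scalar notation; MKL-DNN's real kernels tile this transform across 2-D convolutions):

import numpy as np

def direct(i, F):
    # Sliding 3-tap filter over 4 inputs: 6 multiplications.
    return np.array([i[0]*F[0] + i[1]*F[1] + i[2]*F[2],
                     i[1]*F[0] + i[2]*F[1] + i[3]*F[2]])

def winograd(i, F):
    # Same two outputs with only 4 multiplications.
    X0 = (i[0] - i[2]) * F[0]
    X1 = (i[1] + i[2]) * (F[0] + F[1] + F[2]) / 2
    X2 = (i[2] - i[1]) * (F[0] - F[1] + F[2]) / 2
    X3 = (i[1] - i[3]) * F[2]
    return np.array([X0 + X1 + X2, X1 - X2 - X3])

i, F = np.random.randn(4), np.random.randn(3)
assert np.allclose(direct(i, F), winograd(i, F))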

Page 27

Use optimized MKL-DNN library + use optimized frameworks + enable multiple streams + scale out to multiple nodes

Page 28

Framework Optimizations

It is important to use optimized software frameworks and libraries for the best AI workload performance.

Example, load balancing: TensorFlow graphs offer opportunities for parallel execution. Threading model (a runnable sketch follows the list):

1. inter_op_parallelism_threads: maximum number of operators that can be executed in parallel
2. intra_op_parallelism_threads: maximum number of threads to use for executing an operator
3. OMP_NUM_THREADS: the MKL-DNN equivalent of intra_op_parallelism_threads
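A minimal sketch of these knobs using the TensorFlow 1.x API of the period (the thread counts are illustrative placeholders; tune them to your core count):

import os
import tensorflow as tf

os.environ["OMP_NUM_THREADS"] = "28"  # MKL-DNN threads per process
os.environ["KMP_BLOCKTIME"] = "0"     # OpenMP thread wait time after work

config = tf.ConfigProto(
    inter_op_parallelism_threads=2,    # independent operators in parallel
    intra_op_parallelism_threads=28)   # threads inside a single operator

with tf.Session(config=config) as sess:
    pass  # build and run the graph as usual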

Page 29

Use optimized MKL-DNN library + use optimized frameworks + enable multiple streams + scale out to multiple nodes

Page 30

Multiple Inference Streams

Recommendation: use multiple framework instances, each pinned to a separate NUMA domain, with each instance processing a separate inference stream. For example, on a 2-socket system: Inference Streams 0 through 3, each with batch size X, pinned to one quarter of socket 0; Inference Streams 4 through 7, each with batch size X, pinned to one quarter of socket 1. These are run-time optimizations and need no framework code change (a hypothetical launcher sketch follows).

Best Known Methods: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations/
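A hypothetical launcher for the layout above (the 28-cores-per-socket count and the infer.py script name are assumptions for illustration; numactl must be installed). It pins eight framework instances, one per quarter-socket:

import subprocess

CORES_PER_SOCKET = 28            # assumption; check lscpu on your system
QUARTER = CORES_PER_SOCKET // 4
BATCH = 64                       # stands in for the slide's BS=X

procs = []
for stream in range(8):
    socket = stream // 4         # streams 0-3 on socket 0, 4-7 on socket 1
    start = socket * CORES_PER_SOCKET + (stream % 4) * QUARTER
    cores = "%d-%d" % (start, start + QUARTER - 1)
    procs.append(subprocess.Popen(
        ["numactl", "--physcpubind=" + cores, "--membind=%d" % socket,
         "python", "infer.py", "--batch_size", str(BATCH)]))

for p in procs:
    p.wait()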

Page 31

Use optimized MKL-DNN library + use optimized frameworks + enable multiple streams + scale out to multiple nodes

Page 32

Training: Multiple Workers per Socket, Scale-Out

Single node: each framework instance is pinned to a separate NUMA domain, with each CPU running one or more workers per node; an optimized MPI library performs gradient updates over shared memory. For example, four workers on a 2-socket, 28-core-per-socket system: Worker-0 and Worker-2 on 14 cores each (half of socket 0), Worker-1 and Worker-3 on 14 cores each (half of socket 1).

Scale-out: distributed deep learning training across multiple nodes over an interconnect fabric (Intel® OPA or Ethernet), each node running multiple workers; an optimized MPI library performs gradient updates over the network fabric.

- Caffe: use the optimized Intel® MPI-based Machine Learning Scaling Library (ML-SL)
- TensorFlow: Uber Horovod MPI library (a minimal worker sketch follows)

Optimizations at run time, without framework code change. Intel Best Known Methods: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations/
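A minimal Horovod-with-TensorFlow worker along the lines the slide describes (a sketch, not the benchmark script itself; the toy least-squares model and hyper-parameters are placeholders). Launch under MPI, e.g. mpirun -n 4 python train_hvd.py, with each rank pinned as in the diagram above:

import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()  # one MPI process per worker

# Toy model standing in for a real network.
x = tf.random_normal([32, 10])
w = tf.Variable(tf.zeros([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - 1.0))

global_step = tf.train.get_or_create_global_step()
opt = tf.train.MomentumOptimizer(0.01 * hvd.size(), momentum=0.9)
opt = hvd.DistributedOptimizer(opt)  # ring-allreduce of gradients over MPI
train_op = opt.minimize(loss, global_step=global_step)

hooks = [hvd.BroadcastGlobalVariablesHook(0),   # sync initial weights
         tf.train.StopAtStepHook(last_step=200)]
with tf.train.MonitoredTrainingSession(hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)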

Page 33

Optimized frameworks + optimized Intel® MKL libraries + multiple streams:

3X higher inference performance by using the MKL-DNN libraries with an optimized framework: MXNet on an Amazon* C5 instance (Intel® Xeon® Scalable processor) running NMT¹ (German to English), with and without the MKL-DNN libraries.

A further 5X higher inference performance by using multiple streams: the same MXNet/C5 NMT¹ setup with the MKL-DNN libraries, comparing with and without multiple streams.

Configuration Details 34, 35. Performance results are based on testing as of April 18th 2018 (NMT) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance

Page 34

Intel AI Focus Pillars: Innovate Hardware Solutions

Page 35

Training Enhancements: Embedded Acceleration with AVX-512

Intel® AVX-512 instructions bring embedded acceleration for AI (FP32 training) on Intel® Xeon® Scalable processors.

Page 36

Inference Enhancements: Vector Neural Network Instructions (VNNI)

Low-precision (INT8) instructions bring embedded acceleration for AI on Intel® Xeon® Scalable processors: today an INT8 multiply-accumulate uses the three-instruction sequence vpmaddubsw, vpmaddwd, vpaddd; the future VNNI instruction vpdpbusd fuses it into one.
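A NumPy sketch of what the fused vpdpbusd computes, semantics only (the lane layout and data here are illustrative, not a performance model): unsigned 8-bit activations times signed 8-bit weights, with each group of four products accumulated into a 32-bit integer in one step instead of the three-instruction sequence.

import numpy as np

def vpdpbusd(acc, a_u8, w_s8):
    # acc: int32 accumulators; a_u8/w_s8: 4 bytes per accumulator lane.
    prod = a_u8.astype(np.int32) * w_s8.astype(np.int32)
    return acc + prod.reshape(-1, 4).sum(axis=1)

a = np.random.randint(0, 256, 64, dtype=np.uint8)    # activations
w = np.random.randint(-128, 128, 64, dtype=np.int8)  # weights
acc = np.zeros(16, dtype=np.int32)
acc = vpdpbusd(acc, a, w)  # 16 lanes x 4 products each, one fused step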

Page 37

Intel AI Focus Pillars: Accelerate Deployments

Page 38

Deploying TensorFlow with Singularity

1. Install Singularity on the infrastructure nodes (as root/sudo): https://singularity.lbl.gov/install-linux
2. Build a TensorFlow Singularity image, tf_singularity.simg, comprising a Linux OS, optimized TensorFlow*, the Horovod communication MPI library, and the TensorFlow application.
3. Run the application:

singularity exec tf_singularity.simg \
  python /<path-to-benchmarks>/tf_cnn_benchmarks/tf_cnn_benchmarks.py \
  --model resnet50 --batch_size 64 --data_format NCHW --num_batches 1000 \
  --distortions=True --mkl=True --device cpu \
  --num_intra_threads $OMP_NUM_THREADS --num_inter_threads 2 --kmp_blocktime=0

* Other names and brands may be claimed as the property of others

Page 39

Intel AI Focus Pillars: Ecosystem Enablement

Page 40

AI Ecosystem & Research

100+ university engagements. Establish thought leadership, innovate on Intel architecture, synergize Intel product & research.

- Computational Intelligence: heterogeneous architectures for adaptive and always-learning devices with NLP and conversational-understanding capabilities and visual applications
- Deep Learning Architecture: deep learning hardware and software advancements, scaling to very large clusters and new applications
- Experiential Computing: 3D scene understanding using DL-based analysis of large video databases; computer vision in the cloud to enable effective data mining of large collections of video
- Visual Cloud Systems: large-scale systems research for scaling out visual applications and data; large-scale video analysis
- Approximate Computing: always-on audio-visual multi-modal interaction; self-configuring audio-visual hierarchical sensing through an approximate-computing data path
- Security in Deep Learning: built-in safety mechanisms for widespread mission-critical use; ascertaining confidence and removing anomalous and out-of-distribution samples in autonomous driving, medicine, and security
- Brain Science: create an instrument to connect human behavior to brain function; toolkits for the analysis of brain function; real-time cloud services for neuroimaging analysis; applying neuroscientific insights to AI

Intel invests $1+ billion in the AI ecosystem to fuel adoption and product innovation (1)

(1) https://newsroom.intel.com/editorials/intel-invests-1-billion-ai-ecosystem-fuel-adoption-product-innovation/

Page 41

*Other names and brands may be claimed as the property of others. All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.

Solutions: a partner ecosystem to facilitate AI in finance, health, retail, industrial & more

Intel AI Builders (builders.intel.com/ai/membership): your one-stop shop to find systems, software and solutions using Intel® AI technologies

Reference Solutions (builders.intel.com/ai/solutionslibrary): get a head start using the many case studies, solution briefs and other reference collateral spanning multiple applications

Page 42

Advancing AI Performance with Intel® Xeon® Scalable

Time to solution for production AI
Maximize performance: use optimized SW
Workload flexibility & scalability

Journey to Production AI
Deep Learning in Data Centers
Intel AI Focus Pillars

Develop today! builders.intel.com/ai/devcloud

Page 43

Scan the QR code above to accumulate points. This session qualifies you for three points.

Page 44
Page 45

Advancing AI Performance with Intel® Xeon® Scalable

Time to solution is key for production AI
Maximize performance with optimized SW
Engage, contribute & develop AI on IA

software.intel.com/en-us/ai-academy
builders.intel.com/ai/devcloud
builders.intel.com/ai/solutionslibrary

Page 46
Page 47

Optimized Libraries

Library                                                              | Commercial product version | Open-source version
Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) | -                          | MKL-DNN GitHub
Intel® Math Kernel Library (Intel® MKL)                              | Intel MKL product site     | -
Intel® Data Analytics Acceleration Library (Intel® DAAL)             | Intel DAAL product site    | Intel DAAL GitHub
Intel® Integrated Performance Primitives (Intel® IPP)                | Intel IPP product site     | -

Page 48

Optimized Frameworks

Framework    | How to access the optimized framework
TensorFlow   | Install the Intel-optimized wheel; see the tensorflow.org page for CPU optimization instructions
MXNet        | Intel optimizations in the main branch via an experimental path, available here
Caffe2       | Will upstream to the master branch in Q2
PaddlePaddle | PaddlePaddle master branch
PyTorch      | Intel optimizations available in this branch
Caffe        | Intel-optimized version of Caffe
CNTK         | CNTK master branch

Page 49

Gathering and installing relevant software tools:

1. OpenMPI can be installed via yum on recent versions of CentOS, and some existing clusters already have OpenMPI available. In this blog we use OpenMPI 3.0.0, which can be installed following the instructions at https://www.open-mpi.org/software/ompi/v3.0/
2. A recent GCC version is needed; GCC 6.2 or newer is recommended. For the latest release see https://gcc.gnu.org/
3. Python 2.7 and 3.6 were both tested.
4. Uber Horovod supports running TensorFlow in distributed fashion. Horovod can be installed as a standalone Python package: pip install --no-cache-dir horovod (e.g. horovod-0.11.3). To install Horovod from source, see https://github.com/uber/horovod
5. The current TensorFlow benchmarks have recently been modified to use Horovod. Obtain the benchmark code from GitHub and run tf_cnn_benchmarks.py as explained below:
   git clone https://github.com/tensorflow/benchmarks
   cd benchmarks/scripts/tf_cnn_benchmarks

Running the TensorFlow benchmark using Horovod

Here we discuss the run commands needed to run distributed TensorFlow using the Horovod framework. For the hardware platform we use a dual-socket Intel® Xeon® Gold 6148 processor-based cluster; 10Gb Ethernet is used for networking, though Mellanox InfiniBand or Intel® Omni-Path Architecture (Intel® OPA) can also be used to network the cluster.

Running 2 MPI processes on a single node:

export LD_LIBRARY_PATH=<path to OpenMPI lib>:$LD_LIBRARY_PATH
export PATH=<path to OpenMPI bin>:$PATH
export inter_op=2
export intra_op=18        # cores per socket
export batch_size=64
export MODEL=resnet50     # or inception3
export python_script=     # path to the tf_cnn_benchmarks.py script

mpirun -x LD_LIBRARY_PATH -x OMP_NUM_THREADS -cpus-per-proc 20 --map-by socket --oversubscribe --report-bindings -n 2 python $python_script --mkl=True --forward_only=False --num_batches=200 --kmp_blocktime=0 --num_warmup_batches=50 --num_inter_threads=$inter_op --distortions=False --optimizer=sgd --batch_size=$batch_size --num_intra_threads=$intra_op --data_format=NCHW --model=$MODEL --variable_update horovod --horovod_device cpu --data_dir <path-to-real-dataset> --data_name <dataset_name>

For 1 MPI process per node, the configuration is as follows; the other environment variables stay the same:

export intra_op=38
export batch_size=128

mpirun -x LD_LIBRARY_PATH -x OMP_NUM_THREADS --bind-to none --report-bindings -n 1 python $python_script --mkl=True --forward_only=False --num_batches=200 --kmp_blocktime=0 --num_warmup_batches=50 --num_inter_threads=$inter_op --distortions=False --optimizer=sgd --batch_size=$batch_size --num_intra_threads=$intra_op --data_format=NCHW --model=$MODEL --variable_update horovod --horovod_device cpu --data_dir <path-to-real-dataset> --data_name <dataset_name>

Note that if you want to train models to good accuracy, use --distortions=True; you may also need to change other hyper-parameters.

For running the model on a multi-node cluster we use a very similar run script. For example, to run on 64 nodes (2 MPI processes per node, 128 ranks total), where each node is a dual-socket Intel Xeon Gold 6148 system, distributed training can be launched as shown below; all the export settings are the same as above:

mpirun -x LD_LIBRARY_PATH -x OMP_NUM_THREADS -cpus-per-proc 20 --map-by node --report-bindings -hostfile host_names -n 128 python $python_script --mkl=True --forward_only=False --num_batches=200 --kmp_blocktime=0 --num_warmup_batches=50 --num_inter_threads=$inter_op --distortions=False --optimizer=sgd --batch_size=$batch_size --num_intra_threads=$intra_op --data_format=NCHW --model=$MODEL --variable_update horovod --horovod_device cpu --data_dir <path-to-real-dataset> --data_name <dataset_name>

Here host_names is the file listing the hosts on which you wish to run the workload.

What distributed TensorFlow means for deep learning training on Intel Xeon

Various efforts have implemented distributed TensorFlow on CPU and GPU, for example gRPC, VERBS, and TensorFlow's built-in MPI; all of these technologies are incorporated within the TensorFlow codebase. Uber Horovod is one distributed-TensorFlow technology that has been able to harness the power of Intel Xeon. It uses MPI underneath, with ring-based reduction and gather for the deep learning parameters. As shown above, Horovod on Intel Xeon shows strong scaling for existing DL benchmark models: up to 94% for ResNet-50 and up to 89% for Inception-v3 at 64 nodes. In other words, time to train a DL network can be cut by as much as 57x (ResNet-50) or 58x (Inception-v3) using 64 Xeon nodes compared to a single Xeon node. Intel therefore currently recommends that TensorFlow users use Intel-optimized TensorFlow and Horovod MPI for multi-node training on Intel® Xeon® Scalable processors.

Acknowledgements: the authors would like to thank Vikram Saletore, Mikhail Smorkalov, and Srinivas Sridharan for their collaboration on using Horovod with TensorFlow.

[1] Performance is reported at https://ai.intel.com/amazing-inference-performance-with-intel-xeon-scalable-processors/
[2] The results are reported at https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture
[3] The updated results are at https://ai.intel.com/tensorflow-optimizations-intel-xeon-scalable-processor/
[4] Refer to https://github.com/01org/mkl-dnn for more details on Intel® MKL-DNN optimized primitives
[5] System configuration: CPU: Intel Xeon Gold 6148 CPU @ 2.40GHz; OS: Red Hat Enterprise Linux Server release 7.4 (Maipo); TensorFlow source code: https://github.com/tensorflow/tensorflow; TensorFlow commit ID: 024aecf414941e11eb643e29ceed3e1c47a115ad. Detailed configuration:

Thread(s) per core: 2; Core(s) per socket: 20; Socket(s): 2; NUMA node(s): 2
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz; Stepping: 4
HyperThreading: ON; Turbo: ON
Memory: 192GB (12 x 16GB), 2666 MT/s
Disks: Intel RS3WC080 x 3 (800GB, 1.6TB, 6TB)
BIOS: SE5C620.86B.00.01.0013.030920180427
OS: Red Hat Enterprise Linux Server release 7.4 (Maipo); kernel 3.10.0-693.21.1.0.1.el7.knl1.x86_64

Config 68

Page 50

Skylake AI Configuration Details as of July 11th, 2017
Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via the intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC).
Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance.

Deep Learning Frameworks:

- Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet, and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19), and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l“.

- TensorFlow: (https://github.com/tensorflow/tensorflow), commit id 207203253b6f8ea5e938a512798429f91d5b4e7e. Performance numbers were obtained for three ConvNet benchmarks: AlexNet, GoogLeNet v1, and VGG (https://github.com/soumith/convnet-benchmarks/tree/master/tensorflow) using dummy data. GCC 4.8.5, Intel MKL small libraries version 2018.0.20170425, inter-op parallelism threads set to 1 for the AlexNet and VGG benchmarks and 2 for the GoogLeNet benchmark, intra-op parallelism threads set to 56, data format NCHW, KMP_BLOCKTIME set to 1 for the GoogLeNet and VGG benchmarks and 30 for the AlexNet benchmark. Inference measured with the --forward_only option, training measured with the --forward_backward_only option.

- MxNet: (https://github.com/dmlc/mxnet/), revision 5efd91a71f36fea483e882b0358c8d46b5a7aa20. Dummy data was used. Inference was measured with “benchmark_score.py”, training was measured with a modified version of benchmark_score.py which also runs backward propagation. Topology specs from https://github.com/dmlc/mxnet/tree/master/example/image-classification/symbols. GCC 4.8.5, Intel MKL small libraries version 2018.0.20170425.

- Neon: ZP/MKL_CHWN branch commit id:52bd02acb947a2adabb8a227166a7da5d9123b6d. Dummy data was used. The main.py script was used for benchmarking , in mkl mode. ICC version used : 17.0.3 20170404, Intel MKL small libraries version 2018.0.20170425.

Config 14. Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.

Page 51

Config 69

Framework: Caffe, branch origin/master, version a3d5b022fe026e9092fc7abc7654b1162ab9940d
Instances: 1
Sockets: 2
Platform: SKX_8180
Processor: Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz / 28 cores
BIOS: SE5C620.86B.00.01.0013.030920180427
Enabled cores: 56
Slots: 12
Total memory: 376.46GB
Memory configuration: 12 slots / 32 GB / 2666 MHz
Memory comments: Micron
Disks: sda RS3WC080 HDD 744.1GB, sdb RS3WC080 HDD 1.5TB, sdc RS3WC080 HDD 5.5TB
OS: CentOS Linux 7.3.1611 (Core), kernel 3.10.0-693.11.6.el7.x86_64
Hyper-Threading: ON
Turbo: ON
Topology: resnet_50_v2
Batch size: 64
Dataset: NoDataLayer
Data type: FP32
Engine: MKLDNN, version 464c268e544bae26f9b85a2acb9122c766a4c396
Config for training: measured 8/2/2018, 123.77 imgs/sec

Page 52

Configuration details of Amazon EC2 C5.18xlarge 1-node systems (Config 34)

Benchmark segment: AI/ML
Benchmark type: Inference
Benchmark metric: Sentences/sec
Framework: Official MXNet
Topology: GNMT (Sockeye)
# of nodes: 1
Platform: Amazon EC2 C5.18xlarge instance
Sockets: 2S
Processor: Intel® Xeon® Platinum 8124M CPU @ 3.00GHz (Skylake)
BIOS: N/A
Enabled cores: 18 cores/socket
Slots: N/A
Total memory: 144GB
Memory configuration: N/A
SSD: EBS-optimized, 200GB provisioned-IOPS SSD
OS: Red Hat 7.2 (HVM); Amazon Elastic Network Adapter (ENA), up to 10 Gbps of aggregate network bandwidth
Network configuration: installed enhanced networking with ENA on CentOS; placed all instances in the same placement group
HT: ON
Turbo: ON
Computer type: Server

Amazon security patches: https://alas.aws.amazon.com/ALAS-2018-939.html https://alas.aws.amazon.com/ALAS-2018-1039.html

Page 53

Configuration details of Amazon EC2 C5.18xlarge 1-node systems (Config 35)

Framework version: MXNet MKL-DNN, https://github.com/apache/incubator-mxnet/ commit 4950f6649e329b23a1efdc40aaa25260d47b4195
Topology version: GNMT, https://github.com/awslabs/sockeye/tree/master/tutorials/wmt
Batch size: GNMT: 1, 2, 8, 16, 32, 64, 128
Dataset, version: GNMT: WMT 2017 (http://data.statmt.org/wmt17/translation-task/preprocessed/)
MKL-DNN: f5218ff4fd2d16d13aada2e632afd18f2514fee3
MKL version: parallel_studio_xe_2018_update1, http://registrationcenterdownload.intel.com/akdlm/irc_nas/tec/12374/parallel_studio_xe_2018_update1_cluster_edition_online.tgz
Compiler: g++ 4.8.5; gcc 7.2.1

Amazon security patches: https://alas.aws.amazon.com/ALAS-2018-939.html https://alas.aws.amazon.com/ALAS-2018-1039.html

Page 54

DAWNBench Configurations (Config 38)

EC2 instance | Machine type                                 | vCPUs | Memory (GiB) | Disk (GB) | EBS bandwidth (Mbps) | Ethernet (Gigabit) | Price ($/hour)
C5.2xlarge   | SKX 8124M, 4 cores, 3 GHz base frequency     | 8     | 16           | 128       | Up to 2,250          | Up to 10           | 0.34
C5.4xlarge   | SKX 8124M, 8 cores, 3 GHz base frequency     | 16    | 32           | 128       | 2,250                | Up to 10           | 0.68
C5.18xlarge  | SKX 8124M, 2S x 18 cores, 3 GHz base freq.   | 72    | 144          | 128       | 9,000                | 25                 | 3.06

Amazon security patches: https://alas.aws.amazon.com/ALAS-2018-939.html https://alas.aws.amazon.com/ALAS-2018-1039.html

Page 55

DAWNBench IntelCaffe Inference Topology (Config 39)

- Based on an FP32 ResNet-50 topology with 93.3% accuracy
- Inference: INT8 ResNet-50 with 15% pruning, 53% performance gain over FP32
- Pure INT8 except for the first convolution layer: <0.2% accuracy drop, 43% performance gain over FP32
- 15% of filters pruned according to KL distance: <0.1% accuracy drop, 15% performance gain over FP32

Amazon security patches: https://alas.aws.amazon.com/ALAS-2018-939.html https://alas.aws.amazon.com/ALAS-2018-1039.html

Page 56

Configuration details (December 14th, 2017):
Benchmark segment: AI/ML
Benchmark type: Training/Inference
Benchmark metric: Images/sec or time to train (seconds)
Framework: Caffe
# of nodes: 1
Platform: Purley
Sockets: 2S
Processor: Xeon Platinum, 205 W, 28 cores, 2.5 GHz
BIOS: SE5C620.86B.01.00.0470.040720170855
Enabled cores: 28
Platform slots: 12
Total memory: 192 GB (96 GB per socket)
Memory configuration: 6 slots / 16 GB / 2666 MT/s DDR4 RDIMMs
Memory comments: Micron (18ASF2G72PDZ-2G6D1)
SSD: Intel SSDSC2KW48, 480 GB
OS: RHEL Server release 7.2 (Maipo), Linux kernel 3.10.0-327.el7.x86_64
HT: ON
Turbo: ON
Computer type: Server
Framework version: http://github.com/intel/caffe/, commit c7ed32772affaf1d9951e2a93d986d22a8d14b88 (release_1.0.6)
Topology version, batch size: best (resnet -- 50, gnet_v3 -- 224, ssd -- 224)
Dataset: ImageNet / DummyData layer
Performance command: inference measured with "caffe time --forward_only -phase TEST"; training measured with "caffe train".
Data setup: DummyData layer (generates dummy images on the fly)
Compiler: Intel C++ compiler ver. 17.0.5 20171101
MKL library version: Intel MKL small libraries version 2018.0.1.20171007
MKL-DNN library version: -
Performance measurement knobs: environment variables KMP_AFFINITY='granularity=fine,compact,1,0', OMP_NUM_THREADS=28; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance
Memory knobs: Caffe run with "numactl -l".

Config 1
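Taken together, the Config 1 knobs amount to pinning OpenMP threads, fixing the CPU frequency range, and running Caffe under local NUMA memory binding. A minimal sketch of that sequence follows; the commands are the ones listed above, while the prototxt file name is hypothetical.

    import os
    import subprocess

    env = dict(os.environ,
               KMP_AFFINITY="granularity=fine,compact,1,0",
               OMP_NUM_THREADS="28")

    # Fix the frequency range and governor, as listed above (requires root).
    subprocess.run(["cpupower", "frequency-set", "-d", "2.5G", "-u", "3.8G",
                    "-g", "performance"], check=True)

    # Inference timing, bound to local NUMA memory ("numactl -l").
    subprocess.run(["numactl", "-l", "caffe", "time",
                    "--forward_only", "-phase", "TEST",
                    "--model", "resnet50.prototxt"],  # hypothetical model file
                   env=env, check=True)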



Broadwell AI configuration details (as of July 11th, 2017). Platform: 2S Intel® Xeon® CPU E5-2699 v4 @ 2.20GHz (22 cores), HT enabled, turbo disabled, scaling governor set to "performance" via the acpi-cpufreq driver, 256 GB DDR4-2133 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3500 Series (480 GB, 2.5in SATA 6Gb/s, 20nm, MLC). Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact,1,0', OMP_NUM_THREADS=44; CPU frequency set with cpupower frequency-set -d 2.2G -u 2.2G -g performance.

Deep Learning Frameworks:

- Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet, and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19), and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use newer Caffe prototxt format but are functionally equivalent). GCC 4.8.5, Intel MKL small libraries version 2017.0.2.20170110.

- TensorFlow: (https://github.com/tensorflow/tensorflow), commit id 207203253b6f8ea5e938a512798429f91d5b4e7e. Performance numbers were obtained for three convnet benchmarks: alexnet, googlenet v1, and vgg (https://github.com/soumith/convnet-benchmarks/tree/master/tensorflow) using dummy data. GCC 4.8.5, Intel MKL small libraries version 2018.0.20170425; inter-op parallelism threads set to 1 for the alexnet and vgg benchmarks and 2 for the googlenet benchmark; intra-op parallelism threads set to 44; data format NCHW; KMP_BLOCKTIME set to 1 for the googlenet and vgg benchmarks and 30 for the alexnet benchmark. Inference measured with the --forward_only option, training measured with the --forward_backward_only option.

- MxNet: (https://github.com/dmlc/mxnet/), revision e9f281a27584cdb78db8ce6b66e648b3dbc10d37. Dummy data was used. Inference was measured with "benchmark_score.py"; training was measured with a modified version of benchmark_score.py which also runs backward propagation (see the invocation sketch after this list). Topology specs from https://github.com/dmlc/mxnet/tree/master/example/image-classification/symbols. GCC 4.8.5, Intel MKL small libraries version 2017.0.2.20170110.

- Neon: ZP/MKL_CHWN branch, commit id 52bd02acb947a2adabb8a227166a7da5d9123b6d. Dummy data was used. The main.py script was used for benchmarking, in mkl mode. ICC version 17.0.3 20170404, Intel MKL small libraries version 2018.0.20170425.
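For the MxNet rows, inference throughput comes from the stock benchmark_score.py in the repository pinned above. A minimal sketch of the invocation, assuming an MKL-enabled MXNet build is already installed and the script runs with its default arguments:

    import subprocess

    # Check out the exact revision cited above; the local directory name is arbitrary.
    subprocess.run(["git", "clone", "https://github.com/dmlc/mxnet/", "mxnet"], check=True)
    subprocess.run(["git", "-C", "mxnet", "checkout",
                    "e9f281a27584cdb78db8ce6b66e648b3dbc10d37"], check=True)

    # Score the bundled topologies (building MXNet itself is a separate step, omitted).
    subprocess.run(["python", "mxnet/example/image-classification/benchmark_score.py"],
                   check=True)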

Config 15

Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.



Skylake AI configuration details (as of July 11th, 2017). Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via the intel_pstate driver, 384 GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800 GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance.

Deep Learning Frameworks:

- Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet, and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19), and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l“.

- TensorFlow: (https://github.com/tensorflow/tensorflow), commit id 207203253b6f8ea5e938a512798429f91d5b4e7e. Performance numbers were obtained for three convnet benchmarks: alexnet, googlenet v1, and vgg (https://github.com/soumith/convnet-benchmarks/tree/master/tensorflow) using dummy data. GCC 4.8.5, Intel MKL small libraries version 2018.0.20170425; inter-op parallelism threads set to 1 for the alexnet and vgg benchmarks and 2 for the googlenet benchmark; intra-op parallelism threads set to 56; data format NCHW; KMP_BLOCKTIME set to 1 for the googlenet and vgg benchmarks and 30 for the alexnet benchmark. Inference measured with the --forward_only option, training measured with the --forward_backward_only option. A sketch of these parallelism settings follows this list.

- MxNet: (https://github.com/dmlc/mxnet/), revision 5efd91a71f36fea483e882b0358c8d46b5a7aa20. Dummy data was used. Inference was measured with “benchmark_score.py”, training was measured with a modified version of benchmark_score.py which also runs backward propagation. Topology specs from https://github.com/dmlc/mxnet/tree/master/example/image-classification/symbols. GCC 4.8.5, Intel MKL small libraries version 2018.0.20170425.

- Neon: ZP/MKL_CHWN branch, commit id 52bd02acb947a2adabb8a227166a7da5d9123b6d. Dummy data was used. The main.py script was used for benchmarking, in mkl mode. ICC version 17.0.3 20170404, Intel MKL small libraries version 2018.0.20170425.
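The inter-op/intra-op thread counts quoted in the TensorFlow entry map onto the TF 1.x session configuration. A minimal sketch of how those knobs are applied, using the TF 1.x API contemporary with the commit above; the tiny graph is only a placeholder.

    import os
    os.environ["KMP_BLOCKTIME"] = "1"  # the googlenet/vgg setting listed above

    import tensorflow as tf  # TensorFlow 1.x API

    config = tf.ConfigProto(inter_op_parallelism_threads=2,   # googlenet setting
                            intra_op_parallelism_threads=56)  # one thread per core

    with tf.Session(config=config) as sess:
        x = tf.random_normal([64, 3, 224, 224])  # NCHW placeholder batch
        y = tf.layers.conv2d(x, filters=64, kernel_size=7,
                             data_format="channels_first")
        sess.run(y)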

Config 14

Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.


Config details for Skylake and Broadwell inference throughput (data as of January 19th, 2018). Benchmark metric: images/s; frameworks: Caffe, TensorFlow, and MxNet on each platform.

Hardware and OS:

Item | Broadwell (2699v4) | Skylake (8180)
Sockets | 2 | 2
Processor | Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz | Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
BIOS | SE5C610.86B.01.01.1016.051620161452 | SE5C620.86B.0X.01.0017.070520171742
Enabled cores | 22 | 28
Slots | 8 / 24 | 12 / 24
Total memory | 128 GiB | 384 GiB
Memory configuration | 8 slots / 16 GiB / DDR4 / rank 2 / 2134 MHz | 12 slots / 32 GiB / rank 2 / 2666 MHz
Memory comments | Kingston | Micron
SSD | 1000 GB SSD + 240 GB SSD | 2000 GB SSD
OS | CentOS Linux 7.3.1611 (Core) | CentOS Linux 7.3.1611 (Core)
HT | OFF | ON
Turbo | ON | ON
Computer type | Server | Server

Software (identical on both platforms): topology version and batch size as in table; dataset: no data layer; compiler: gcc 4.8.5; MKL library: mklml_lnx_2018.0.20170908; MKL-DNN library: fbd4d7b0e82fef1dc6d06a361c65467045fd231c.

Per-framework settings:

Caffe: version 16f8c2b7ebdd6a9bbd45ae6d151927a965a6771d. Performance command: caffe time --model [model].prototxt -iterations 100 -engine=MKL2017/MKLDNN -forward_only. Knobs: OMP_NUM_THREADS=44. Memory knobs: numactl -l.

TensorFlow: version a9ec703107df4300e900c0203b07708d94e3cf1a. Performance command: run_single_node_benchmark.py -m [topology] -c [server] -b [batch]. Knobs: KMP_BLOCKTIME=1, KMP_SETTINGS=1, KMP_AFFINITY=granularity=fine,verbose,compact,1,0. Memory knobs: numactl -m 1.

MxNet: version 0568d7ac3d961e6de3c73d3632c2d63a892f46a0. Performance command: python [topology]_perf.py --num-epochs 2 --data-train /data/MXNet/imagenet10/imagenet10_train.rec --data-val --forward 1. Knobs: OMP_NUM_THREADS=44, KMP_AFFINITY=granularity=fine,compact,1,0.

Config 16

Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.
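The caffe time command above reports average per-iteration latency in milliseconds over the 100 iterations; the images/s metric follows from the batch size. A worked conversion, with a placeholder latency value:

    # images/s = batch_size * 1000 / average_forward_ms
    batch_size = 64          # from the topology/batch settings in the table
    avg_forward_ms = 85.0    # placeholder: the 'Average Forward pass' caffe time reports

    images_per_second = batch_size * 1000.0 / avg_forward_ms
    print(f"{images_per_second:.1f} images/s")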


Config details for Skylake training throughput (March 2018):

Common hardware (all twelve runs):
Platform: SKX_8180, 2 sockets; processor: Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz / 28 cores (56 enabled cores total)
BIOS: SE5C620.86B.00.01.0004.071220170215
Memory: 12 slots / 32 GB / 2666 MHz (Micron), ~376 GB total
Disks: sda RS3WC080 HDD 744.1 GB, sdb RS3WC080 HDD 1.5 TB, sdc RS3WC080 HDD 5.5 TB
OS: CentOS Linux 7.3.1611 (Core), except the Neon run on Ubuntu 14.04 (trusty)
Hyper-Threading: ON; Turbo: ON

Framework versions (all branch: master):
Caffe: f6d01efbe93f70726ea3796a4b89c612365a6341; MKLDNN engine version ae00102be506ed0fe2099c6557df2aa88ad57ec1
TensorFlow: c6a12c77a50778e28de3590f4618bc2b62f3ecab (resnet_50_v1 and inception_v3 runs) and 926fc13f7378d14fa7980963c4fe774e5922e336 (vgg16 runs); MKLDNN engine version e0bfcaa7fcb2b1e1558f5f0676933c1db807a729
Neon: f43cfa2e26f9c84b0f42fcda50b6b83104623223; MKL2017 engine, version mklml_lnx_2018.0.1.20171227

Per-run settings:

Framework | Topology | Batch size | Dataset | Engine | Kernel
caffe | resnet_50_v1 | 64 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
caffe | resnet_50_v1 | 128 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
caffe | vgg16 | 64 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
caffe | vgg16 | 128 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
caffe | inception_v3 | 32 | NoDataLayer | MKLDNN | 4.4.0-109-generic
tensorflow | resnet_50_v1 | 64 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
tensorflow | resnet_50_v1 | 128 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
tensorflow | vgg16 | 64 | Imagenet | MKLDNN | 4.4.0-109-generic
tensorflow | vgg16 | 128 | NoDataLayer | MKLDNN | 4.4.0-109-generic
tensorflow | inception_v3 | 32 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
tensorflow | inception_v3 | 96 | NoDataLayer | MKLDNN | 3.10.0-693.11.6.el7.x86_64
neon | resnet_50_v2 | 64 | NoDataLayer | MKL2017 | 3.10.0-693.11.6.el7.x86_64

IP: 172.18.0.2 (first four Caffe runs), 172.17.0.2 (remaining Caffe and TensorFlow runs), 172.17.0.4 (Neon run).

Config 31
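The twelve runs above differ only in framework, topology, and batch size, so a small driver loop makes the structure explicit. This is a hypothetical harness, not Intel's tooling; only the caffe command shape is taken from the configs above, and the prototxt file names are assumptions.

    import subprocess

    # (topology, batch size) pairs mirroring the Caffe rows of the table above.
    CAFFE_RUNS = [
        ("resnet_50_v1", 64), ("resnet_50_v1", 128),
        ("vgg16", 64), ("vgg16", 128),
        ("inception_v3", 32),
    ]

    for topology, batch in CAFFE_RUNS:
        # caffe time runs forward and backward passes by default, which is the
        # training-throughput measurement; the prototxt is assumed to encode
        # the batch size, so the file name here is hypothetical.
        model = f"{topology}_bs{batch}.prototxt"
        subprocess.run(["caffe", "time", "--model", model,
                        "-iterations", "100", "-engine", "MKLDNN"], check=True)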


Configuration details (May 8th, 2018):
Benchmark segment: AI/ML
Benchmark type: Training/Inference
Benchmark metric: Images/sec or time to train (seconds)
Framework: Caffe (branch origin/master, version a3d5b022fe026e9092fc7abc7654b1162ab9940d)
MKL-DNN version: 464c268e544bae26f9b85a2acb9122c766a4c396
# of nodes: 1
Platform: Purley
Sockets: 2S
Processor: Xeon Platinum, 205 W, 28 cores, 2.5 GHz
BIOS: SE5C620.86B.01.00.0470.040720170855
Enabled cores: 28
Platform slots: 12
Total memory: 192 GB (96 GB per socket)
Memory configuration: 6 slots / 16 GB / 2666 MT/s DDR4 RDIMMs
Memory comments: Micron (18ASF2G72PDZ-2G6D1)
SSD: Intel SSDSC2KW48, 480 GB
OS: RHEL Server release 7.2 (Maipo), Linux kernel 3.10.0-327.el7.x86_64
HT: ON
Turbo: ON
Computer type: Server
Topology version, batch size: best (resnet -- 50, gnet_v3 -- 224, ssd -- 224)
Dataset: ImageNet / DummyData layer
Performance command: inference measured with "caffe time --forward_only -phase TEST"; training measured with "caffe train".
Data setup: DummyData layer (generates dummy images on the fly)
Compiler: Intel C++ compiler ver. 17.0.5 20171101
Performance measurement knobs: environment variables KMP_AFFINITY='granularity=fine,compact,1,0', OMP_NUM_THREADS=28; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance
Memory knobs: Caffe run with "numactl -l".

Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.

Config 41
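To rebuild against the exact revisions listed for Config 41, the commit hashes can be checked out directly. A minimal sketch; the document lists only the hashes, so the mkl-dnn repository URL and the local directory names are assumptions.

    import subprocess

    PINS = {
        "https://github.com/intel/caffe": "a3d5b022fe026e9092fc7abc7654b1162ab9940d",
        "https://github.com/intel/mkl-dnn": "464c268e544bae26f9b85a2acb9122c766a4c396",
    }

    for url, commit in PINS.items():
        name = url.rsplit("/", 1)[-1]
        subprocess.run(["git", "clone", url, name], check=True)
        subprocess.run(["git", "-C", name, "checkout", commit], check=True)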



Case Study: Time-Series Pattern Detection (Leading U.S. Market Exchange)

Result: 10x reduction in data storage and search-complexity costs, with more accurate matches than the non-deep-learning approach.

Client: Leading U.S. market exchange

Challenge: Identify known patterns and anomalies in market trading data, such as fraudulent activity or spikes in trading volume, in order to predict investment activity and determine whether any action is required.

Solution: Using the Intel® Nervana™ Cloud and the Neon framework, built a recurrent neural network (RNN) model with encoders and decoders to ingest public order-book data and automatically learn informative patterns of activity from historical data. Time-series analysis enables new use cases for fraud detection, anomaly detection, and other future applications.


• Client: Multinational oil and gas company

• Challenge: The customer operates a number of offshore oil rigs and uses submersible vehicles to take video footage to ensure its infrastructure is healthy and safe. Reviews of this footage are time-consuming and prone to errors, so a more efficient solution for detecting potential problems is needed.

• Solution: Built models to detect and classify bolts according to their level of corrosion.

• Advantages: Video footage can be condensed to 10% of its original length by filtering out unimportant frames and highlighting potential problem areas, enabling inspectors to perform their jobs more efficiently (a sketch of this frame-filtering idea follows below).


[Figure: sample frames ranked by level of corrosion, from low to high]
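The 10x footage reduction described above comes from scoring frames and keeping only the informative ones. A minimal sketch of that idea with a stand-in scoring function; the real system's corrosion classifier is not described here, so every name below is hypothetical.

    import numpy as np

    def corrosion_score(frame):
        """Stand-in for the real classifier: any per-frame score in [0, 1] works."""
        return float(np.clip(frame.mean() / 255.0, 0.0, 1.0))

    def condense(frames, keep_ratio=0.10):
        """Keep the top `keep_ratio` of frames by score, preserving order."""
        scores = np.array([corrosion_score(f) for f in frames])
        n_keep = max(1, int(len(frames) * keep_ratio))
        keep_idx = np.sort(np.argsort(scores)[-n_keep:])
        return [frames[i] for i in keep_idx]

    # Example with synthetic 'frames' (random 8-bit grayscale images).
    video = [np.random.randint(0, 256, (720, 1280), dtype=np.uint8) for _ in range(50)]
    highlights = condense(video)
    print(f"kept {len(highlights)} of {len(video)} frames")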



Case study: Enterprise Analytics

SERPRO*

Result: $1 billion
Streamlined the collection of US $1 billion in revenue by designing new APIs for car and license-plate image recognition.

Client: SERPRO, Brazil’s largest government-owned IT services corporation, providing technology services to multiple public sector agencies.

Challenge: Across Brazil, 35,000 traffic enforcement cameras document 45 million violations every year, generating US $1 billion in revenue. Fully automating the complex, labor-intensive process for issuing tickets by integrating image recognition via AI could reduce costs and processing time.

Solution: Used deep learning techniques to optimize SERPRO's code. With Brazilian student partners, developed new algorithms and ran training and inference tests using Google TensorFlow* on Dell EMC PowerEdge R740* servers based on Intel® Xeon® Scalable processor platforms.

*Other names and brands may be claimed as the property of others.
