
How HPC Hardware and Software are Evolving Towards Exascale
Kathy Yelick, Associate Laboratory Director and NERSC Director, Lawrence Berkeley National Laboratory; EECS Professor, UC Berkeley


Page 1:

How HPC Hardware and Software are Evolving Towards Exascale

Kathy Yelick Associate Laboratory Director and NERSC Director

Lawrence Berkeley National Laboratory

EECS Professor, UC Berkeley

Page 2:

NERSC Overview

NERSC represents science needs:
•  Over 4,000 users, 500 projects, 500 code instances
•  Over 1,600 publications in 2009
•  Time is used by university researchers (65%), DOE Labs (25%), and others

Petaflop Hopper system, late 2010:
•  High application performance
•  Nodes: two 12-core AMD processors
•  Low-latency Gemini interconnect

Page 3:

How and When to Move Users

[Chart: peak Teraflop/s of NERSC systems, 2006–2020, log scale: Franklin (N5), 19 TF sustained / 101 TF peak; Franklin (N5) + quad-core upgrade, 36 TF sustained / 352 TF peak; Hopper (N6), >1 PF peak; followed by planned 10 PF peak, 100 PF peak, and 1 EF peak systems. The programming model shifts from flat MPI to MPI+OpenMP, and then to MPI+X?? or MPI+CUDA?]

Want to avoid two paradigm disruptions on the road to exascale.

Page 4:

Exascale is about Energy Efficient Computing

[Chart: system power from 2005 to 2020, comparing the "usual scaling" trend with the exascale power goal.]

At $1M per MW, energy costs are substantial:
•  1 petaflop in 2010 will use 3 MW
•  1 exaflop in 2018 at 200 MW with "usual" scaling
•  1 exaflop in 2018 at 20 MW is the target
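A quick sanity check on those numbers, as a minimal sketch assuming the slide's $1M per MW is an annual figure (the way it is usually quoted):

```c
#include <stdio.h>

/* Annual electricity cost at a given system power draw, assuming the
 * slide's figure of roughly $1M per MW per year. */
static double annual_cost_musd(double megawatts) {
    const double musd_per_mw_year = 1.0;   /* million dollars per MW-year */
    return megawatts * musd_per_mw_year;
}

int main(void) {
    printf("1 PF at   3 MW: ~$%.0fM/year\n", annual_cost_musd(3.0));    /* ~$3M   */
    printf("1 EF at 200 MW: ~$%.0fM/year\n", annual_cost_musd(200.0));  /* ~$200M */
    printf("1 EF at  20 MW: ~$%.0fM/year\n", annual_cost_musd(20.0));   /* ~$20M  */
    return 0;
}
```

The gap between roughly $200M and $20M per year in electricity alone is why the 20 MW target drives so many of the design decisions later in the talk.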

Page 5:

Challenges to Exascale

1)  System power is the primary constraint
2)  Concurrency (1000x today)
3)  Memory bandwidth and capacity are not keeping pace
4)  Processor architecture is an open question
5)  Programming model: heroic compilers will not hide this
6)  Algorithms need to minimize data movement, not flops
7)  I/O bandwidth unlikely to keep pace with machine speed
8)  Reliability and resiliency will be critical at this scale
9)  Bisection bandwidth limited by cost and energy

Unlike the last 20 years, most of these (1–7) are equally important across scales, e.g., for 100 10-PF machines.


Page 6:

Anticipating and Influencing the Future

Hardware Design


Page 7:

Moore’s Law Continues, but Only with Added Concurrency

[Chart, 1970–2010, log scale: transistors (thousands), clock frequency (MHz), power (W), and cores per chip, with systems such as the CM-5 and Red Storm marked.]

Past: a 1000x performance increase came from 40x in clock speed and 25x in concurrency.

Future: all of the increase comes from added concurrency, including new on-chip concurrency.

Page 8:

Where does the Energy (and Time) Go?

[Chart: energy per operation in picojoules (1–10,000, log scale), now vs. projected 2018, for on-chip/CMP communication, intranode/SMP communication, and internode/MPI communication.]

Counting flops is irrelevant; only data movement matters.
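A hedged sketch of the cost model that chart implies: total energy is a weighted sum of arithmetic and of data movement at each level of the hierarchy, and the data-movement weights dominate. The picojoule fields below are placeholders to be filled in from measurements, not values taken from the chart.

```c
/* Illustrative energy model: the cost is dominated by how far data moves,
 * not by how many flops are executed. All pJ weights are placeholders. */
typedef struct {
    double pj_per_flop;         /* arithmetic                     */
    double pj_per_byte_onchip;  /* on-chip / CMP communication    */
    double pj_per_byte_smp;     /* intranode / SMP communication  */
    double pj_per_byte_mpi;     /* internode / MPI communication  */
} energy_model_t;

static double total_picojoules(const energy_model_t *m, double flops,
                               double bytes_onchip, double bytes_smp,
                               double bytes_mpi) {
    return flops        * m->pj_per_flop
         + bytes_onchip * m->pj_per_byte_onchip
         + bytes_smp    * m->pj_per_byte_smp
         + bytes_mpi    * m->pj_per_byte_mpi;
}
```

Because the byte terms carry weights orders of magnitude larger than the flop term, reducing data movement is what actually changes the total, which is the motivation for the communication-avoiding work later in the deck.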

Page 9:


Case for Lightweight Core and Heterogeneity

[Chart: asymmetric speedup (0–250) vs. size of the fat core in thin-core units (1–256), on a chip with area for 256 thin cores, for F = 0.5, 0.9, 0.975, 0.99, and 0.999, where F is the fraction of time in parallel and 1−F is serial. Marked design points: 256 cores, 193 cores, and 1 core.]

                 Intel QC Nehalem   Tensilica   Overall Gain
Power (W)              100              0.1         10^3
Area (mm^2)            240              2           10^2
DP flops                50              4           0.1
Overall                                             10^4

Lightweight (thin) cores improve energy efficiency

Ubiquitous programming model of today (MPI) will not work within a processor chip

[Diagram: a chip with 256 small cores alongside 1 fat core.]
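The curves in that chart follow the usual asymmetric-multicore speedup model (Hill and Marty); a minimal sketch, assuming as that model does that a fat core built from r thin-core units of area runs serial code about sqrt(r) times faster than one thin core:

```c
#include <math.h>
#include <stdio.h>

/* Hill-Marty-style asymmetric speedup: a chip with area for n thin cores
 * spends r units on one fat core (perf ~ sqrt(r)) and keeps n - r thin
 * cores. f is the fraction of time spent in parallel work. */
static double asym_speedup(double f, int n, int r) {
    double perf_fat = sqrt((double)r);            /* speed of the serial phase */
    double perf_par = perf_fat + (double)(n - r); /* fat core + thin cores     */
    return 1.0 / ((1.0 - f) / perf_fat + f / perf_par);
}

int main(void) {
    const int n = 256;   /* chip area in thin-core units, as on the slide */
    const double fs[] = {0.5, 0.9, 0.975, 0.99, 0.999};
    for (int i = 0; i < 5; i++)
        for (int r = 1; r <= n; r *= 2)
            printf("F=%.3f  fat core = %3d units  speedup = %6.1f\n",
                   fs[i], r, asym_speedup(fs[i], n, r));
    return 0;
}
```

Large F favors spending almost all the area on thin cores, yet the serial fraction still wants one reasonably fat core, which is the case for lightweight cores plus (at least) one fat core made on this slide.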

Page 10:

Memory is Not Keeping Pace

Technology trends work against constant or increasing memory per core:
•  Memory density is doubling every three years; processor logic is doubling every two
•  Memory costs are dropping gradually compared to logic costs

[Chart: Cost of Computation vs. Memory. Source: David Turek, IBM]

Question: Can you double concurrency without doubling memory?

Page 11:

Are GPUs the Future? (Includes Cell and GPU)

[Chart: performance and power efficiency across architectures, comparing cache-based systems (Gainestown, Barcelona, Victoria Falls) with local-store-based systems (Cell Blade, GTX280, GTX280-Host).]

K. Datta, M. Murphy, V. Volkov, S. Williams, J. Carter, L. Oliker, D. Patterson, J. Shalf, K. Yelick, BDK11 book

Page 12:

The Roofline Performance Model

[Chart: roofline for a generic machine: attainable Gflop/s (0.5–256, log scale) vs. actual flop:byte ratio (1/8–16), with ceilings for peak DP, mul/add imbalance, without SIMD, and without ILP.]

Generic machine:
•  The flat part of the roof is determined by arithmetic peak and instruction mix
•  The sloped part of the roof is determined by peak DRAM bandwidth (STREAM)
•  The x-axis is the computational intensity of your computation
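A minimal sketch of the model those bullets describe; the peak and bandwidth numbers below are illustrative placeholders, not any machine from these slides:

```c
#include <stdio.h>

/* Roofline model: attainable Gflop/s is capped either by the compute
 * ceiling or by memory bandwidth times arithmetic intensity. */
static double roofline_gflops(double peak_gflops,       /* compute ceiling      */
                              double stream_gb_per_s,   /* DRAM bandwidth       */
                              double flops_per_byte) {  /* arithmetic intensity */
    double bw_limited = stream_gb_per_s * flops_per_byte;
    return bw_limited < peak_gflops ? bw_limited : peak_gflops;
}

int main(void) {
    /* Illustrative machine: 85 Gflop/s peak, 20 GB/s STREAM bandwidth. */
    for (double ai = 0.125; ai <= 16.0; ai *= 2.0)
        printf("flop:byte = %6.3f  ->  %6.1f attainable Gflop/s\n",
               ai, roofline_gflops(85.0, 20.0, ai));
    return 0;
}
```

The ridge point, peak divided by bandwidth (about 4.25 flops per byte for the illustrative numbers above), separates bandwidth-bound kernels such as stencils and SpMV from compute-bound ones.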

Page 13:

Relative Performance Expectations

[Chart: rooflines of attainable Gflop/s vs. arithmetic intensity comparing two machines' double-precision peaks. Annotations: 6.7x in peak, 2.2x in measured bandwidth and roofline, 1.7x on a 7-point stencil.]

Page 14:

Relative Performance Across Kernels

[Chart: rooflines for the Xeon X5550 (Nehalem) and the NVIDIA C2050 (Fermi): attainable Gflop/s vs. arithmetic intensity, with single-precision peak, double-precision peak, and DP add-only ceilings marked for each machine.]

Page 15:

What Heterogeneity Means to Me

•  Case for heterogeneity
   –  Many small cores are needed for energy efficiency and power density; they could each have their own program counter (PC) or use a wide SIMD unit
   –  Need one fat core (at least) for running the OS
•  Local store, explicitly managed memory hierarchy
   –  More efficient (get only what you need) and simpler to implement in hardware
•  Co-processor interface between CPU and accelerator
   –  Market: GPUs are separate chips for specific domains
   –  Control: Why are the minority CPUs in charge?
   –  Communication: The bus is a significant bottleneck
   –  Do we really have to do this? Isn't parallel programming hard enough?

Page 16:

Open Problems in Software

•  Goal: performance through parallelism
•  Locality is equally important
•  Heroic compilers are an unlikely solution
•  Need better programming models that:
   –  Abstract machine variations
   –  Provide control over what is important
•  Data movement ("communication") dominates running time and power

Page 17:

What’s Wrong with Flat MPI?

•  We can run 1 MPI process per core
   –  This works now for quad-core on Franklin
•  How long will it continue working?
   –  4–8 cores? Probably. 128–1024 cores? Probably not.
•  What is the problem?
   –  Latency: some copying required by semantics
   –  Memory utilization: partitioning data into separate address spaces requires some replication
      •  How big is your per-core subgrid? At 10x10x10, over 1/2 of the points are surface points, probably replicated (see the sketch after this list)
   –  Memory bandwidth: extra state means extra bandwidth
   –  Weak scaling will not save us -- not enough memory per core
   –  Heterogeneity: MPI per CUDA thread block?
•  This means a "new" model for most NERSC users

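A small sketch of the surface-to-volume arithmetic behind the 10x10x10 bullet above, assuming a one-cell-deep halo of replicated ghost points, as a 7-point stencil would need:

```c
#include <stdio.h>

/* Surface-to-volume arithmetic for an n x n x n per-process subgrid,
 * assuming a one-cell-deep halo (typical for a 7-point stencil). */
int main(void) {
    const long n = 10;
    long owned    = n * n * n;                            /* points this rank owns    */
    long interior = (n - 2) * (n - 2) * (n - 2);          /* never sent to a neighbor */
    long surface  = owned - interior;                     /* must be exchanged        */
    long ghosts   = (n + 2) * (n + 2) * (n + 2) - owned;  /* replicated neighbor data */

    printf("owned points:   %ld\n", owned);                    /* 1000      */
    printf("surface points: %ld (%.0f%% of owned)\n",
           surface, 100.0 * surface / owned);                  /* 488, ~49% */
    printf("ghost copies:   %ld (%.0f%% extra memory)\n",
           ghosts, 100.0 * ghosts / owned);                    /* 728, ~73% */
    return 0;
}
```

Shrinking the per-core subgrid any further (more ranks per node, less memory each) only raises the replicated fraction, which is the memory argument against flat MPI above.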

Page 18:

Develop Best Practices in Multicore Programming

[Chart: PARATEC on 768 cores of Jaguar: running time (0–2000 sec) and memory per node (0–12 GB) vs. cores per MPI process (1, 2, 3, 6, 12), where cores per MPI process = OpenMP thread parallelism.]

Conclusions so far:
•  Mixed OpenMP/MPI saves significant memory
•  Running time impact varies with application
•  1 MPI process per socket is often good (see the skeleton below)

Run on Hopper next:
•  12 vs. 6 cores per socket
•  Gemini vs. SeaStar
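A minimal, hypothetical skeleton of the hybrid layout those conclusions point to (not PARATEC itself): one MPI process per socket with OpenMP threads across that socket's cores, so large arrays are shared by the threads rather than partitioned per core:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Hybrid MPI+OpenMP skeleton: launch one MPI process per socket and set
 * OMP_NUM_THREADS to the number of cores in that socket. */
int main(int argc, char **argv) {
    int provided, rank, nprocs;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double local_sum = 0.0;
    #pragma omp parallel reduction(+ : local_sum)
    {
        /* Threads share the process's arrays: no per-core ghost copies or
         * per-core MPI buffers, so memory per node drops. */
        local_sum += omp_get_thread_num();  /* stand-in for real node-local work */
    }

    double global_sum = 0.0;   /* MPI is used only between processes */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d MPI processes x %d threads, sum = %g\n",
               nprocs, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

Only the MPI calls (here, the reduce) cross process boundaries; everything else inside the node stays in shared memory.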

Page 19:

Partitioned Global Address Space Languages

•  Global address space: any thread may directly read/write remote data
•  Partitioned: data is designated as local or global

[Diagram: a global address space spanning threads p0, p1, ..., pn; each thread has a private on-chip partition, a shared on-chip partition, and shared off-chip DRAM. Local pointers (l:) reference data in the thread's own partition; global pointers (g:, m:) may reference data in any partition, where variables such as x and y are placed.]

•  No less scalable than message passing
•  Permits sharing, unlike message passing
•  One-sided communication: never say "receive"
•  Affinity control for shared and distributed memory
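A minimal sketch in UPC, a PGAS dialect of C, showing the private/shared distinction and one-sided access; the array name and sizes are illustrative:

```c
#include <upc.h>
#include <stdio.h>

#define N 256   /* elements per thread (illustrative) */

/* One logical global array, physically partitioned: with the default
 * (cyclic) layout, element i has affinity to thread i % THREADS. */
shared double grid[N * THREADS];

int main(void) {
    int i;
    double local_copy;                 /* private: one copy per thread */

    /* Each thread initializes the elements it has affinity to. */
    upc_forall (i = 0; i < N * THREADS; i++; &grid[i])
        grid[i] = (double)i;

    upc_barrier;

    /* One-sided: thread 0 reads a remote element directly; the owning
     * thread never posts a "receive". */
    if (MYTHREAD == 0) {
        local_copy = grid[N * THREADS - 1];   /* has affinity to the last thread */
        printf("thread 0 read %g from the shared space\n", local_copy);
    }
    return 0;
}
```

Affinity (which thread owns which elements) is what gives the programmer locality control on both shared and distributed memory, which is the last bullet above.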

Page 20:

Autotuners To Handle Machine Complexity

•  OSKI: Optimized Sparse Kernel Interface
•  Optimized for: size, machine, and matrix structure
•  Functional portability from C (except for Cell/GPUs)
•  Performance portability from install-time search and model evaluation at runtime
•  Later tuning, less opaque interface

[Diagram: the OSKI library holds kernels generated by the OSKI autotuner (code generator + search), e.g., matrix-vector multiply specialized to n, m (and to the matrix structure) and triangular solve specialized to n, m.]

See theses from Im, Vuduc, Williams, and Jain

[Chart: performance on the median matrix of the test suite.]
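A toy sketch of the autotuning idea, not OSKI's actual interface or its sparse kernels: parameterize a few kernel variants, time them on the target machine, and keep the fastest.

```c
#include <stdio.h>
#include <time.h>

#define N 2048

/* Toy autotuner: the "variants" are different unroll factors of a dense
 * y = A*x row loop; a real autotuner like OSKI searches over register
 * blockings of sparse kernels and also uses the matrix structure. */
static double a[N][N], x[N], y[N];

static double time_variant(int unroll) {
    clock_t t0 = clock();
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        int j = 0;
        for (; j + unroll <= N; j += unroll)      /* unrolled body */
            for (int u = 0; u < unroll; u++)
                sum += a[i][j + u] * x[j + u];
        for (; j < N; j++)                        /* remainder loop */
            sum += a[i][j] * x[j];
        y[i] = sum;
    }
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int candidates[] = {1, 2, 4, 8}, best = 1;
    double best_time = 1e30;
    for (int k = 0; k < 4; k++) {
        double t = time_variant(candidates[k]);
        printf("unroll %d: %.4f s\n", candidates[k], t);
        if (t < best_time) { best_time = t; best = candidates[k]; }
    }
    printf("autotuner picks unroll = %d\n", best);
    return 0;
}
```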

Page 21:

Communication-Avoiding Algorithms

Consider sparse iterative methods
•  Nearest-neighbor communication on a mesh
•  Dominated by the time to read the matrix (edges) from DRAM
•  And (small) communication and global synchronization events at each step

Can we lower data movement costs?
•  Take k steps "at once" with one matrix read from DRAM and one communication phase
   –  Parallel implementation: O(log p) messages vs. O(k log p)
   –  Serial implementation: O(1) moves of data vs. O(k)

Joint work with Jim Demmel, Mark Hoemmen, Marghoob Mohiyuddin

Page 22:

Communication-Avoiding Algorithms

•  Sparse iterative (Krylov subspace) methods
   –  Nearest-neighbor communication on a mesh
   –  Dominated by the time to read the matrix (edges) from DRAM
   –  And (small) communication and global synchronization events at each step
•  Can we lower data movement costs?
   –  Take k steps with one matrix read from DRAM and one communication phase
      •  Serial: O(1) moves of data vs. O(k)
      •  Parallel: O(log p) messages vs. O(k log p)
•  Can we make communication provably optimal?
   –  Communication both to DRAM and between cores
   –  Minimize independent accesses ("latency")
   –  Minimize data volume ("bandwidth")

Joint work with Jim Demmel, Mark Hoemmen, Marghoob Mohiyuddin
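For reference, a naive sketch of the kernel being reorganized: computing the Krylov vectors [x, Ax, ..., A^k x] for a CSR matrix. Written this way it reads A from DRAM k times and takes k communication rounds in parallel; the communication-avoiding (Akx) kernel produces the same vectors from a single read of A plus some redundant ghost-region work. All names are illustrative.

```c
/* Naive matrix powers kernel: V[j] = A^j * x for j = 0..k, with A stored
 * in CSR form (rowptr/colind/val). Each of the k SpMVs re-reads A; the
 * communication-avoiding (Akx) version computes the same vectors in one
 * pass over A by blocking rows together with their ghost regions. */
void matrix_powers_naive(int n, const int *rowptr, const int *colind,
                         const double *val, const double *x, int k,
                         double **V /* (k+1) vectors of length n */) {
    for (int i = 0; i < n; i++)
        V[0][i] = x[i];                          /* V[0] = x */

    for (int j = 1; j <= k; j++) {               /* V[j] = A * V[j-1] */
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int p = rowptr[i]; p < rowptr[i + 1]; p++)
                sum += val[p] * V[j - 1][colind[p]];
            V[j][i] = sum;
        }
    }
}
```

In a distributed setting the same reorganization replaces k communication phases with one, which is where the O(log p) vs. O(k log p) message counts above come from.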

Page 23:

Bigger Kernel (Akx) Runs at Faster Speed than Simpler (Ax)

Speedups on Intel Clovertown (8 core)

Jim Demmel, Mark Hoemmen, Marghoob Mohiyuddin, Kathy Yelick

Page 24:

"Monomial" basis [Ax, ..., A^k x] fails to converge

A different polynomial basis does converge

Page 25:

Communication-Avoiding Krylov Method (GMRES)

Performance on 8 core Clovertown

Page 26:

Communication-Avoiding Dense Linear Algebra

•  Well known why BLAS3 beats BLAS1/2: it minimizes communication = data movement
   –  Attains the lower bound of Ω(n^3 / cache_size^(1/2)) words moved in the sequential case; the parallel case is analogous (see the sketch after this slide)
•  The same lower bound applies to all of linear algebra
   –  BLAS, LU, Cholesky, QR, eig, SVD, compositions...
   –  Sequential or parallel
   –  Dense or sparse (n^3 ⇒ #flops in the lower bound)
•  Conventional algorithms (Sca/LAPACK) do much more communication
•  We have new algorithms that meet the lower bounds
   –  Good speedups in prototypes (including on cloud)
   –  Lots more algorithms and implementations to develop
   –  See talk in the MIT Math Dept, December 13th
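A minimal sketch of the idea behind that lower bound: a cache-blocked matrix multiply moves O(n^3 / b) words to and from DRAM, so choosing the block size b near sqrt(cache_size) attains the Ω(n^3 / cache_size^(1/2)) bound; the block size below is an illustrative placeholder.

```c
#define BS 64   /* block size; ideally ~ sqrt(cache_size / (3 * sizeof(double))) */

/* Cache-blocked C += A*B for n x n row-major matrices. Each BS x BS block
 * of A and B is reused BS times once it is in cache, so DRAM traffic is
 * O(n^3 / BS) words instead of O(n^3) for the naive loop order. */
static void matmul_blocked(int n, const double *A, const double *B, double *C) {
    for (int ii = 0; ii < n; ii += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int jj = 0; jj < n; jj += BS)
                for (int i = ii; i < ii + BS && i < n; i++)
                    for (int k = kk; k < kk + BS && k < n; k++) {
                        double aik = A[i * n + k];
                        for (int j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += aik * B[k * n + j];
                    }
}
```

The communication-avoiding dense algorithms referred to above apply the same accounting to LU, QR, and the rest, in both the sequential and the parallel (distributed-memory) settings.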

Page 27:

General Lessons

•  Early intervention with hardware designs
•  Optimize for what is important: energy = data movement
•  Anticipating and changing the future
   –  Influence hardware designs
   –  Use languages that reflect the abstract machine
   –  Write code generators / autotuners
   –  Redesign algorithms to avoid communication
•  These problems are essential for computing performance in general