Exascale Computing Without Templates
Karl Rupp, Richard Mills, Barry Smith, Matthew Knepley, Jed Brown
Argonne National Laboratory
DOE COE Performance Portability Meeting, Denver, August 22-24, 2017


Page 1: Exascale Computing Without Templates

Exascale Computing Without Templates

Karl Rupp, Richard Mills, Barry Smith, Matthew Knepley, Jed Brown

Argonne National Laboratory

DOE COE Performance Portability Meeting, Denver, August 22-24, 2017

Page 2: Exascale Computing Without Templates

Providing Context

Exascale Computing without Threads*

A White Paper Submitted to the DOE High Performance Computing Operational Review (HPCOR) on Scientific Software Architecture for Portability and Performance

August 2015

Matthew G. Knepley1, Jed Brown2, Barry Smith2, Karl Rupp3, and Mark Adams4

1Rice University, 2Argonne National Laboratory, 3TU Wien, 4Lawrence Berkeley National Laboratory

[email protected], [jedbrown,bsmith]@mcs.anl.gov, [email protected], [email protected]

Abstract

We provide technical details on why we feel the use of threads does not offer any fundamental performance advantage over using processes for high-performance computing and hence why we plan to extend PETSc to exascale (on emerging architectures) using node-aware MPI techniques, including neighborhood collectives and portable shared memory within a node, instead of threads.

Introduction. With the recent shift toward many cores on a single node with shared-memory domains, a hybrid approach has become in vogue: threads (almost always OpenMP) within each shared-memory domain (or within each processor socket in order to address NUMA issues) and MPI across nodes. We believe that portions of the HPC community have adopted the point of view that somehow threads are "necessary" in order to utilize such systems, (1) without fully understanding the alternatives, including MPI 3 functionality, (2) while underestimating the difficulty of utilizing threads efficiently, and (3) without appreciating the similarities of threads and processes. This short paper, due to space constraints, focuses exclusively on issue (3), since we feel it has received virtually no attention.

A common misconception is that threads are lightweight, whereas processes are heavyweight, and that inter-thread communication is fast, whereas inter-process communication is slow (and might even involve a dreaded system call). Although this was true at certain times and on certain systems, we claim that in the world relevant to HPC today it is incorrect. To understand whether threads are necessary to HPC, one must understand the exact technical importance of the differences (and similarities) between threads and processes and whether these differences have any bearing on the utility of threads versus processes.

To prevent confusion, we will use the term stream of execution to mean the values in the registers, including the program counter, and a stack. Modern systems have multiple cores, each of which may have multiple sets of (independent) registers. Multiple cores mean that several streams of execution can run simultaneously (one per core), and multiple sets of registers mean that a single core can switch between two streams of execution very rapidly, for example in one clock cycle, by simply switching which set of registers is used (so-called hardware threads or hyperthreading). This hyperthreading is intended to hide memory latency. If one stream of execution is blocked on a memory load, the core can switch to the other stream, which, if it is not also blocked on a load, can commence computations immediately. Each stream of execution runs in a virtual address space. All addresses used by the stream are translated to physical addresses by the TLB (translation look-aside buffer) automatically by the hardware before the value at that address is loaded from the caches or memory into a register. In the past, some L1 caches used virtual addresses, which meant that switching between virtual address spaces required the entire cache to be invalidated (this is one reason the myth of heavyweight processes developed); but for modern systems the L1 caches are hardware address aware and changing virtual address spaces does not require flushing the cache. Another reason switching processes in the past was expensive was that some systems would flush the TLB when switching virtual address spaces. Modern systems allow multiple virtual address spaces to share different parts of the TLB.

With Linux, the operating system of choice for most HPC systems, the only major difference between threads and processes is that threads (of the same process) share the same virtual address space while different processes have different virtual address spaces (though they can share physical memory). Scheduling of threads and processes is handled in the same way, hence the cost of switching between threads or between processes is the same. Thus, the only fundamental difference is that threads share all memory by default, whereas processes share no memory by default. The one true difference between threads and processes is that threads can utilize the entire TLB for virtual address translation, whereas processes each get a portion of the TLB. With modern large TLBs we argue that this

*Note that this discussion relates to threads on multicore systems and does not address threads on GPU systems.


https://www.orau.gov/hpcor2015/whitepapers/Exascale Computing without Threads-Barry Smith.pdf

PETSc Developers Care About Recent Developments

After careful evaluation: Favor MPI 3.0 (and later) over threads

Find the best long-term solutions for our users

Consider best solutions for large-scale applications, not just toy-apps

Page 3: Exascale Computing Without Templates

Providing Context

http://mooseframework.org/static/media/wiki/images/229/b61cdbc1e8be71dae37adc31a688d209/moose-arch.png

Page 4: Exascale Computing Without Templates

Providing Context

Our Attempts in C++ Library Development

Sieve: Several years of C++ mesh management attempts in PETSc

ViennaGrid 2.x: Heavily templated C++ mesh management library

ViennaCL: Dense and sparse linear algebra and solvers for multi- and many-core architectures

Aftermath

Sieve: Replaced by DMPlex (written in C)

ViennaGrid: Version 3.0 provides C-ABI

ViennaCL: Rewrite in C likely

Sequential build times for the ViennaCL test suite

[Figure: build time in seconds (log scale, 10^1 to 10^4) versus year (2010-2016), annotated with ViennaCL releases 1.0.0 through 1.7.0]


Page 6: Exascale Computing Without Templates

Disadvantages of C++ Templates

Static Dispatch

Architecture-specific information only available at run time

“Change code and recompile” not acceptable advice

Dealing with Compilation Errors

Type names pollute compiler output

Replicated across interfaces

CRTP may result in type length explosion

Default arguments become visible

Type                                                         Length
std::vector<int>                                                 38
std::vector<std::vector<int> >                                  109
std::vector<std::vector<std::vector<int> > >                    251
std::vector<std::vector<std::vector<std::vector<int> > > >      539

https://xkcd.com/303/


Page 8: Exascale Computing Without Templates

Disadvantages of C++ Templates

https://tgceec.tumblr.com/

Page 9: Exascale Computing Without Templates

Disadvantages of C++ Templates

Scope Limitations

Template metaprogramming lacks state

Optimizations across multiple code lines difficult or impossible

Example

Consider vector updates in pipelined CG method:

x_i ← x_{i-1} + α p_{i-1}

r_i ← r_{i-1} − α y_i

p_i ← r_i + β p_{i-1}

Reuse of p_{i-1} and r_{i-1} easy with for-loops, but hard with expression templates


Page 11: Exascale Computing Without Templates

Disadvantages of C++ Templates

Complicates Debugging

Stack traces carry longer names and grow deeper

Setting good breakpoints may become harder

Lack of a Stable ABI

Object files from different compilers generally incompatible

Name mangling makes use outside C++ land almost impossible

High Entry Bar

Number of potential contributors inversely proportional to code sophistication

Domain scientists have limited resources for C++ templates


Page 14: Exascale Computing Without Templates

A Path Forward

Manage Complexity

Good interface design

Refactor code when needed

Hand-optimize small kernels only (cf. BLIS methodology)

[Figure 1: an illustration of the algorithm for computing high-performance matrix multiplication, as expressed within the BLIS framework (Van Zee and van de Geijn 2015): five loops around a micro-kernel, with panels of A and B packed and staged through main memory, the L3/L2/L1 caches, and the registers.]

[F. Van Zee, T. Smith, ACM TOMS 2017]

Development Implications

Adopt professional software development practices

Develop, maintain, and evolve different data structures ...

... and code paths

Use clear and easy-to-understand data structures

Fallacy: “Writing” an application only once in its final form


Page 16: Exascale Computing Without Templates

A Path Forward

Spending Development Resources

Reuse existing libraries — reinventing the wheel is not productive!

Focus on domain- and application-specific aspects

Obtain expertise and resources for continuous code evolution

Required Incentives

Reward contributions to existing projects

Pair research funding with software development funding

Establish software development career tracks


Page 18: Exascale Computing Without Templates

A Path Forward

Is Performance Portability Just a Software Productivity Aspect?

https://www.nitrd.gov/PUBS/CSESSPWorkshopReport.pdf

Page 19: Exascale Computing Without Templates

Summary

Long-Term Problems of Heavy C++ Templates Use

Template metaprogramming is a leaky abstraction

Excessive type names slow down all stages of the compile-run-debug cycle

Templates operate at compile time, but the architecture is ultimately known only at run time

A Path Forward

Adopt professional software development practices

Be prepared to develop different data structures and code paths

Write clear, readable code using simple data structures

Evolve and refactor data structures, kernels, and interfaces over time

(cf. software productivity discussions)