
Page 1:

Fifer: Practical Acceleration of Irregular Applications on Reconfigurable Architectures

Quan M. Nguyen and Daniel Sanchez
Massachusetts Institute of Technology

MICRO-54
Live session: Session 9A (Graph Processing)
October 21, 2021 at 2:15 PM EDT

Page 2:

Irregular applications are difficult to accelerate
• Irregular applications' unpredictable data reuse & control flow are hard to accelerate on today's architectures
• CPUs: poor latency tolerance, high instruction execution overheads
• Dedicated accelerators: not flexible
• Reconfigurable spatial architectures offer circuit-level control of distributed computation, but...
• still cannot extract enough parallelism
• only benefit regular memory/compute patterns

[Figure: source code is compiled and mapped onto a reconfigurable fabric of functional units, switches, and I/O ports]

Page 3:

Fifer enables accelerating irregular applications
• Insight: accelerate irregular applications by exploiting pipeline parallelism
• Create dynamic temporal pipelines: time-multiplexing stages of a pipeline on a reconfigurable fabric
• Fifer's speedups: gmean 17x over an OOO multicore and 2.8x over reconfigurable spatial architectures without time-multiplexing

[Bar chart: speedups on BFS, CC, PRD, Radii, SpMM, and Silo (y-axis 0.0 to 4.0) for Multicore OOO, Static pipeline, and Fifer]

Page 4:

Agenda
Intro ➜ Background ➜ Fifer ➜ Evaluation

Page 5:

Irregular applications are difficult to accelerate

[Figure: BFS data structures in CSR format: current fringe, edge list offsets (I), neighbors (J), distances, and next fringe]

def bfs(src):
    for v in current fringe:
        start, end = offsets[v], offsets[v+1]
        for ngh in neighbors[start:end]:
            dist = distances[ngh]
            if dist is not set:
                set distance; add to next fringe

• Characterized by unpredictable reuse:
• Caches, scratchpads capture some locality
• But irregular applications generally have poor locality, large data structures
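For concreteness, here is a minimal runnable version of the pseudocode above. The level-synchronous structure, the -1 "not set" sentinel, and the CSR example are illustrative assumptions, not Fifer's implementation:

def bfs(src, offsets, neighbors):
    num_vertices = len(offsets) - 1
    distances = [-1] * num_vertices       # -1 means "distance not set"
    distances[src] = 0
    current_fringe = [src]
    while current_fringe:
        next_fringe = []
        for v in current_fringe:
            start, end = offsets[v], offsets[v + 1]
            for ngh in neighbors[start:end]:
                if distances[ngh] == -1:  # unpredictable, data-dependent access
                    distances[ngh] = distances[v] + 1
                    next_fringe.append(ngh)
        current_fringe = next_fringe
    return distances

# Example: a 4-vertex path graph 0-1-2-3 in CSR form.
offsets = [0, 1, 3, 5, 6]
neighbors = [1, 0, 2, 1, 3, 2]
assert bfs(0, offsets, neighbors) == [0, 1, 2, 3]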

Page 6:

General-purpose cores handle irregular applications poorly
• Modern cores have expensive latency tolerance mechanisms:
• Out-of-order execution
• Multithreading
• General-purpose cores are temporal architectures: they change operations (instructions) over time
• Unit of work is small; high fetch/decode overheads

[Figure: an OOO core, with a front end, rename/dispatch/issue logic, ROB, physical register file, load/store buffers, and functional units connected to the L1 cache]

Page 7:

Spatial architectures improve computational intensity...
• Map operations spatially to an array of functional units (FUs)
• Switches are set to pass operands between FUs
• Input/output ports feed values to/from the fabric
• FUs operate at machine word width: coarse-grain reconfigurable array (CGRA)

[Figure: a CGRA fabric, a grid of functional units and switches ringed by I/O ports]

Page 8: Fifer - people.csail.mit.edu

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

PEL1 Cache

Last-level Cache

High-bandwidth Memory

Cont

rol C

ore

Anatomy of a CGRA-based system• Many processing elements (PEs)

with fabric and private cache• Data flow within a CGRA:

rigid pipelines• Inter-PE communication:

decoupled• Control core for system

interactions, setup/teardown

8Baseline system

Switch

Switch

FuncSwitch

Switch

Switch

Switch

...

...

...

L1 Cache

Inter-PEoutput

channels

Inter-PEinput

channels

Processing element (PE)

Page 9:

... but not flexible enough for irregularity
• Only transform the inner loop
• Some approaches are highly specialized to an application
• Irregular applications have unpredictable latencies and variable computational intensity as they execute
• Current spatial architectures would either stall or suffer poor utilization (many PEs idle)

[Figure: paper excerpts from DySER [HPCA '11] and Graphicionado [MICRO '16], examples of spatial architectures that transform only regular inner loops or specialize to one application]

Page 10:

Time-multiplexing on CGRAs: Triggered Instructions [ISCA '13]
• Triggered Instructions PEs can choose among many instructions
• Limited number of instructions (16)
• Complex scheduling to keep PEs active
• Fifer's approach: coarse-grain reconfiguration on coarse-grain sets of operations

Page 11:

Irregular applications can be decoupled and mapped to spatial architectures

Stages: Process current fringe ➜ Enumerate neighbors ➜ Visit neighbors ➜ Update data, next fringe

def bfs(src):
    for v in current fringe:                      # Process current fringe
        start, end = offsets[v], offsets[v+1]     # Enumerate neighbors
        for ngh in neighbors[start:end]:
            dist = distances[ngh]                 # Visit neighbors
            if dist is not set:
                set distance; add to next fringe  # Update data, next fringe
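A minimal sketch of this decoupling, with software queues standing in for the hardware queues between stages (Fifer runs the stages on a CGRA fabric; the queue and function names here are illustrative):

from collections import deque

def bfs_decoupled(src, offsets, neighbors):
    distances = [-1] * (len(offsets) - 1)
    distances[src] = 0
    fringe = [src]
    while fringe:
        q_ranges, q_nghs, next_fringe = deque(), deque(), []
        # Stage 1: process current fringe -> (source distance, edge range)
        for v in fringe:
            q_ranges.append((distances[v], offsets[v], offsets[v + 1]))
        # Stage 2: enumerate neighbors -> (source distance, neighbor)
        while q_ranges:
            d, start, end = q_ranges.popleft()
            for e in range(start, end):
                q_nghs.append((d, neighbors[e]))
        # Stages 3+4: visit neighbors; update distances and the next fringe
        while q_nghs:
            d, ngh = q_nghs.popleft()
            if distances[ngh] == -1:
                distances[ngh] = d + 1
                next_fringe.append(ngh)
        fringe = next_fringe
    return distances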

Page 12:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

Process current fringe ➜ Enumerate neighbors ➜ Visit neighbors ➜ Update data, next fringe


Page 14:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

Process current fringe ➜ Enumerate neighbors ➜ Fetch distances ➜ Update data, next fringe

Page 15:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

[Figure: static pipeline, with each stage mapped to the functional units of its own processing element, connected by inter-PE channels]

Page 16:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

[Figure: queues buffer values between the pipeline stages]


Page 18:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

[Figure: the stages are time-multiplexed on one fabric, forming a dynamic temporal pipeline connected by queues]

Page 19:

Insight: Create dynamic temporal pipelines on reconfigurable spatial architectures

[Figure: Fifer, a dynamic temporal pipeline with queues]

Page 20:

Agenda
Intro ➜ Background ➜ Fifer ➜ Evaluation

Page 21:

Fifer is flexible yet performant
• Coarse-grain reconfigurable array (CGRA) increases compute density over general-purpose cores
• Buffering between PEs and within PEs provides latency tolerance
• Time-multiplexed fabric keeps throughput and utilization high

[Figure: a Fifer processing element, comprising a switch/FU fabric, L1 cache, scheduler, queue memory with DRMs, and inter-PE and intra-PE input/output channels, within a reconfigurable spatial architecture of PEs]

Page 22:

Mapping applications to Fifer

Toolflow: annotated per-stage source (*.c) ➜ C/C++ compiler ➜ LLVM IR (*.ll) ➜ dataflow graph analysis & extraction ➜ dataflow graph (DFG) ➜ bitstream generation ➜ fabric configuration bitstream (*.bin)

Serial code:
for e in range(start, end):
    ngh = neighbors[e]

Pseudo-assembly:
    mov  %r_neighbors, ...
    deq  %r_e, $q_start
    deq  %r_end, $q_end
loop:
    lea  %r_addr, (%r_neighbors, %r_e, 2)
    ld   %r_ngh, (%r_addr)
    enq  $q_ngh, %r_ngh
    addi %r_e, %r_e, 1
    blt  %r_e, %r_end, loop
done:
    ...

[Figure: the fabric mapping, in which start and end are dequeued from input queues, an LEA unit computes the address of neighbors[e], an LD unit fetches ngh through cache control, a comparator tests e < end, and ngh is enqueued to the output]
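Read as Python, the pseudo-assembly above implements the following stage; the scale factor of 2 in the lea suggests 2-byte neighbor IDs, and collections.deque stands in for Fifer's hardware queues (both assumptions for illustration):

from collections import deque

def enumerate_neighbors_stage(q_start, q_end, q_ngh, neighbors):
    e = q_start.popleft()            # deq  %r_e, $q_start
    end = q_end.popleft()            # deq  %r_end, $q_end
    while e < end:                   # blt  %r_e, %r_end, loop
        q_ngh.append(neighbors[e])   # lea + ld + enq $q_ngh, %r_ngh
        e += 1                       # addi %r_e, %r_e, 1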

Page 23:

Fifer needs reconfigurations to be...

Rare
• Amortize reconfiguration cost over hundreds of cycles
• Round-robin scheduling switches too often
• Fifer's scheduler in each PE keeps a stage scheduled until queues become full or empty
• Prioritize stages with the most work in their input queues (sketched below)

Fast
• Quickly tolerate variations in the amount of work between stages
• Prior techniques (e.g., scan chains) reconfigure in ~microseconds; we need cycles
• Fifer's double-buffered configuration cells make reconfiguration fast at low hardware cost
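A minimal sketch of that scheduling policy, assuming each stage object carries in_queue/out_queue lists and an illustrative queue capacity (these names and the constant are ours, not Fifer's hardware interface):

def out_queue_full(stage, capacity=64):   # capacity is an assumed constant
    return len(stage.out_queue) >= capacity

def pick_next_stage(stages, current):
    # Keep the current stage resident while it can still make progress,
    # amortizing the cost of its last reconfiguration.
    if current.in_queue and not out_queue_full(current):
        return current
    # Otherwise switch to the runnable stage with the most pending input.
    runnable = [s for s in stages if s.in_queue and not out_queue_full(s)]
    return max(runnable, key=lambda s: len(s.in_queue), default=None)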

Page 24:

Fast reconfiguration with double-buffering

[Figure: a functional unit whose inputs A, B, and a constant feed an ALU; a double-buffered configuration cell holds Config. Slot A and Config. Slot B, selects the current configuration to drive the configuration signals, and fills the inactive slot with configuration data from the L1 cache via load & select signals; connections run to and from the switches]

Reconfiguration sequence: during steady-state execution from one slot, the scheduler decides to switch; the new configuration is loaded from the L1 cache and buffered into the inactive slot while execution continues; in-flight operations drain; the buffered slot is then activated, and steady-state execution resumes from the new configuration. The two slots alternate across successive reconfigurations.
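A toy software model of the double-buffered configuration cell, assuming the fabric keeps executing from the active slot while the inactive slot is filled (class and method names are illustrative):

class ConfigCell:
    def __init__(self):
        self.slots = [None, None]   # Config. Slot A and Config. Slot B
        self.active = 0             # index of the slot driving the fabric

    def load_inactive(self, config_bits):
        # Overlaps with steady-state execution: only the idle slot is written.
        self.slots[1 - self.active] = config_bits

    def switch(self):
        # Invoked after in-flight operations drain; flipping a select signal
        # takes cycles, not the microseconds a scan chain would need.
        self.active = 1 - self.active
        return self.slots[self.active]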


Page 38:

Exploiting pipeline and data parallelism together

Process current fringe ➜ Enumerate neighbors ➜ Visit neighbors ➜ Update data, next fringe

Page 39:

Exploiting pipeline and data parallelism together

PE 0: Proc. cur. fringe ➜ Enum. neighs. ➜ Fetch dists. ➜ Upd. data, next fringe

Page 40:

Exploiting pipeline and data parallelism together

PEs 0-3: each PE time-multiplexes the full pipeline (Proc. cur. fringe ➜ Enum. neighs. ➜ Fetch dists. ➜ Upd. data, next fringe)

• Replicate pipelines using many PEs
• Careful partitioning avoids synchronization through shared memory
• Increase PE utilization & parallelism by performing multiple units of work in one stage (SIMD-style)
• e.g., enumerate multiple neighbors per cycle (sketched below)
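An illustrative sketch of SIMD-style work within a stage: each step enumerates up to WIDTH neighbors instead of one (WIDTH and the function are our assumptions, not Fifer constants):

WIDTH = 4

def enumerate_neighbors_simd(start, end, neighbors, q_ngh):
    for e in range(start, end, WIDTH):
        # Up to WIDTH units of work per "cycle" of this stage.
        q_ngh.extend(neighbors[e:min(e + WIDTH, end)])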

Page 41:

See paper for more
• Accelerating memory accesses with decoupled reference machines
• Evaluating other decoupling strategies
• Handling control flow
• Energy and area estimates for Fifer's major components

Page 42:

Agenda
Intro ➜ Background ➜ Fifer ➜ Evaluation

Page 43:

Evaluation
• Baseline system: 16-PE system with static stage mappings (no intra-PE queues)
• Other comparison systems: serial and 4-core OOO cores
• A Fifer PE is 1.34 mm² at 45 nm; core counts are set for a roughly iso-area comparison
• Benchmarks evaluated:
• Graph analytics (BFS, Connected Components, PageRank-Delta, Radii)
• Sparse linear algebra (SpMM)
• Databases (Silo)

[Figure: the static pipeline baseline and Fifer (our system) side by side; each has 16 PEs with L1 caches, a last-level cache, high-bandwidth memory, and a control core, with arrows marking the predominant flow of data]

Page 44:

Other application pipelines

Graph analytics: Process current fringe ➜ Enumerate neighbors ➜ Visit neighbors ➜ Update data, next fringe

SpMM (linear algebra): Stream rows of A and Stream cols of B ➜ Merge-intersect ➜ Accumulate

Silo (database): Query stage ➜ Lookup stage ➜ (keys) ➜ Traverse internal node ➜ Process leaf node ➜ (values)

Page 45: Fifer - people.csail.mit.edu

BFS CC PRD Radii SpMM Silo0.01.02.03.04.0

Spee

dup

Multicore OOO Static pipeline Fifer

Fifer outperforms the baseline

• Fifer achieves consistent speedups over the baseline

24

Page 46:

Fifer effectively hides latency
• Fifer reduces waiting on queues, even when reconfiguring

[Stacked bar chart: normalized cycles for BFS, CC, PRD, Radii, SpMM, and Silo, broken down into Idle, Reconfiguration, Queue full/empty, Stalls, and Issued, for each system configuration]

I: Serial  D: Data-parallel  S: Static pipeline  F: Fifer

Page 47:

Fifer achieves infrequent and fast reconfiguration

Application                              BFS    CC     PRD    Radii  SpMM   Silo   Mean
Average residence time (cycles)          140    279    927    564    30     1490   448
Average reconfiguration period (cycles)  12.5   13.9   20.4   27.7   12.6   60.1   19.7

• Stages remain resident for hundreds of cycles
• SpMM has a very short residence time, necessitating double-buffering

Page 48:

Fifer helps accelerate irregular applications on reconfigurable architectures
• Transform irregular applications into dynamic temporal pipelines and efficiently execute them on reconfigurable spatial architectures
• Fast reconfiguration is enabled by Fifer's scheduler and double-buffered configuration cells
• Fifer makes executing irregular applications on reconfigurable spatial architectures practical

Page 49:

Thank you!

Fifer: Practical Acceleration of Irregular Applications on Reconfigurable Architectures

Quan M. Nguyen and Daniel Sanchez
[email protected] and [email protected]

MICRO-54
Live session: Session 9A (Graph Processing)
October 21, 2021 at 2:15 PM EDT

This presentation and recording belong to the authors.
No distribution is allowed without the authors' permission.