
Page 1:

TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory

Mingyu Gao, Jing Pu, Xuan Yang, Mark Horowitz, Christos Kozyrakis

Stanford University

ASPLOS – April 2017

Page 2:

Neural Networks (NNs)

Unprecedented accuracy for challenging applications

System perspective: compute and memory intensive

o Many efforts to accelerate with specialized hardware

[Figure: example application domains (classification, recognition, control, prediction, optimization) and acceleration platforms (multi-cores, GPUs, FPGAs, ASICs, clusters)]

Page 3:

Neural Networks (NNs)

[Figure: a network classifying an input image as "Dog"; a convolution layer takes Ni ifmaps, convolves them with an Ni x No bank of filters, and produces No ofmaps]

foreach b in batch Nb
  foreach ifmap u in Ni
    foreach ofmap v in No
      // 2D conv
      O(v,b) += I(u,b) * W(u,v) + B(v)

CONV

foreach b in batch Nb
  foreach neuron x in Nx
    foreach neuron y in Ny
      // Matrix multiply
      O(y,b) += I(x,b) x W(x,y) + B(y)

𝑂 = 𝐼 ×𝑊

FC
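For readers who want to run these loop nests, here is a minimal NumPy sketch of a CONV layer and an FC layer following the pseudocode above. The array shapes, the helper names, and adding the bias once per output (rather than inside the innermost loop, as the slide's shorthand suggests) are illustrative choices, not taken from the slides.

import numpy as np

def conv_layer(I, W, B):
    # I: ifmaps  [Nb, Ni, H, W]   (batch, input fmaps, height, width)
    # W: filters [Ni, No, K, K]
    # B: biases  [No]
    # Returns O: ofmaps [Nb, No, H-K+1, W-K+1]  (valid padding, stride 1)
    Nb, Ni, H, Wd = I.shape
    _, No, K, _ = W.shape
    O = np.zeros((Nb, No, H - K + 1, Wd - K + 1))
    for b in range(Nb):                      # foreach b in batch Nb
        for u in range(Ni):                  # foreach ifmap u in Ni
            for v in range(No):              # foreach ofmap v in No
                for y in range(H - K + 1):   # 2D convolution (cross-correlation form)
                    for x in range(Wd - K + 1):
                        O[b, v, y, x] += np.sum(I[b, u, y:y+K, x:x+K] * W[u, v])
    O += B[None, :, None, None]              # add the per-ofmap bias once
    return O

def fc_layer(I, W, B):
    # I: inputs [Nb, Nx], W: weights [Nx, Ny], B: biases [Ny]
    return I @ W + B                          # O(y,b) += I(x,b) * W(x,y) + B(y)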

Page 4:

Domain-Specific NN Accelerators

Spatial architectures of PEs

o ~100x better performance and energy efficiency

o Low-precision arithmetic, dynamic pruning, static compression, …

[Figure: the CONV and FC loop nests mapped onto a spatial architecture: a 2D array of processing elements (each with an ALU and a register file), a shared global buffer, and main memory]

Page 5:

Memory Challenges for Large NNs

Large footprints and bandwidth requirements

o Many and large layers, complex neuron structures

o Efficient computing requires higher bandwidth

Limit scalability for future NNs

5

foreach b in batch Nbforeach ifmap u in Ni

foreach ofmap v in No// 2D convO(v,b) += I(u,b) * W(u,v) + B(v)

foreach b in batch Nbforeach neuron x in Nx

foreach neuron y in Ny// Matrix multiplyO(y,b) += I(x,b) x W(x,y) + B(v)

PE PE PE PE

PE PE PE PE

PE PE PE PE

PE PE PE PE

Glo

bal

Bu

ffe

r

ALU

Reg File

Processing Element

Mai

n M

em

ory

?Large on-chip buffers:

area inefficiency

Multiple DRAM channels: energy inefficiency

Page 6:

Memory Challenges for Large NNs

State-of-the-art NN accelerator with 400 PEs

o 1.5 MB SRAM buffer → 70% of the area

o 4 LPDDR3 x32 chips → 45% of the power spent in DRAM & SRAM

[Chart: power (W) and peak DRAM bandwidth (GBps) vs. number of PEs, swept from 100 to 400; series: PE/reg dynamic, Buf dynamic, DRAM dynamic, Total static, Peak DRAM bandwidth]

Page 7:

3D Memory + NN Acceleration

Opportunities

o High bandwidth at low access energy

o Abundant parallelism (vaults, banks)

Key questions

o Hardware resource balance

o Software scheduling and workload partitioning

[Figure: Micron's Hybrid Memory Cube: DRAM dies stacked on a logic die and connected by TSVs, organized into vaults (channels) and banks]

Page 8:

TETRIS

NN acceleration with 3D memory

o Improves performance scalability by 4.1x over 2D

o Improves energy efficiency by 1.5x over 2D

Hardware architecture

o Rebalance resources between PEs and buffers

o In-memory accumulation

Software optimizations

o Analytical dataflow scheduling for memory hierarchy

o Hybrid partitioning for parallelism across vaults

[Slide callouts: high performance & low energy; alleviate bandwidth pressure; optimize buffer use; efficient parallel processing]

Page 9:

TETRIS Hardware Architecture

Page 10:

TETRIS Architecture

Associate one NN engine with each vault

o PE array, local register files, and a shared global buffer

NoC + routers for accesses to remote vaults

All vaults can process NN computations in parallel

[Figure: per-vault NN engine on the logic die: a PE array with a shared global buffer, a memory controller to the local vault's DRAM, and a router connecting to remote vaults]

Page 11:

Resource Balancing

Larger PE arrays with smaller SRAM buffers

o High memory bandwidth → more PEs

o Low access energy + sequential access pattern → smaller buffers

[Chart: normalized energy and normalized runtime vs. #PEs / buffer capacity, swept from 36 PEs / 467 kB to 224 PEs / 72 kB; series: PE dynamic, Reg/buf dynamic, DRAM dynamic, Total static, Runtime]

196 PEs with a 133 kB buffer (a 1:1 PE-to-buffer area split) achieve better performance and energy

Page 12:

In-Memory Accumulation

Move simple accumulation logic close to DRAM banks

o 2x bandwidth reduction for output data (see the traffic sketch below)

o See paper for discussion of logic placement in DRAM

[Figure: baseline vs. in-memory accumulation. Baseline: the PE array computes Y += W * X, so each output Y is both read from and written back to memory. TETRIS: the PE array computes only the partial sum ΔY = W * X, and a small adder next to the DRAM banks accumulates it into Y]
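As a rough illustration of the 2x claim, here is a toy traffic model (my own sketch, not from the paper; the names and the pass count are assumptions): if partial output sums are spilled to DRAM over several passes, a read-modify-write at the PE array moves each output twice per pass, while shipping ΔY and accumulating it next to the DRAM banks moves it only once.

def output_dram_traffic(num_outputs, bytes_per_elem, passes, in_memory_accum):
    # Bytes of DRAM traffic for the output stream of one layer.
    # num_outputs     -- number of ofmap elements (Nb * No * So)
    # passes          -- how many times partial sums are spilled to DRAM
    # in_memory_accum -- True if accumulation happens next to the DRAM banks
    if in_memory_accum:
        # PE array only writes partial sums (dY); the memory-side adder folds them in.
        return num_outputs * bytes_per_elem * passes
    # Otherwise every pass is a read-modify-write of the running sum Y.
    return num_outputs * bytes_per_elem * passes * 2

base   = output_dram_traffic(10**6, 2, passes=4, in_memory_accum=False)
tetris = output_dram_traffic(10**6, 2, passes=4, in_memory_accum=True)
print(base / tetris)  # 2x reduction for the output stream, independent of the pass count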

Page 13:

Scheduling and Partitioning for TETRIS

Page 14:

Dataflow Scheduling

Critical for maximizing on-chip data reuse to save energy

foreach b in batch Nb
  foreach ifmap u in Ni
    foreach ofmap v in No
      // 2D conv
      O(v,b) += I(u,b) * W(u,v) + B(v)

Mapping: execute 2D conv on PE array

• Regfiles and array interconnect

• Row stationary [Chen et al., ISCA’16]

Ordering: loop blocking and reordering

• Locality in global buffer

• Non-convex problem; typically solved by exhaustive search (a toy loop-blocking example follows below)
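As a toy illustration of loop blocking (not the paper's actual schedule; the tile size and the choice of which loop to block are made up), the ofmap loop of the CONV nest can be split into an outer loop over buffer-sized tiles and an inner loop that reuses data within each tile.

def blocked_ofmap_order(No, tile):
    # Toy loop blocking: visit ofmaps tile by tile, so each tile's working set
    # can stay resident in the global buffer while it is being reused.
    order = []
    for v0 in range(0, No, tile):                 # outer loop: one buffer fill per tile
        for v in range(v0, min(v0 + tile, No)):   # inner loop: reuse within the tile
            order.append(v)
    return order

# Reordering means choosing which of the batch/ifmap/ofmap loops is blocked and
# which is placed outermost; the scheduler searches for the combination with the
# most on-chip reuse.
assert blocked_ofmap_order(No=8, tile=3) == list(range(8))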

Page 15:

TETRIS Bypass Ordering

Limited reuse opportunities with small buffers

IW bypass, OW bypass, IO bypass

o Use the global buffer for only one stream, to maximize its reuse

o Bypass the buffer for the other two streams, sacrificing their reuse

[Figure: OW bypass ordering: ifmaps are split into chunks (chunk 0, 1, 2, ...) staged in the global buffer, while ofmaps and filters bypass the buffer and stream from off-chip directly to the register files]

OW bypass ordering (sketched in Python below):
1. Read one ifmap chunk into the global buffer
2. Stream ofmaps and filters to the register files
3. Move ifmaps from the global buffer to the register files
4. Convolve
5. Jump to step 2
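A minimal Python sketch of this loop structure, assuming a single image of the batch, a plain conv2d helper, and chunking only along the ifmap dimension (the real schedule also tiles the batch and relies on the register-file mapping):

import numpy as np

def conv2d(x, w):
    # Plain 2D convolution in cross-correlation form (valid padding, stride 1).
    K = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for y in range(H - K + 1):
        for xx in range(W - K + 1):
            out[y, xx] = np.sum(x[y:y + K, xx:xx + K] * w)
    return out

def ow_bypass_conv(ifmaps, filters, num_chunks):
    # OW bypass ordering sketch: only ifmaps are staged in the "global buffer".
    # ifmaps:  [Ni, H, W]      (one image of the batch, for simplicity)
    # filters: [Ni, No, K, K]
    Ni, H, W = ifmaps.shape
    _, No, K, _ = filters.shape
    ofmaps = np.zeros((No, H - K + 1, W - K + 1))
    for chunk in np.array_split(np.arange(Ni), num_chunks):
        gbuf = ifmaps[chunk]                                 # 1. read one ifmap chunk into the global buffer
        for v in range(No):                                  # 2. stream ofmaps and filters past the buffer
            for i, u in enumerate(chunk):                    # 3. move ifmaps from gbuf to the register files
                ofmaps[v] += conv2d(gbuf[i], filters[u, v])  # 4. convolve (accumulate partial ofmaps)
        # 5. jump back to step 2 for the next ifmap chunk
    return ofmaps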

Page 16:

TETRIS Bypass Ordering

Analytically derived

o Closed-form solution

o No need for exhaustive search

Near-optimal schedules

o Within 2% of schedules derived with exhaustive search

min A_DRAM = 2 · Nb·No·So · ti + Nb·Ni·Si + No·Ni·Sw · tb

s.t. (Nb / tb) × (Ni / ti) × Si ≤ S_buf,   1 ≤ tb ≤ Nb,   1 ≤ ti ≤ Ni
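To make the model concrete, the sketch below simply evaluates A_DRAM over all integer (tb, ti) choices and keeps the feasible minimum. The paper gives a closed-form optimum, so this brute-force search is only for illustration, and the example layer sizes are made up.

def ow_bypass_dram_accesses(Nb, Ni, No, Si, So, Sw, S_buf):
    # Minimize A_DRAM = 2*Nb*No*So*ti + Nb*Ni*Si + No*Ni*Sw*tb
    # subject to (Nb/tb) * (Ni/ti) * Si <= S_buf, 1 <= tb <= Nb, 1 <= ti <= Ni.
    best = None
    for tb in range(1, Nb + 1):
        for ti in range(1, Ni + 1):
            if (Nb / tb) * (Ni / ti) * Si > S_buf:
                continue  # this ifmap chunk does not fit in the global buffer
            a_dram = 2 * Nb * No * So * ti + Nb * Ni * Si + No * Ni * Sw * tb
            if best is None or a_dram < best[0]:
                best = (a_dram, tb, ti)
    return best

# Example with made-up layer sizes (fmap/filter sizes Si, So, Sw in bytes):
print(ow_bypass_dram_accesses(Nb=16, Ni=64, No=128, Si=4096, So=4096, Sw=18, S_buf=133 * 1024))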

NN        Runtime gap (w.r.t. optimal)   Energy gap (w.r.t. optimal)
AlexNet   1.48 %                         1.86 %
ZFNet     1.55 %                         1.83 %
VGG16     0.16 %                         0.20 %
VGG19     0.13 %                         0.16 %
ResNet    2.91 %                         0.78 %

Page 17:

NN Partitioning

Option 1: fmap partitioning

o Divide a fmap into tiles

o Each vault processes one tile

o Minimum remote accesses

[Figure: fmap partitioning: the fmaps of layer i and layer i+1 are each divided into 2x2 tiles, with Vaults 0-3 each processing one tile]

Process NN computations in parallel in all vaults

Page 18:

NN Partitioning

Option 2: output partitioning

o Partition all ofmaps into groups

o Each vault processes one group

o Better filter weight reuse

o Fewer total memory accesses

[Figure: output partitioning: the ofmaps of each layer are divided into groups, with Vaults 0-3 each producing one group]

Process NN computations in parallel in all vaults

Page 19:

TETRIS Hybrid Partitioning

Combine fmap partitioning and output partitioning

o Balance between minimizing remote accesses and total DRAM accesses

o Total energy = NoC energy + DRAM energy

Difficulties

o Design space is exponential in the number of layers

A greedy algorithm reduces this to linear in the number of layers

o Determining total DRAM accesses requires complex dataflow scheduling

Bypass ordering gives a quick estimate of total DRAM accesses (see the sketch below)

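A minimal sketch of such a greedy pass; the candidate (fmap, output) splits and the two energy-estimator callbacks are placeholders standing in for the paper's NoC and DRAM cost models (the DRAM estimate would come from bypass ordering).

def candidate_schemes(num_vaults):
    # All (fmap_tiles, output_groups) splits whose product uses every vault.
    return [(f, num_vaults // f) for f in range(1, num_vaults + 1) if num_vaults % f == 0]

def greedy_hybrid_partition(layers, num_vaults, dram_energy, noc_energy):
    # Layer-by-layer greedy choice of the (fmap, output) split with the lowest
    # estimated total energy (NoC + DRAM), instead of searching the exponential
    # space of all per-layer combinations.
    #   dram_energy(layer, scheme) -- DRAM energy estimate (e.g. from bypass ordering)
    #   noc_energy(layer, scheme)  -- NoC energy estimate for remote-vault accesses
    plan = []
    for layer in layers:
        best = min(candidate_schemes(num_vaults),
                   key=lambda s: dram_energy(layer, s) + noc_energy(layer, s))
        plan.append((layer, best))
    return plan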

Page 20:

TETRIS Evaluation

Page 21:

Methodology

State-of-the-art NNs

o AlexNet, ZFNet, VGG16, VGG19, ResNet

o 100–300 MB total memory footprint for each NN

o Up to 152 layers in ResNet

2D and 3D accelerators with one or more NN engines (both engine configurations are summarized in the sketch below)

o 2D engine: 16 x 16 PEs, 576 kB buffer, 1 LPDDR3 channel

• 8.5 mm2, 51.2 Gops/sec

• Bandwidth-constrained

o 3D engine: 14 x 14 PEs, 133 kB buffer, 1 HMC vault

• 3.5 mm2, 39.2 Gops/sec

• Area-constrained
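For reference, here are the two engine configurations collected into a small Python structure; the numbers come from this slide, while the dataclass itself is only a convenient, assumed way to show them side by side.

from dataclasses import dataclass

@dataclass
class EngineConfig:
    pe_array: tuple    # PE array dimensions
    buffer_kb: int     # global buffer capacity (kB)
    memory: str        # memory channel type
    area_mm2: float    # engine area (mm^2)
    gops: float        # peak throughput (Gops/sec)

ENGINE_2D = EngineConfig((16, 16), 576, "1 LPDDR3 channel", 8.5, 51.2)  # bandwidth-constrained
ENGINE_3D = EngineConfig((14, 14), 133, "1 HMC vault", 3.5, 39.2)       # area-constrained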

Page 22:

Single-engine Comparison

Up to 37% performance improvement with TETRIS

o Due to higher bandwidth despite smaller PE array

[Chart: normalized runtime of 2D vs. 3D single-engine configurations for AlexNet, ZFNet, VGG16, VGG19, and ResNet]

Large NNs benefit more!

Page 23:

Single-engine Comparison

35–40% energy reduction with TETRIS

o Smaller on-chip buffer, better scheduling

[Chart: normalized energy breakdown (PE dynamic, Reg/buf dynamic, DRAM dynamic, NoC dynamic, Total static) of 2D vs. 3D single-engine configurations for AlexNet, ZFNet, VGG16, VGG19, and ResNet; the savings come from SRAM, DRAM, and static energy]

Page 24:

Multi-engine Comparison

4 2D engines: 34 mm2, pin constrained (4 LPDDR3 channels)

16 3D engines: 56 mm2, area constrained (16 HMC vaults)

4.1x performance gain at only 2x the compute density

[Chart: normalized runtime of 2D-4 vs. 3D-16 configurations for AlexNet, ZFNet, VGG16, VGG19, and ResNet]

Page 25:

Multi-Engine Comparison

1.5x lower energy

o 1.2x from better scheduling and partitioning

4x the computation costs only 2.7x the power

[Chart: normalized energy breakdown (PE dynamic, Reg/buf dynamic, DRAM dynamic, NoC dynamic, Total static) of 2D-4 vs. 3D-16 configurations for AlexNet, ZFNet, VGG16, VGG19, and ResNet]

Page 26:

TETRIS Summary

A scalable and efficient NN accelerator using 3D memory
o 4.1x performance and 1.5x energy benefits over the 2D baseline

Hardware features
o PE/buffer area rebalancing

o In-memory accumulation

Software features
o Analytical dataflow scheduling

o Hybrid partitioning

Scheduling exploration tool
o https://github.com/stanford-mast/nn_dataflow


Page 27:

Thanks! Questions?