
March 16, 2010 CS152, Spring 2010

CS 152 Computer Architecture and Engineering

Lecture 15 - VLIW Machines and Statically Scheduled ILP

Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley

http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152

March 16, 2010 CS152, Spring 2010 2

Last time in Lecture 14

• BTB allows prediction very early in pipeline
  – Also handles jump register, although return address stack handles subroutine returns better

• Unified physical register file machines remove data values from ROB
  – All values only read and written during execution
  – Only register tags held in ROB
  – Allocate resources (ROB slot, destination physical register, memory reorder queue location) during decode
  – Issue window can be separated from ROB and made smaller than ROB (allocate in decode, free after instruction completes)
  – Free resources on commit

• Speculative store buffer holds store values before commit to allow load-store forwarding

• Can execute later loads past earlier stores when addresses known, or predicted no dependence

March 16, 2010 CS152, Spring 2010 3

Datapath: Branch Prediction and Speculative Execution

[Figure: pipeline datapath — PC and Branch Prediction feed Fetch, then Decode & Rename, the Reorder Buffer, and Execute (Branch Unit, ALU, Reg. File, MEM/Store Buffer, D$), ending at Commit; branch resolution kills wrong-path instructions and updates the predictors.]

March 16, 2010 CS152, Spring 2010 4

Speculative Store Buffer

• On store execute:
  – mark entry valid and speculative, and save data and tag of instruction
• On store commit:
  – clear speculative bit and eventually move data to cache
• On store abort:
  – clear valid bit

[Figure: speculative store buffer alongside the L1 data cache — each entry holds Speculative (S) and Valid (V) bits, a tag, and data; a load address is presented to both the buffer tags and the cache, and stores drain to the cache over the store commit path.]

March 16, 2010 CS152, Spring 2010 5

Speculative Store Buffer

• If data in both store buffer and cache, which should we use?
• If same address in store buffer twice, which should we use?

[Figure: same speculative store buffer and L1 data cache datapath as the previous slide.]
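The answers amount to a priority rule: a load should take its value from the store buffer in preference to the cache, and from the youngest matching store if the address appears more than once. Below is a minimal C sketch of that lookup (the struct layout, entry count, and function name are illustrative assumptions, not the lecture's hardware):

    #include <stdbool.h>
    #include <stdint.h>

    #define SB_ENTRIES 8

    // One store buffer entry: speculative (S) and valid (V) bits, tag, address, data.
    typedef struct {
        bool     valid;
        bool     speculative;
        uint32_t tag;        // identifies the store instruction (e.g., ROB tag)
        uint64_t addr;
        uint64_t data;
    } SBEntry;

    // Entries are assumed to be allocated in program order: index 0 is the oldest.
    // Searching from the youngest entry answers "same address twice"; returning
    // before touching the cache answers "store buffer vs. cache".
    bool sb_load_lookup(const SBEntry sb[SB_ENTRIES], uint64_t load_addr, uint64_t *data_out) {
        for (int i = SB_ENTRIES - 1; i >= 0; i--) {        // youngest first
            if (sb[i].valid && sb[i].addr == load_addr) {
                *data_out = sb[i].data;                     // forward the store's data
                return true;                                // do not use the cache value
            }
        }
        return false;                                       // miss: read the L1 data cache
    }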

March 16, 2010 CS152, Spring 2010 6

Instruction Flow in Unified Physical Register File Pipeline

• Fetch
  – Get instruction bits from current guess at PC, place in fetch buffer
  – Update PC using sequential address or branch predictor (BTB)

• Decode/Rename
  – Take instruction from fetch buffer
  – Allocate resources to execute instruction (see the sketch below):
    » Destination physical register, if instruction writes a register
    » Entry in reorder buffer to provide in-order commit
    » Entry in issue window to wait for execution
    » Entry in memory buffer, if load or store
  – Decode will stall if resources not available
  – Rename source and destination registers
  – Check source registers for readiness
  – Insert instruction into issue window + reorder buffer + memory buffer

March 16, 2010 CS152, Spring 2010 7

Memory Instructions

• Split store instruction into two pieces during decode:
  – Address calculation, store-address
  – Data movement, store-data
• Allocate space in program order in memory buffers during decode
• Store instructions:
  – Store-address calculates address and places it in store buffer
  – Store-data copies store value into store buffer
  – Store-address and store-data execute independently out of issue window
  – Stores only commit to data cache at commit point
• Load instructions:
  – Load address calculation executes from window
  – Load with completed effective address searches memory buffer
  – Load instruction may have to wait in memory buffer for earlier store ops to resolve

March 16, 2010 CS152, Spring 2010 8

Issue Stage

• Writebacks from completion phase “wakeup” some instructions by causing their source operands to become ready in issue window

– In more speculative machines, might wake up waiting loads in memory buffer

• Need to “select” some instructions for issue
  – Arbiter picks a subset of ready instructions for execution
  – Example policies: random, lower-first, oldest-first, critical-first (oldest-first is sketched below)

• Instructions read out from issue window and sent to execution
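To make the select step concrete, the following C sketch picks up to W ready instructions from the window using an oldest-first policy (the data structure, WINDOW_SIZE, and the age field are illustrative assumptions; real select logic is a hardware arbiter, not a software loop):

    #include <stdbool.h>

    #define WINDOW_SIZE 32

    typedef struct {
        bool valid;
        bool src_ready[2];   // set by wakeup broadcasts at writeback/completion
        int  age;            // smaller = older (allocated earlier in decode)
    } IWEntry;

    // Oldest-first select: fills out[] with the chosen window indices and
    // returns how many instructions were picked (at most issue_width).
    int select_oldest_first(IWEntry win[WINDOW_SIZE], int issue_width, int out[]) {
        int picked = 0;
        while (picked < issue_width) {
            int best = -1;
            for (int i = 0; i < WINDOW_SIZE; i++) {
                bool ready = win[i].valid && win[i].src_ready[0] && win[i].src_ready[1];
                if (ready && (best < 0 || win[i].age < win[best].age))
                    best = i;
            }
            if (best < 0) break;        // no more ready instructions this cycle
            win[best].valid = false;    // read out and sent to execution
            out[picked++] = best;
        }
        return picked;
    }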

March 16, 2010 CS152, Spring 2010 9

Execute Stage

• Read operands from physical register file and/or bypass network from other functional units

• Execute on functional unit
• Write result value to physical register file (or store buffer if store)
• Produce exception status, write to reorder buffer
• Free slot in instruction window

March 16, 2010 CS152, Spring 2010 10

Commit Stage

• Read completed instructions in-order from reorder buffer
  – (may need to wait for next oldest instruction to complete)
• If exception raised
  – flush pipeline, jump to exception handler
• Otherwise, release resources:
  – Free physical register used by last writer to same architectural register
  – Free reorder buffer slot
  – Free memory reorder buffer slot

March 16, 2010 CS152, Spring 2010 11

Superscalar Control Logic Scaling

• Each issued instruction must somehow check against W*L instructions, i.e., hardware grows as W*(W*L)

• For in-order machines, L is related to pipeline latencies and check is done during issue (interlocks or scoreboard)

• For out-of-order machines, L also includes time spent in instruction buffers (instruction window or ROB), and check is done by broadcasting tags to waiting instructions at write back (completion)

• As W increases, larger instruction window is needed to find enough parallelism to keep machine busy => greater L

=> Out-of-order control logic grows faster than W^2 (~W^3)

[Figure: an issue group of width W being checked against the previously issued instructions still in flight over lifetime L.]
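One way to see the growth rate is to count comparators directly: every one of the W instructions issued per cycle must be checked against roughly W*L in-flight instructions. The small C sketch below does exactly that count; the assumption that L scales linearly with W (L = 4*W here) is only to illustrate the ~W^3 trend, not a measured relationship.

    #include <stdio.h>

    // Each of W newly issued instructions checks against ~W*L in-flight
    // instructions, so the number of tag comparators is W * (W * L).
    long comparisons_per_cycle(long W, long L) {
        long count = 0;
        for (long issued = 0; issued < W; issued++)
            for (long in_flight = 0; in_flight < W * L; in_flight++)
                count++;                 // one tag comparison
        return count;
    }

    int main(void) {
        for (long W = 1; W <= 8; W *= 2) {
            long L = 4 * W;              // assumed: window lifetime grows with width
            printf("W=%ld L=%ld -> %ld comparators\n", W, L, comparisons_per_cycle(W, L));
        }
        return 0;
    }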

March 16, 2010 CS152, Spring 2010 12

Out-of-Order Control Complexity: MIPS R10000

[Figure: MIPS R10000 die photo with the control logic region highlighted.]

[ SGI/MIPS Technologies Inc., 1995 ]

March 16, 2010 CS152, Spring 2010 13

Sequential ISA Bottleneck

[Figure: sequential source code (e.g., "a = foo(b); for (i=0, i<…") is fed to a superscalar compiler, which finds independent operations and schedules them but must still emit sequential machine code; the superscalar processor then has to re-check instruction dependencies and schedule execution again at run time.]

March 16, 2010 CS152, Spring 2010 14

VLIW: Very Long Instruction Word

• Multiple operations packed into one instruction
• Each operation slot is for a fixed function
• Constant operation latencies are specified
• Architecture requires guarantee of:
  – Parallelism within an instruction => no cross-operation RAW check
  – No data use before data ready => no data interlocks

[Figure: example instruction format — | Int Op 1 | Int Op 2 | Mem Op 1 | Mem Op 2 | FP Op 1 | FP Op 2 | — with two integer units (single-cycle latency), two load/store units (three-cycle latency), and two floating-point units (four-cycle latency).]
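A hedged C sketch of what one such instruction might look like as a data structure: six fixed-function slots that all issue together, each possibly a NOP. The field names and encoding are made up for illustration; they are not any particular machine's format.

    #include <stdint.h>

    // One operation slot; opcode 0 is taken to mean NOP in this sketch.
    typedef struct {
        uint8_t opcode;
        uint8_t dst, src1, src2;
    } Op;

    // One VLIW instruction: the compiler guarantees the six operations are
    // independent, so hardware does no cross-operation RAW checking.
    typedef struct {
        Op int_op[2];   // two integer slots,        single-cycle latency
        Op mem_op[2];   // two load/store slots,     three-cycle latency
        Op fp_op[2];    // two floating-point slots, four-cycle latency
    } VLIWInst;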

March 16, 2010 CS152, Spring 2010 15

VLIW Compiler Responsibilities

• Schedules to maximize parallel execution

• Guarantees intra-instruction parallelism

• Schedules to avoid data hazards (no interlocks)
  – Typically separates operations with explicit NOPs

March 16, 2010 CS152, Spring 2010 16

Early VLIW Machines

• FPS AP120B (1976)
  – scientific attached array processor
  – first commercial wide instruction machine
  – hand-coded vector math libraries using software pipelining and loop unrolling

• Multiflow Trace (1987)
  – commercialization of ideas from Fisher’s Yale group including “trace scheduling”
  – available in configurations with 7, 14, or 28 operations/instruction
  – 28 operations packed into a 1024-bit instruction word

• Cydrome Cydra-5 (1987)
  – 7 operations encoded in 256-bit instruction word
  – rotating register file

March 16, 2010 CS152, Spring 2010 17

Loop Execution

for (i=0; i<N; i++)
    B[i] = A[i] + C;

Compile:

loop: ld   f1, 0(r1)
      add  r1, 8
      fadd f2, f0, f1
      sd   f2, 0(r2)
      add  r2, 8
      bne  r1, r3, loop

Schedule onto the VLIW slots (Int1 | Int2 | M1 | M2 | FP+ | FPx): ld and add r1 issue together; fadd must wait out the load latency; sd, add r2, and bne issue together after the fadd latency.

How many FP ops/cycle?

March 16, 2010 CS152, Spring 2010 18

Loop Unrolling

for (i=0; i<N; i++)
    B[i] = A[i] + C;

Unroll 4 ways:

for (i=0; i<N; i+=4) {
    B[i]   = A[i]   + C;
    B[i+1] = A[i+1] + C;
    B[i+2] = A[i+2] + C;
    B[i+3] = A[i+3] + C;
}

Unroll inner loop to perform 4 iterations at once

Need to handle values of N that are not multiples of unrolling factor with final cleanup loop
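For instance, the full routine with the final cleanup loop might look like the C below (a sketch; the function name and signature are made up for illustration):

    // Unrolled 4 ways, with a cleanup loop for N not a multiple of 4.
    void add_const_unrolled(double *B, const double *A, double C, int N) {
        int i;
        for (i = 0; i + 3 < N; i += 4) {
            B[i]     = A[i]     + C;
            B[i + 1] = A[i + 1] + C;
            B[i + 2] = A[i + 2] + C;
            B[i + 3] = A[i + 3] + C;
        }
        for (; i < N; i++)       // cleanup: remaining 0-3 iterations
            B[i] = A[i] + C;
    }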

March 16, 2010 CS152, Spring 2010 19

Scheduling Loop Unrolled Code

Unroll 4 ways:

loop: ld   f1, 0(r1)
      ld   f2, 8(r1)
      ld   f3, 16(r1)
      ld   f4, 24(r1)
      add  r1, 32
      fadd f5, f0, f1
      fadd f6, f0, f2
      fadd f7, f0, f3
      fadd f8, f0, f4
      sd   f5, 0(r2)
      sd   f6, 8(r2)
      sd   f7, 16(r2)
      sd   f8, 24(r2)
      add  r2, 32
      bne  r1, r3, loop

Schedule (Int1 | Int2 | M1 | M2 | FP+ | FPx): the loads pair up in the two memory slots (ld f1/ld f2, then ld f3/ld f4 alongside add r1); the four fadds follow once the loads complete; the stores pair up (sd f5/sd f6, then sd f7/sd f8 alongside add r2 and bne).

How many FLOPS/cycle?

March 16, 2010 CS152, Spring 2010 20

Software Pipelining

Unroll 4 ways first:

loop: ld   f1, 0(r1)
      ld   f2, 8(r1)
      ld   f3, 16(r1)
      ld   f4, 24(r1)
      add  r1, 32
      fadd f5, f0, f1
      fadd f6, f0, f2
      fadd f7, f0, f3
      fadd f8, f0, f4
      sd   f5, 0(r2)
      sd   f6, 8(r2)
      sd   f7, 16(r2)
      add  r2, 32
      sd   f8, -8(r2)
      bne  r1, r3, loop

Schedule (Int1 | Int2 | M1 | M2 | FP+ | FPx): a prolog issues the first iterations' loads and fadds; the steady-state kernel (loop: … iterate) overlaps the loads of one iteration, the fadds of the previous iteration, and the stores of the one before that, together with add r1, add r2, and bne; an epilog drains the remaining fadds and stores.

How many FLOPS/cycle?

March 16, 2010 CS152, Spring 2010 21

Software Pipelining vs. Loop Unrolling

[Figure: two performance-vs-time plots — the loop-unrolled version pays startup and wind-down overhead on every loop iteration, while the software-pipelined version pays startup and wind-down overhead only once for the whole loop.]

Software pipelining pays startup/wind-down costs only once per loop, not once per iteration
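At the C level, the shape of a software-pipelined loop is roughly the sketch below: a prolog fills the pipeline, the steady-state kernel overlaps the load, add, and store of three different iterations, and an epilog drains the last iterations. This is a simplified illustration with made-up names, not the lecture's exact VLIW schedule.

    // Software-pipelined B[i] = A[i] + C with three stages: load, add, store.
    void add_const_swp(double *B, const double *A, double C, int N) {
        if (N < 2) {                       // too short to pipeline: do it directly
            for (int i = 0; i < N; i++) B[i] = A[i] + C;
            return;
        }
        double a = A[0];                   // prolog: load for iteration 0
        double s = a + C;                  //         add  for iteration 0
        a = A[1];                          //         load for iteration 1

        for (int i = 0; i + 2 < N; i++) {  // kernel: three iterations in flight
            B[i] = s;                      //   store for iteration i
            s    = a + C;                  //   add   for iteration i+1
            a    = A[i + 2];               //   load  for iteration i+2
        }
        B[N - 2] = s;                      // epilog: finish iteration N-2
        B[N - 1] = a + C;                  //         finish iteration N-1
    }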

March 16, 2010 CS152, Spring 2010 22

CS152 Administrivia

• Quiz 3, Tuesday March 30 (first class back after spring break)

– All material on complex pipelining (L12-L14, plus review at start of L15), PS3, Lab 3

March 16, 2010 CS152, Spring 2010 23

What if there are no loops?

• Branches limit basic block size in control-flow intensive irregular code

• Difficult to find ILP in individual basic blocks

[Figure: control-flow graph carved into small basic blocks by branches.]

March 16, 2010 CS152, Spring 2010 24

Trace Scheduling [Fisher, Ellis]

• Pick string of basic blocks, a trace, that represents most frequent branch path

• Use profiling feedback or compiler heuristics to find common branch paths

• Schedule whole “trace” at once
• Add fixup code to cope with branches jumping out of trace

March 16, 2010 CS152, Spring 2010 25

Problems with “Classic” VLIW

• Object-code compatibility
  – have to recompile all code for every machine, even for two machines in same generation
• Object code size
  – instruction padding wastes instruction memory/cache
  – loop unrolling/software pipelining replicates code
• Scheduling variable latency memory operations
  – caches and/or memory bank conflicts impose statically unpredictable variability
• Knowing branch probabilities
  – Profiling requires a significant extra step in build process
• Scheduling for statically unpredictable branches
  – optimal schedule varies with branch path

March 16, 2010 CS152, Spring 2010 26

VLIW Instruction Encoding

• Schemes to reduce effect of unused fields
  – Compressed format in memory, expand on I-cache refill
    » used in Multiflow Trace
    » introduces instruction addressing challenge
  – Mark parallel groups
    » used in TMS320C6x DSPs, Intel IA-64
  – Provide a single-op VLIW instruction
    » Cydra-5 UniOp instructions

[Figure: a stream of operation slots with markers delimiting Group 1, Group 2, and Group 3 of parallel operations.]

March 16, 2010 CS152, Spring 2010 27

Rotating Register Files

Problems: Scheduled loops require lots of registers; lots of duplicated code in prolog and epilog

Solution: Allocate new set of registers for each loop iteration

March 16, 2010 CS152, Spring 2010 28

Rotating Register File

[Figure: physical registers P0–P7 with RRB=3; logical register specifier R1 is added to RRB to select the physical register.]

Rotating Register Base (RRB) register points to base of current register set. Value added on to logical register specifier to give physical register number. Usually, split into rotating and non-rotating registers.
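A tiny C sketch of that mapping, using the eight-register rotating file from the figure above (the function name is made up; real machines apply this only to the rotating half of the register file):

    #define NUM_ROT_REGS 8   // size of the rotating region, as in the P0..P7 figure

    // Physical register = (logical specifier + RRB) mod rotating-region size.
    int map_rotating_register(int logical, int rrb) {
        return (logical + rrb) % NUM_ROT_REGS;
    }

With RRB=3, logical register R1 maps to P4, as in the figure; decrementing RRB on each loop-closing branch gives every iteration its own copy of the rotating registers without copying any values.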

March 16, 2010 CS152, Spring 2010 29

Rotating Register File (Previous Loop Example)

Software-pipelined loop kernel (one instruction): ld f1, () | fadd f5, f4, ... | sd f9, () | bloop

Three-cycle load latency encoded as difference of 3 in register specifier number (f4 - f1 = 3).
Four-cycle fadd latency encoded as difference of 4 in register specifier number (f9 - f5 = 4).

Dynamic instances as RRB decrements each iteration:

RRB=8:  ld P9, () | fadd P13, P12, ... | sd P17, () | bloop
RRB=7:  ld P8, () | fadd P12, P11, ... | sd P16, () | bloop
RRB=6:  ld P7, () | fadd P11, P10, ... | sd P15, () | bloop
RRB=5:  ld P6, () | fadd P10, P9, ...  | sd P14, () | bloop
RRB=4:  ld P5, () | fadd P9, P8, ...   | sd P13, () | bloop
RRB=3:  ld P4, () | fadd P8, P7, ...   | sd P12, () | bloop
RRB=2:  ld P3, () | fadd P7, P6, ...   | sd P11, () | bloop
RRB=1:  ld P2, () | fadd P6, P5, ...   | sd P10, () | bloop

March 16, 2010 CS152, Spring 2010 30

Cydra-5:Memory Latency Register (MLR)

Problem: Loads have variable latency
Solution: Let software choose desired memory latency

• Compiler schedules code for maximum load-use distance
• Software sets MLR to latency that matches code schedule
• Hardware ensures that loads take exactly MLR cycles to return values into processor pipeline
  – Hardware buffers loads that return early
  – Hardware stalls processor if loads return late

March 16, 2010 CS152, Spring 2010 31

Intel EPIC IA-64

• EPIC is the style of architecture (cf. CISC, RISC)
  – Explicitly Parallel Instruction Computing
• IA-64 is Intel’s chosen ISA (cf. x86, MIPS)
  – IA-64 = Intel Architecture 64-bit
  – An object-code compatible VLIW
• Itanium (aka Merced) is first implementation (cf. 8086)
  – First customer shipment expected 1997 (actually 2001)
  – McKinley, second implementation shipped in 2002
  – Recent version, Tukwila 2008, quad cores, 65nm (not shipping until 2010?)

March 16, 2010 CS152, Spring 2010 32

Quad Core Itanium “Tukwila” [Intel 2008]

• 4 cores
• 6MB $/core, 24MB $ total
• ~2.0 GHz
• 698 mm² in 65nm CMOS!!!!!
• 170W
• Over 2 billion transistors

March 16, 2010 CS152, Spring 2010 33

IA-64 Instruction Format

• Template bits describe grouping of these instructions with others in adjacent bundles

• Each group contains instructions that can execute in parallel

[Figure: a 128-bit instruction bundle — | Instruction 2 | Instruction 1 | Instruction 0 | Template | — and a stream of bundles (bundle j-1, j, j+1, j+2) carved into parallel groups (group i-1, i, i+1, i+2) that can span bundle boundaries.]
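As a hedged sketch of the bundle layout, the C below unpacks a 128-bit bundle into its 5-bit template and three 41-bit instruction slots (the struct, the helper names, and the use of the GCC/Clang __uint128_t extension are assumptions for illustration, not Intel's decoder):

    #include <stdint.h>

    // One 128-bit bundle: a 5-bit template plus three 41-bit instruction slots.
    typedef struct {
        uint64_t lo, hi;   // bits 0-63 and 64-127 of the bundle
    } Bundle;

    // Extract `len` bits starting at bit `start` of the 128-bit value hi:lo.
    static uint64_t bits(uint64_t lo, uint64_t hi, int start, int len) {
        __uint128_t v = ((__uint128_t)hi << 64) | lo;
        return (uint64_t)((v >> start) & ((((__uint128_t)1) << len) - 1));
    }

    uint64_t bundle_template(const Bundle *b) { return bits(b->lo, b->hi, 0, 5); }

    uint64_t bundle_slot(const Bundle *b, int i) {   // i = 0, 1, or 2
        return bits(b->lo, b->hi, 5 + 41 * i, 41);   // each slot is 41 bits wide
    }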

March 16, 2010 CS152, Spring 2010 34

IA-64 Registers

• 128 General Purpose 64-bit Integer Registers
• 128 General Purpose 64/80-bit Floating Point Registers
• 64 1-bit Predicate Registers

• GPRs rotate to reduce code size for software pipelined loops

March 16, 2010 CS152, Spring 2010 35

IA-64 Predicated Execution

Problem: Mispredicted branches limit ILP
Solution: Eliminate hard-to-predict branches with predicated execution
  – Almost all IA-64 instructions can be executed conditionally under predicate
  – Instruction becomes NOP if predicate register false

Before (four basic blocks):

b0: Inst 1
    Inst 2
    br a==b, b2
b1: Inst 3
    Inst 4
    br b3
b2: Inst 5
    Inst 6
b3: Inst 7
    Inst 8

After predication (one basic block):

    Inst 1
    Inst 2
    p1,p2 <- cmp(a==b)
    (p1) Inst 3 || (p2) Inst 5
    (p1) Inst 4 || (p2) Inst 6
    Inst 7
    Inst 8

Mahlke et al, ISCA95: On average >50% branches removed
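In C terms, if-conversion replaces the branchy control flow with straight-line code that computes both sides under predicates and selects the result. The example below is an illustrative analogue (made-up function and variable names), not IA-64 code:

    // Branchy form: a hard-to-predict branch separates two small blocks.
    int branchy(int a, int b, int x, int y) {
        int r;
        if (a == b) r = x + 1;
        else        r = y - 1;
        return r * 2;
    }

    // If-converted form: one basic block, no branch.
    int predicated(int a, int b, int x, int y) {
        int p  = (a == b);            // p1,p2 <- cmp(a==b); here p2 is just !p
        int t1 = x + 1;               // executed unconditionally, "under p1"
        int t2 = y - 1;               // executed unconditionally, "under p2"
        int r  = p ? t1 : t2;         // select (a conditional move, not a branch)
        return r * 2;
    }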

March 16, 2010 CS152, Spring 2010 36

Predicate Software Pipeline Stages

Single VLIW instruction:  (p1) ld r1 | (p2) add r3 | (p3) st r4 | (p1) bloop

Software pipeline stages turned on by rotating predicate registers => much denser encoding of loops.

[Figure: dynamic execution — across successive cycles, the (p1) ld r1 stage runs first, the (p2) add r3 stage turns on a few cycles later, then the (p3) st r4 stage; (p1) bloop runs every cycle, and the stages turn off in the same staggered order as the loop drains.]

March 16, 2010 CS152, Spring 2010 37

Fully Bypassed Datapath

[Figure: the classic fully bypassed five-stage datapath — PC, instruction memory, GPRs, ALU, immediate extender, and data memory — with bypass muxes ASrc/BSrc, stall/nop insertion, and pipeline registers for the D, E, M, and W stages.]

Where does predication fit in?

March 16, 2010 CS152, Spring 2010 38

IA-64 Speculative Execution

Problem: Branches restrict compiler code motion
Solution: Speculative operations that don’t cause exceptions

Original code:
    Inst 1
    Inst 2
    br a==b, b2
    Load r1
    Use r1
    Inst 3
Can’t move the load above the branch because it might cause a spurious exception.

With a speculative load:
    Load.s r1
    Inst 1
    Inst 2
    br a==b, b2
    Chk.s r1
    Use r1
    Inst 3
The speculative load never causes an exception, but sets a “poison” bit on the destination register. The check for an exception stays in the original home block and jumps to fixup code if an exception is detected. Particularly useful for scheduling long-latency loads early.

March 16, 2010 CS152, Spring 2010 39

IA-64 Data Speculation

Problem: Possible memory hazards limit code scheduling
Solution: Hardware to check pointer hazards

Original code:
    Inst 1
    Inst 2
    Store
    Load r1
    Use r1
    Inst 3
Can’t move the load above the store because the store might be to the same address.

With an advanced (data-speculative) load:
    Load.a r1
    Inst 1
    Inst 2
    Store
    Load.c
    Use r1
    Inst 3
The data-speculative load adds its address to an address check table; a store invalidates any matching loads in the table; the check tests whether the load is invalid (or missing) and jumps to fixup code if so. Requires associative hardware in the address check table.
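A minimal C sketch of the address check table behavior (modeled loosely on IA-64's ALAT; the table size, struct fields, and function names are assumptions for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    #define ACT_ENTRIES 16

    typedef struct {
        bool     valid;
        int      dest_reg;   // register written by the advanced load
        uint64_t addr;       // address the advanced load read
    } ACTEntry;

    // Load.a: record the advanced load's address and destination register.
    void act_advanced_load(ACTEntry t[], int dest_reg, uint64_t addr) {
        for (int i = 0; i < ACT_ENTRIES; i++)
            if (!t[i].valid) { t[i] = (ACTEntry){ true, dest_reg, addr }; return; }
        // Table full: simplest policy is to drop the entry, forcing a re-load later.
    }

    // Store: invalidate any matching entries (the associative search mentioned above).
    void act_store(ACTEntry t[], uint64_t addr) {
        for (int i = 0; i < ACT_ENTRIES; i++)
            if (t[i].valid && t[i].addr == addr) t[i].valid = false;
    }

    // Load.c / check: true if the speculated value is still good;
    // false means the entry is missing or was invalidated, so run the fixup code.
    bool act_check(const ACTEntry t[], int dest_reg) {
        for (int i = 0; i < ACT_ENTRIES; i++)
            if (t[i].valid && t[i].dest_reg == dest_reg) return true;
        return false;
    }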

March 16, 2010 CS152, Spring 2010 40

Clustered VLIW

• Divide machine into clusters of local register files and local functional units
• Lower bandwidth/higher latency interconnect between clusters
• Software responsible for mapping computations to minimize communication overhead
• Common in commercial embedded VLIW processors, e.g., TI C6x DSPs, HP Lx processor
• (Same idea used in some superscalar processors, e.g., Alpha 21264)

[Figure: each cluster pairs a local register file with local functional units over a cluster interconnect; clusters share cache/memory banks through a memory interconnect.]

March 16, 2010 CS152, Spring 2010 41

Limits of Static Scheduling

• Unpredictable branches
• Variable memory latency (unpredictable cache misses)
• Code size explosion
• Compiler complexity

March 16, 2010 CS152, Spring 2010 42

Acknowledgements

• These slides contain material developed and copyright by:

– Arvind (MIT)
– Krste Asanovic (MIT/UCB)
– Joel Emer (Intel/MIT)
– James Hoe (CMU)
– John Kubiatowicz (UCB)
– David Patterson (UCB)

• MIT material derived from course 6.823
• UCB material derived from course CS252