The Importance of Being Local
More Data Than Cache
Let’s say that you have 1000 times more data than cache. Then won’t most of your data be outside the cache?
YES!
Okay, so how does cache help?
Improving Your Cache Hit Rate
Many scientific codes use a lot more data than can fit in cache all at once.
Therefore, you need to ensure a high cache hit rate even though you’ve got much more data than cache.
So, how can you improve your cache hit rate? Use the same solution as in Real Estate:
Location, Location, Location!
Data Locality
Data locality is the principle that, if you use data in a particular memory address, then very soon you’ll use either the same address or a nearby address.
Temporal locality: if you’re using address A now, then you’ll probably soon use address A again.
Spatial locality: if you’re using address A now, then you’ll probably soon use addresses between A-k and A+k, where k is small.
Note that this principle works well for sufficiently small values of “soon.”
Cache is designed to exploit locality, which is why a cache miss causes a whole line to be loaded.
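As a small illustration (not from the original slides; the function and variable names are made up), the loop below has both kinds of locality: the running total is temporal locality, and the sequential walk through the array is spatial locality.

/* total is reused on every iteration: temporal locality.           */
/* values[i], values[i+1], ... sit next to each other in memory and */
/* share cache lines: spatial locality.                             */
float sum_values(const float* values, int n)
{
    float total = 0.0f;
    int i;
    for (i = 0; i < n; i++) {
        total = total + values[i];
    }
    return total;
}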
Data Locality Is Empirical: C
Data locality has been observed empirically in many, many programs.
void ordered_fill(float* array, int array_length)
{
    int index;

    for (index = 0; index < array_length; index++) {
        array[index] = index;
    }
}
No Locality Example: C
In principle, you could write a program that exhibited absolutely no data locality at all:
void random_fill(float* array, int* random_permutation_index, int array_length)
{
    int index;

    for (index = 0; index < array_length; index++) {
        array[random_permutation_index[index]] = index;
    }
}
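To see the effect yourself, a minimal timing harness along these lines could be used (this sketch is not from the original slides; it assumes a POSIX system for clock_gettime and picks an arbitrary array size):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void ordered_fill(float* array, int array_length);
void random_fill(float* array, int* random_permutation_index, int array_length);

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1.0e-9;
}

int main(void)
{
    const int n = 1 << 24;               /* arbitrary array size for the test */
    float* array = malloc(n * sizeof(float));
    int*   perm  = malloc(n * sizeof(int));
    int i;
    double t0, t1, t2;

    /* Build a random permutation of 0..n-1 (Fisher-Yates shuffle). */
    for (i = 0; i < n; i++) perm[i] = i;
    for (i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }

    t0 = now_seconds();
    ordered_fill(array, n);
    t1 = now_seconds();
    random_fill(array, perm, n);
    t2 = now_seconds();

    printf("ordered: %g s   random: %g s\n", t1 - t0, t2 - t1);
    free(array); free(perm);
    return 0;
}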
Permuted vs. Ordered
In a simple array fill, locality provides a factor of 8 to 20 speedup over a randomly ordered fill on a Pentium4.
[Chart: CPU seconds (lower is better) vs. array size (log2 bytes), comparing the Random and Ordered fills.]
Exploiting Data Locality
If you know that your code is capable of operating with a decent amount of data locality, then you can get speedup by focusing your energy on improving the locality of the code’s behavior.
This will substantially increase your cache reuse.
A Sample Application: Matrix-Matrix Multiply
Let A, B and C be matrices of sizes nr × nc, nr × nk and nk × nc, respectively, with elements $a_{r,c}$, $b_{r,k}$ and $c_{k,c}$.
The definition of A = B • C is
$$a_{r,c} = \sum_{k=1}^{nk} b_{r,k}\, c_{k,c} = b_{r,1} c_{1,c} + b_{r,2} c_{2,c} + b_{r,3} c_{3,c} + \cdots + b_{r,nk} c_{nk,c}$$
for $r \in \{1, \dots, nr\}$, $c \in \{1, \dots, nc\}$.
Matrix Multiply w/Initialization
void matrix_matrix_mult_by_init(
    float** dst, float** src1, float** src2,
    int nr, int nc, int nq)
{
    int r, c, q;

    for (r = 0; r < nr; r++) {
        for (c = 0; c < nc; c++) {
            dst[r][c] = 0.0;
            for (q = 0; q < nq; q++) {
                dst[r][c] = dst[r][c] + src1[r][q] * src2[q][c];
            } /* for q */
        } /* for c */
    } /* for r */
}
Matrix Multiply Behavior
If the matrices are big, then each sweep of a row will clobber nearby values in cache: the inner loop over q walks down a column of src2, touching a different row (and therefore, typically, a different cache line) on every access, so lines loaded early in the sweep get evicted before they can be reused.
Performance of Matrix Multiply
[Chart: Matrix-Matrix Multiply: CPU seconds (lower is better) vs. total problem size in bytes (nr*nc + nr*nq + nq*nc), for the Naive, Init and Intrinsic versions.]
Tiling
Tile: a small rectangular subdomain of a problem domain. Sometimes called a block or a chunk.
Tiling: breaking the domain into tiles.
Tiling strategy: operate on each tile to completion, then move on to the next tile.
Tile size can be set at runtime, according to what’s best for the machine that you’re running on.
Tiling Code: C
void matrix_matrix_mult_by_tiling(
    float** dst, float** src1, float** src2,
    int nr, int nc, int nq,
    int rtilesize, int ctilesize, int qtilesize)
{
    int rstart, rend, cstart, cend, qstart, qend;

    for (rstart = 0; rstart < nr; rstart += rtilesize) {
        rend = rstart + rtilesize - 1;
        if (rend >= nr) rend = nr - 1;
        for (cstart = 0; cstart < nc; cstart += ctilesize) {
            cend = cstart + ctilesize - 1;
            if (cend >= nc) cend = nc - 1;
            for (qstart = 0; qstart < nq; qstart += qtilesize) {
                qend = qstart + qtilesize - 1;
                if (qend >= nq) qend = nq - 1;
                matrix_matrix_mult_tile(dst, src1, src2, nr, nc, nq,
                                        rstart, rend, cstart, cend, qstart, qend);
            } /* for qstart */
        } /* for cstart */
    } /* for rstart */
}
Multiplying Within a Tile: C
void matrix_matrix_mult_tile(
    float** dst, float** src1, float** src2,
    int nr, int nc, int nq,
    int rstart, int rend, int cstart, int cend, int qstart, int qend)
{
    int r, c, q;

    for (r = rstart; r <= rend; r++) {
        for (c = cstart; c <= cend; c++) {
            if (qstart == 0) dst[r][c] = 0.0;
            for (q = qstart; q <= qend; q++) {
                dst[r][c] = dst[r][c] + src1[r][q] * src2[q][c];
            } /* for q */
        } /* for c */
    } /* for r */
}
Performance with Tiling
[Charts: Matrix-Matrix Multiply via Tiling: CPU seconds (lower is better) vs. tile size in bytes, on linear and log-log scales, for problem sizes 512x256, 512x512, 1024x512, 1024x1024 and 2048x1024.]
The Advantages of Tiling
It allows your code to exploit data locality better, to get much more cache reuse: your code runs faster!
It’s a relatively modest amount of extra coding (typically a few wrapper functions and some changes to loop bounds).
If you don’t need tiling – because of the hardware, the compiler or the problem size – then you can turn it off by simply setting the tile size equal to the problem size.
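For example (a hypothetical call using the routines above, not from the original slides), passing the problem dimensions as the tile sizes makes the whole problem one big tile, so the tiled code behaves just like the untiled version:

    /* One tile covers the entire problem: tiling is effectively turned off. */
    matrix_matrix_mult_by_tiling(dst, src1, src2, nr, nc, nq, nr, nc, nq);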
Will Tiling Always Work?
Tiling WON’T always work. Why?
Well, tiling works well when: the order in which calculations occur doesn’t matter much, AND there are lots and lots of calculations to do for each memory movement.
If either condition is absent, then tiling won’t help.
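For a rough sense of why matrix multiply meets the second condition (a back-of-the-envelope estimate, not from the original slides): multiplying a t × t tile of B by a t × t tile of C performs about 2t³ floating point operations (t multiplies and t adds for each of the t² outputs) on only about 3t² matrix elements, so the number of calculations per value moved into cache grows like 2t/3. A plain array fill, by contrast, does one store per element moved, which is why tiling wouldn’t help it.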
Parallelism
Less fish …
More fish!
Parallelism means doing multiple things at the same time: you can get more work done in the same time.
The Jigsaw Puzzle Analogy
Serial Computing
Suppose you want to do a jigsaw puzzle that has, say, a thousand pieces.
We can imagine that it’ll take you a certain amount of time. Let’s say that you can put the puzzle together in an hour.
Shared Memory Parallelism
If Scott sits across the table from you, then he can work on his half of the puzzle and you can work on yours. Once in a while, you’ll both reach into the pile of pieces at the same time (you’ll contend for the same resource), which will cause a little bit of slowdown. And from time to time you’ll have to work together (communicate) at the interface between his half and yours. The speedup will be nearly 2-to-1: y’all might take 35 minutes instead of the ideal 30.
The More the Merrier?
Now let’s put Paul and Charlie on the other two sides of the table. Each of you can work on a part of the puzzle, but there’ll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So y’all will get noticeably less than a 4-to-1 speedup, but you’ll still have an improvement, maybe something like 3-to-1: the four of you can get it done in 20 minutes instead of an hour.
Diminishing Returns
If we now put Dave and Tom and Horst and Brandon on the corners of the table, there’s going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup y’all get will be much less than we’d like; you’ll be lucky to get 5-to-1.
So we can see that adding more and more workers onto a shared resource is eventually going to have a diminishing return.
Distributed Parallelism
Now let’s try something a little different. Let’s set up two tables, and let’s put you at one of them and Scott at the other. Let’s put half of the puzzle pieces on your table and the other half of the pieces on Scott’s. Now y’all can work completely independently, without any contention for a shared resource. BUT, the cost per communication is MUCH higher (you have to scootch your tables together), and you need the ability to split up (decompose) the puzzle pieces reasonably evenly, which may be tricky to do for some puzzles.
More Distributed Processors
It’s a lot easier to add more processors in distributed parallelism. But, you always have to be aware of the need to decompose the problem and to communicate among the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.
Load Balancing
Load balancing means ensuring that everyone completes their workload at roughly the same time.
For example, if the jigsaw puzzle is half grass and half sky, then you can do the grass and Scott can do the sky, and then y’all only have to communicate at the horizon – and the amount of work that each of you does on your own is roughly equal. So you’ll get pretty good speedup.
Load Balancing
Load balancing can be easy, if the problem splits up into chunks of roughly equal size, with one chunk per processor. Or load balancing can be very hard.
[Images: an example of easy load balancing and an example of hard load balancing.]
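As a small sketch of the easy case (this example is not from the original slides), here is one common way to split n work items among p workers so that no worker gets more than one item extra:

/* Give worker 'rank' (0 <= rank < p) its range [start, end) of n work items, */
/* spreading the remainder of n/p over the first n%p workers so that each     */
/* worker's share differs from the others by at most one item.                */
void block_partition(int n, int p, int rank, int* start, int* end)
{
    int base      = n / p;   /* every worker gets at least this many items */
    int remainder = n % p;   /* the first 'remainder' workers get one more */
    *start = rank * base + (rank < remainder ? rank : remainder);
    *end   = *start + base + (rank < remainder ? 1 : 0);
}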
Condor Pool
Condor is a software technology that allows idle desktop PCs to be used for number crunching.
OU IT has deployed a large Condor pool (773 desktop PCs in IT student labs all over campus).
It provides a huge amount of additional computing power – more than was available in all of OSCER in 2005.
13+ TFLOPs peak compute speed.
And, the cost is very very low – almost literally free.
Also, we’ve been seeing empirically that Condor gets about 80% of each PC’s time.