Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID


Page 1: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Matthew L. Curry, University of Alabama at Birmingham and Sandia National Laboratories, [email protected]

H. Lee Ward, Sandia National Laboratories, [email protected]

Anthony Skjellum, University of Alabama at Birmingham, [email protected]

Page 2: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Introduction

• RAID increases the speed and reliability of disk arrays
• Mainstream technology limits fault tolerance to two arbitrary failed disks in a volume at a time (RAID 6)
• Always-on and busy systems that do not have service periods are at increased risk of failure
  – Hot spares can decrease rebuild time, but leave installations vulnerable for days at a time
• High-parity GPU-based RAID can decrease the long-term risk of data loss

Page 3: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Motivation: Inaccurate Disk Failure Statistics

• Mean Time To Data Loss estimates for RAID often assume manufacturer statistics are accurate
  – Disks are 2-10x more likely to fail than manufacturers estimate*
  – Estimates assume low replacement latency and fast rebuilds… without load

*Bianca Schroeder and Garth A. Gibson. “Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?” In Proceedings of the 5th USENIX Conference on File and Storage Technologies, Berkeley, CA, USA, 2007. USENIX Association.

Page 4: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Motivation: Unrecoverable Read Errors

[Chart: Probability of success (no unrecoverable read error) versus terabytes read (0-99 TB), plotted for bit error rates of 10^-14, 10^-15, and 10^-16.]
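As an aside not on the slide, the shape of these curves follows from treating unrecoverable read errors as independent events at a fixed bit error rate (BER), with one terabyte equal to $8\times 10^{12}$ bits:

\[
P_{\text{no error}}(T) \;=\; (1-\mathrm{BER})^{8\times 10^{12}\,T} \;\approx\; e^{-\mathrm{BER}\cdot 8\times 10^{12}\,T},
\]

where $T$ is the number of terabytes read. At $\mathrm{BER}=10^{-14}$, reading 50 TB completes without error with probability about $e^{-4}\approx 0.02$; at $\mathrm{BER}=10^{-16}$ the same read succeeds with probability about $e^{-0.04}\approx 0.96$, so only the lowest-BER curve stays near 1 across the plotted range.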

Page 5: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Result: Hot Spares Are Inadequate Under Load

[Chart: Probability of data loss within ten years versus data capacity (0-300 TB, using 2 TB disks), shown on a logarithmic scale from 1E-04 to 1E+00 for RAID 5+0 (4 sets), RAID 6+0 (4 sets), and RAID 1+0 (3-way replication). Model parameters: MTTR = one week, MTTF = 100,000 hours, lifetime = 10 years.]
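As an assumption about the underlying model (the slide gives only its parameters), a standard Markov-chain approximation for a single $N$-disk group with exponential failure and repair times is

\[
\mathrm{MTTDL}_{\mathrm{RAID\,5}} \approx \frac{\mathrm{MTTF}^2}{N(N-1)\,\mathrm{MTTR}},
\qquad
\mathrm{MTTDL}_{\mathrm{RAID\,6}} \approx \frac{\mathrm{MTTF}^3}{N(N-1)(N-2)\,\mathrm{MTTR}^2},
\]

and the probability of data loss over a lifetime $L$ is then roughly $1-e^{-L/\mathrm{MTTDL}}$ per group (one minus the product of the per-group survival probabilities for striped sets such as 5+0 and 6+0). With MTTR held at a full week, these expressions degrade quickly as capacity, and therefore $N$, grows, which is the trend the chart shows.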

Page 6: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Solution: High-Parity RAID

• Use Reed-Solomon coding to provide much higher fault tolerance
  – Configure the array to tolerate failure of any m disks, where m can vary widely
  – Analogue: RAID 5 has m=1, RAID 6 has m=2, RAID 6+0 also has m=2
  – Spare disks, instead of remaining idle, participate actively in the array, eliminating the window of vulnerability
• This is very computationally expensive (a sketch of the per-byte work follows this list)
  – k+m RAID, where k is the number of data blocks per stripe and m is the number of parity blocks per stripe, requires O(m) operations per byte written
  – The operations are table lookups, so there is no vector-based parallelism with x86/x86-64
  – An array with failed disks operates in degraded mode, which requires the same computational load for reads as for writes
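The following host-side sketch (an illustration under assumed conventions, not the authors' implementation; the generator polynomial and the rs_encode interface are choices made here) shows where the O(m) table lookups per byte come from: GF(2^8) multiplication via log/antilog tables, applied once per parity block for every data byte.

#include <stdint.h>
#include <string.h>

/* GF(2^8) log/antilog tables built from the polynomial 0x11d (an assumed,
 * common choice; the original implementation may use a different field). */
static uint8_t gf_log[256], gf_exp[512];

static void gf_init(void)
{
    int x = 1;
    for (int i = 0; i < 255; i++) {
        gf_exp[i] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;
        if (x & 0x100)
            x ^= 0x11d;              /* reduce modulo the field polynomial */
    }
    for (int i = 255; i < 512; i++)  /* duplicate so gf_mul can skip a mod 255 */
        gf_exp[i] = gf_exp[i - 255];
}

/* One multiplication = two log lookups plus one antilog lookup. */
static inline uint8_t gf_mul(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0)
        return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}

/* Encode one stripe: k data blocks of len bytes produce m parity blocks.
 * coeff is a hypothetical m x k coding matrix (row-major), e.g. derived from
 * a Vandermonde or Cauchy matrix.  Each byte written to the array costs m
 * GF multiplications, i.e. the O(m) lookups per byte the slide refers to. */
static void rs_encode(uint8_t **data, int k, uint8_t **parity, int m,
                      const uint8_t *coeff, size_t len)
{
    for (int j = 0; j < m; j++) {
        memset(parity[j], 0, len);
        for (int i = 0; i < k; i++) {
            uint8_t c = coeff[(size_t)j * k + i];
            for (size_t b = 0; b < len; b++)
                parity[j][b] ^= gf_mul(c, data[i][b]);
        }
    }
}

Each gf_mul is a chain of dependent table lookups, which is why the slide notes that the work does not map onto x86 vector units; degraded-mode reads solve the same kind of matrix-vector product to reconstruct missing blocks, so their cost is comparable.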

Page 7: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Use GPUs to Provide Fast RAID Computations

• Reed-Solomon coding accesses a look-up table
• NVIDIA CUDA architecture supplies several banks of memory, accessible in parallel
• 1.82 look-ups per cycle per SM on average
• GeForce GTX 285 has 30 SMs, so it can satisfy about 55 look-ups per cycle per device (see the kernel sketch after the diagram)
  – Compare to one per core in x86 processors

[Diagram: compute cores paired with shared memory banks within a streaming multiprocessor.]
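A minimal CUDA kernel sketch of that idea (written here under assumed names and layouts; the original RAID code's kernels and memory layout are certainly more elaborate): the log/antilog tables are staged into shared memory so that lookups issued by different threads of a warp can be served by different banks in the same cycle.

#include <stdint.h>

#define MAX_COEFF 1024   /* arbitrary bound for this sketch: m * k <= 1024 */

/* Device-side copies of the host tables from the previous sketch. */
__constant__ uint8_t d_gf_log[256];
__constant__ uint8_t d_gf_exp[512];
__constant__ uint8_t d_coeff[MAX_COEFF];   /* m x k coding matrix, row-major */

/* One thread per byte offset within the stripe: each thread reads its byte
 * from every data block and accumulates all m parity bytes for that offset. */
__global__ void rs_encode_kernel(const uint8_t *data,  /* k blocks, contiguous */
                                 uint8_t *parity,      /* m blocks, contiguous */
                                 int k, int m, size_t block_len)
{
    __shared__ uint8_t s_log[256];
    __shared__ uint8_t s_exp[512];

    /* Cooperatively stage the 768 bytes of tables once per thread block. */
    for (int i = threadIdx.x; i < 256; i += blockDim.x)
        s_log[i] = d_gf_log[i];
    for (int i = threadIdx.x; i < 512; i += blockDim.x)
        s_exp[i] = d_gf_exp[i];
    __syncthreads();

    size_t b = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= block_len)
        return;

    for (int j = 0; j < m; j++) {
        uint8_t acc = 0;
        for (int i = 0; i < k; i++) {
            uint8_t c = d_coeff[j * k + i];
            uint8_t x = data[(size_t)i * block_len + b];
            if (c != 0 && x != 0)
                acc ^= s_exp[s_log[c] + s_log[x]];   /* GF(2^8) multiply */
        }
        parity[(size_t)j * block_len + b] = acc;
    }
}

The tables occupy only 768 bytes, well under the 16 KB of shared memory per SM on a GTX 285; the new memory layout mentioned on the next slide presumably goes further than this naive arrangement.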

Page 8: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Performance: GeForce GTX 285 vs. Intel Extreme Edition 975

• GPU capable of providing high bandwidth at 6x-10x CPU speeds

• New memory layout and matrix generation algorithm for Reed-Solomon yields equivalent write and degraded read performance

[Chart: Coding throughput (MB/s, 0-5000) versus number of data buffers (2-30) for GPU and CPU at m=2, m=3, and m=4.]

Page 9: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

A User Space RAID Architecture and Data Flow

• GPU computing facilities are inaccessible from kernel space

• Provide I/O stack components in user space, accessible via iSCSI

– Accessible via loopback interface for direct-attached storage, as in this study

• Alternative: Micro-driver

Page 10: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Performance Results

• 10 GB streaming read/write operations to a 16-disk array
• Volumes mounted over the loopback interface
  – The stgt version used is a bottleneck; later versions have higher performance

[Chart: Read and write throughput (MB/s, 0-600) by RAID type, including the GPU +2 and GPU +4 configurations.]

Page 11: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Advance: GPU-Based RAID

• Prototype for general-purpose RAID
  – Arbitrary parity
  – High-speed read verification
  – High-parity configuration for initial hardware deployments
    • Guard against batch-correlated failures
• Flexibility not available with hardware RAID
  – Lower-bandwidth, higher-reliability distributed data stores

Page 12: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Future Work

• UAB
  – Data-intensive "active storage" computation applications
  – Computation and compression within the storage stack
  – NSF-funded AS/RAID testbed (0.5 petabyte)
  – Multi-level RAID architecture, failover, trade studies
  – Actually use the storage (e.g., ~200 TB of 500 TB)
• Sandia
  – Active/active failover configurations
  – Multi-level RAID architectures
  – Encryption within the storage stack
• Commercialization

Page 13: Failing in Place for Low-Serviceability Infrastructure Using High-Parity GPU-Based RAID

Conclusions

• RAID reliability is increasingly related to BER
• The window of vulnerability during rebuild is dangerous and needs to be eliminated if the system stays busy 24x7
  – Hot spares are inadequate because of elongated rebuild times under load
• High-parity GPU RAID can eliminate the window of vulnerability while not harming performance for streaming workloads
• Fast writes and degraded reads enabled by the GPU make this feasible