
Page 1:

MapReduce: Simplified Data Processing on Large Clusters

Stefanos Laskaridis ([email protected])

by J. Dean and S. Ghemawat

R244: Large-Scale Data Processing and Optimisation

Page 2:

Structure

• MapReduce motivation

• Programming Model & Architecture

• Comparison with relevant work

• Results

• Critique

Page 3:

Disclaimer

• We will not cover:

• GFS/HDFS [2,4]

• Hadoop [4]

Page 4:

MapReduce Motivation

Page 5:

A use case

Page 6:

Data Mining Wars

A long, long time ago (2004), in a galaxy not so far away, there were programmers who wanted to run distributed jobs.

A big company named Google was running many of those.

Imagine running a query to count how many Google searches a user in Cambridge performs during Michaelmas term.

What would you do?

Page 7:

Approach

• Write a job that scans through the data and calculates the average.

• You would probably want it to be distributed.

1. Find an interface to the distributed filesystem or distribute the data.

2. Write a parallel program that splits the work across many threads/processes.

3. Make sure that you handle hardware or other failures with minimal data loss.

4. Gather intermediate results (they may not fit in one machine's memory)

5. Write and execute your query

Page 8:

Problem

• Too much focus on preparing the workflow rather than the actual computation.

• Complex plumbing code that obscures the actual computation.

• Generally harder to understand and maintain.

Page 9:

MapReduce Era (~2003–2014)

Page 10:

MapReduce (MR)

• A programming model

• Based on two higher-order functions from functional programming (see the word-count sketch below):

• map(): (k1, v1) => list(k2, v2). Applies a user-supplied function to every input record, emitting intermediate key/value pairs.

• reduce(): (k2, list(v2)) => list(v2). Aggregates all intermediate values that share a key.
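As a concrete illustration, here is the paper's canonical word-count example, sketched in Python rather than the paper's C++-style pseudocode. The sequential run_mapreduce driver is an illustrative stand-in for the distributed runtime, not part of the paper:

```python
from itertools import groupby
from operator import itemgetter

def map_fn(doc_name, contents):
    # map(): emit an intermediate (word, 1) pair for every word.
    for word in contents.split():
        yield (word, 1)

def reduce_fn(word, values):
    # reduce(): sum all counts emitted for the same word.
    yield sum(values)

def run_mapreduce(inputs, map_fn, reduce_fn):
    # Map phase: run map_fn over every (key, value) input record.
    intermediate = [kv for k, v in inputs for kv in map_fn(k, v)]
    # Shuffle: group intermediate pairs by key (here: sort, then group).
    intermediate.sort(key=itemgetter(0))
    # Reduce phase: feed each key and its list of values to reduce_fn.
    return {k: list(reduce_fn(k, [v for _, v in grp]))
            for k, grp in groupby(intermediate, key=itemgetter(0))}

print(run_mapreduce([("doc1", "the quick the")], map_fn, reduce_fn))
# -> {'quick': [1], 'the': [2]}
```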

Page 11:

MR Model

[Figure: MapReduce dataflow. Input splits feed parallel Map tasks (the map phase); a shuffle groups intermediate pairs by key; parallel Reduce tasks (the reduce phase) write the output.]

Page 12:

MR Architecture
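[Figure, after Fig. 1 of the paper: the user program forks a master and worker processes; the master assigns M map tasks and R reduce tasks to idle workers; map workers read their input splits and write intermediate pairs to local disk; reduce workers read those pairs remotely, sort them by key, and write R output files to the global file system.]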

Page 13:

Notable Refinements

• Partitioning function (see the sketch below)

• Ordering guarantee

• Skipping bad records

• Backup tasks

• Distributed counters

• Status information infrastructure (HTTP server)
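To illustrate the first refinement: by default, intermediate keys are spread over the R reduce tasks by hashing, and the paper's example of a custom partitioner sends all URLs from the same host to the same output file. A minimal Python sketch (the hostname extraction is a deliberate simplification):

```python
import zlib

R = 4000  # number of reduce tasks, and hence of output files

def default_partition(key: str) -> int:
    # Default partitioner: spread keys evenly over the R reduce tasks.
    # A stable hash matters, since every worker must agree on the
    # bucket; Python's randomised built-in hash() would not do.
    return zlib.crc32(key.encode()) % R

def host_partition(url_key: str) -> int:
    # Custom partitioner from the paper: URLs from the same host
    # all end up in the same output file.
    host = url_key.split("/")[2]  # crude hostname extraction (illustrative)
    return zlib.crc32(host.encode()) % R
```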

Page 14:

Failure Semantics

• Master pings workers

• Map worker failure => re-execute its map tasks (see the sketch below), both:

• failed in-progress map executions, and

• completed map executions, since their output sits on the failed worker's local disk

• Reduce worker failure => re-execute only in-progress reduce tasks; completed reduce output already lives in the global file system
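A minimal sketch of this failure handling (the Task structure, state labels, and function are illustrative, not the paper's code). The asymmetry is the point: completed map output is only on the dead worker's local disk, while completed reduce output is safely in the global file system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    kind: str               # "map" or "reduce"
    state: str              # "idle", "in_progress", or "completed"
    worker: Optional[str] = None

def handle_worker_failure(tasks: list[Task], failed: str) -> None:
    # Called when the master's periodic ping to `failed` times out.
    for t in tasks:
        if t.worker != failed:
            continue
        if t.kind == "map":
            # Map output lives on the failed worker's local disk, so
            # both in-progress AND completed map tasks become idle again.
            t.state, t.worker = "idle", None
        elif t.state == "in_progress":
            # Only in-progress reduce tasks are re-executed.
            t.state, t.worker = "idle", None
```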

Page 15:

Relevant Work

Page 16:

Relevant Work

• Active disks: locality optimisation

• Charlotte: backup task mechanism

• Condor: cluster management

• NOW-Sort: sorting facility

• BAD-FS: redundant execution, network traffic minimisation

• TACC: re-execution for fault tolerance

MapReduce: a "simplification and distillation of some […] models" [1]

Page 17:

Relevant Work

vs MapReduce:

• BSP/MPI: no transparent fault tolerance; MapReduce sits at a higher level of abstraction.

• River: achieves non-skewed completion times through careful scheduling*

* where MapReduce relies on fine-grained task partitioning

Page 18:

Results

Page 19:

Experiment Setup

Grep experiment

• 10^10 100-byte records (1 TB of data)

• Pattern occurrence rate: 0.00092336%

• M = 15,000 map tasks (64 MB input splits)

• R = 1 reduce task

Sorting experiment

• 10^10 100-byte records (1 TB of data)

• 10-byte sort key

• M = 15,000 map tasks (64 MB input splits)

• R = 4,000 reduce tasks

Equipment

• Dual 2GHz Intel Xeon processors with Hyper-Threading

• 160GB IDE disks

• 4GB of memory (2-2.5GB available)

• Gigabit Ethernet link

• 100-200Gbps aggregate bandwidth
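(Sanity check on M: 15,000 map tasks × 64 MB per input split ≈ 0.96 TB, i.e. one map task per 64 MB split of the ~1 TB input.)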

Page 20:

Results

• Grep task: average throughput ~6.6 GB/s

• Sort task: average throughput ~1.1 GB/s

• Very scalable*

• Backup tasks and fault tolerance do work

• ~81% code reduction for Google's production web-search indexing system

* subject to Amdahl's law
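These throughput figures follow from the job completion times reported in the paper: roughly 150 s for grep (including about a minute of startup overhead) and 891 s for sort, i.e. 10^12 B / 150 s ≈ 6.6 GB/s and 10^12 B / 891 s ≈ 1.1 GB/s.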

• Usage

• Machine Learning algorithms

• Clustering for Google News

• Reports of popular queries for Google Zeitgeist

• Extraction of properties from crawled web pages

• Graph computations

Page 21:

Why MapReduce?

• Abstraction for programmer

• Automatic parallelisation

• Almost linear scalability

• Load-balancing

• Fault-tolerance

• Locality optimisation

• Runs on commodity hardware

• Easy large-scale prototyping

Page 22:

Critique

Page 23:

Restrictive Model

• The model of execution is too restrictive.

• The same map() and reduce() function runs on all data, which only allows for data parallelisation.

• Inefficient for iterative update algorithms (e.g. many machine learning algorithms), which require pipelining a chain of jobs [6] (see the sketch below):

Map → Reduce → Map → Reduce → … → Map → Reduce
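A hedged sketch of what such pipelining looks like from the programmer's side; run_mapreduce and converged are illustrative placeholders (e.g. the driver from the word-count sketch earlier), not an API from the paper:

```python
def iterate_until_converged(inputs, map_fn, reduce_fn,
                            run_mapreduce, converged, max_iters=50):
    # Every pass is a *separate* MapReduce job: its output is
    # materialised (to GFS/disk) and then re-read as the next job's
    # input, which is what makes iterative workloads expensive [6].
    state = inputs
    for _ in range(max_iters):
        new_state = run_mapreduce(state, map_fn, reduce_fn)
        if converged(state, new_state):
            return new_state
        state = new_state
    return state
```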

Page 24:

Missing Optimisations

• No distributed data query plan

• No context awareness between different jobs

• Large startup time for job propagation

• No caching or indexing [5,6]

Page 25:

Considerations on the MR Master

• Single point of failure

• At scale, a point of congestion for master/worker communication

Page 26:

Disk seeks

• Pull-mode remote reads from reducers.

• Multiple reduce workers reading different files from the same map worker lead to high disk seek times [5].
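To put a number on this: every reduce task pulls intermediate data from every map task's output, so there are on the order of M × R remote reads; at the sort benchmark's scale (M = 15,000, R = 4,000) that is up to 60 million small reads, each a potential seek on the map-side disks [5].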

Page 27:

"We don't really use MapReduce anymore"

– Urs Hölzle, SVP Technical Infrastructure, Google [3]

Page 28:

Thank you! Q&A

Stefanos Laskaridis ([email protected])

Page 29:

References

1. Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified Data Processing on Large Clusters. Commun. ACM, 51(1), 107–113. https://doi.org/10.1145/1327452.1327492

2. Ghemawat, S., Gobioff, H., & Leung, S.-T. (2003). The Google File System. SIGOPS Oper. Syst. Rev., 37(5), 29–43. https://doi.org/10.1145/1165389.945450

3. Hölzle, U. (2014). Google I/O 2014 keynote.

4. Shvachko, K., Kuang, H., Radia, S., & Chansler, R. (2010). The Hadoop Distributed File System. In Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST) (pp. 1–10). Washington, DC, USA: IEEE Computer Society. https://doi.org/10.1109/MSST.2010.5496972

5. Stonebraker, M., Abadi, D., DeWitt, D. J., Madden, S., Paulson, E., Pavlo, A., & Rasin, A. (2010). MapReduce and Parallel DBMSs: Friends or Foes? Commun. ACM, 53(1), 64–71. https://doi.org/10.1145/1629175.1629197

6. Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauly, M., … Stoica, I. (2012). Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. In 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12) (pp. 15–28). San Jose, CA: USENIX. https://www.usenix.org/conference/nsdi12/technical-sessions/presentation/zaharia
