
Extensions of MapReduce: Dataflow Systems, Extensions for Graphs, Recursion

Jeffrey D. Ullman, Stanford University

2

Dataflow Systems

Arbitrary Acyclic Flow Among Tasks
Preserving Fault Tolerance
The Blocking Property

3

Generalization of MapReduce

MapReduce uses only two functions (Map and Reduce).
Each is implemented by a rank of tasks.
Data flows from Map tasks to Reduce tasks only.

4

Generalization – (2)

Natural generalization is to allow any number of functions, connected in an acyclic network.
Each function implemented by tasks that feed tasks of successor function(s).
Key fault-tolerance (blocking) property: tasks produce all their output at the end.
Important point: Map tasks never deliver their output until completed.
Thus, we can restart a Map task that failed without fear that a Reduce task has already used some output of the failed Map task.
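The blocking property can be pictured with a minimal sketch (all names hypothetical, not any particular system's API): a task buffers everything it produces and hands it to successors only once it finishes.

def run_blocking_task(task_fn, input_records):
    # Buffer output locally; successors see nothing until the task completes.
    output = []
    for record in input_records:
        output.extend(task_fn(record))
    return output   # delivered all at once, at the end

# If the task dies midway, no successor has consumed partial output,
# so the scheduler can rerun just this task rather than the whole job.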

5

Many Implementations

1. Clustera – University of Wisconsin.
2. Hyracks – Univ. of California/Irvine.
3. Dryad/DryadLINQ – Microsoft.
4. Nephele/PACT – T. U. Berlin.
5. BOOM – Berkeley.
6. epiC – N. U. Singapore.

6

Example: Join + Aggregation

Relations D(emp, dept) and S(emp, salary).
Compute the sum of the salaries for each department.
D JOIN S computed by MapReduce.
But each Reduce task can also group its emp-dept-salary tuples by dept and sum the salaries.
A third function is needed to take the dept-SUM(salary) pairs from each Reduce task, organize them by dept, and compute the final sum for each department.

7

3-Layer Dataflow

[Diagram: Map tasks read D and S and hash their tuples by emp to the Join + Group tasks; those tasks hash their partial dept sums by dept to the Final Group + Aggregate tasks.]
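A toy single-machine rendering of this three-stage flow (the data and the collapsing of each stage into a plain loop are invented for illustration; a real job would hash tuples across many parallel tasks at each stage):

from collections import defaultdict

# Stage 1 ("Map"): read D(emp, dept) and S(emp, salary); hash both by emp.
D = [("ann", "toys"), ("bob", "toys"), ("carl", "shoes")]
S = [("ann", 100), ("bob", 150), ("carl", 120)]

# Stage 2 ("Join + Group"): join on emp, then pre-sum salaries by dept.
dept_of = dict(D)
partial = defaultdict(int)
for emp, salary in S:
    if emp in dept_of:
        partial[dept_of[emp]] += salary

# Stage 3 ("Final Group + Aggregate"): combine the per-task partial sums,
# after hashing them by dept (here there is only one downstream "task").
final = defaultdict(int)
for dept, subtotal in partial.items():
    final[dept] += subtotal

print(dict(final))   # {'toys': 250, 'shoes': 120}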

8

Recursion

Transitive-Closure Example
Fault-Tolerance Problem
Endgame Problem
Some Systems and Approaches

9

Applications Requiring Recursion

1. PageRank, the original MapReduce application, is really a recursion implemented by many rounds of MapReduce.
2. Analysis of social networks.
3. Many machine-learning algorithms, e.g., gradient descent.
4. PDEs.

10

Transitive Closure

Many recursive applications involving large data are similar to transitive closure:

Nonlinear version (takes log n rounds on an n-node graph):
Path(X,Y) :- Arc(X,Y)
Path(X,Y) :- Path(X,Z) & Path(Z,Y)

Linear version (takes n rounds on an n-node graph):
Path(X,Y) :- Arc(X,Y)
Path(X,Y) :- Arc(X,Z) & Path(Z,Y)
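Here is a small set-based sketch (single machine, my own helper names) that runs each recursion to a fixed point and counts rounds; on a chain graph the linear rule needs about n rounds while the nonlinear rule needs about log n:

def tc_rounds(arcs, nonlinear):
    path = set(arcs)
    rounds = 0
    while True:
        left = path if nonlinear else set(arcs)   # Path+Path vs. Arc+Path
        new = {(x, y) for (x, z) in left for (w, y) in path if z == w}
        if new <= path:                           # fixed point reached
            return path, rounds
        path |= new
        rounds += 1

chain = [(i, i + 1) for i in range(16)]           # a 17-node chain
print(tc_rounds(chain, nonlinear=False)[1])       # 15 rounds (about n)
print(tc_rounds(chain, nonlinear=True)[1])        # 4 rounds (about log n)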

11

Implementing TC on a Cluster

Use k tasks. Nonlinear recursion used here.
Hash function h sends each node of the graph to one of the k tasks.
Task i receives and stores Path(a,b) if either h(a) = i or h(b) = i, or both.
Task i must join Path(a,c) with Path(c,b) if h(c) = i.

12

TC on a Cluster – Basis

Data is stored as relation Arc(a,b).
“Map” tasks read chunks of the Arc relation and send each tuple Arc(a,b) to recursive tasks h(a) and h(b).
Treated as if it were tuple Path(a,b).
If h(a) = h(b), only one task receives.

13

TC on a Cluster – Recursive Tasks

[Diagram: Task i receives Path(a,b). It stores Path(a,b) if it is new; otherwise it ignores it. It then looks up Path(b,c) and/or Path(d,a) for any c and d, and sends Path(a,c) to tasks h(a) and h(c), and Path(d,b) to tasks h(d) and h(b).]
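A single-process simulation of this scheme (the store, the outbox of (destination task, fact) messages, and the driver are my own simplifications, not a real cluster API):

def receive_path(task_id, store, outbox, fact, h):
    # Task task_id receives a Path fact; duplicates are ignored (TC is idempotent).
    if fact in store:
        return
    a, b = fact
    store.add(fact)
    for (c, d) in list(store):
        if c == b and h(b) == task_id:   # Path(a,b) + Path(b,d) => Path(a,d)
            send(outbox, (a, d), h)
        if d == a and h(a) == task_id:   # Path(c,a) + Path(a,b) => Path(c,b)
            send(outbox, (c, b), h)

def send(outbox, fact, h):
    x, y = fact
    for dest in {h(x), h(y)}:            # every fact goes to tasks h(x) and h(y)
        outbox.append((dest, fact))

def transitive_closure(arcs, k):
    def h(node):
        return hash(node) % k
    stores = [set() for _ in range(k)]
    # Basis: each Arc(a,b) is delivered, as Path(a,b), to tasks h(a) and h(b).
    outbox = [(dest, (a, b)) for (a, b) in arcs for dest in {h(a), h(b)}]
    while outbox:
        dest, fact = outbox.pop()
        receive_path(dest, stores[dest], outbox, fact, h)
    return set().union(*stores)

Once the simulated message queue drains, the union of the task stores is the full Path relation.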

14

Big Problem: Managing Failure

MapReduce depends on the blocking property.
Only then can you restart a failed task without restarting the whole job.
But any recursive task has to deliver some output and later get more input.

15

HaLoop (U. Washington)

Iterates Hadoop, once for each round of the recursion.
Uses Hadoop blocking-based fault tolerance.
Similar idea: Twister (U. Indiana).
HaLoop tries to run each task in round i at a compute node where it can find its needed output from round i – 1.
Also partitions and stores locally a file that is used at each round.
Example: Arc in Path(X,Y) :- Arc(X,Z) & Path(Z,Y)

16

Pregel (Google)

Views all computation as a recursion on some graph.
Nodes send messages to one another.
Messages bunched into supersteps, where each node processes all data received.
Sending individual messages would result in far too much overhead.
Checkpoint all compute nodes after some fixed number of supersteps.
On failure, rolls all tasks back to previous checkpoint.

17

Example: Shortest Paths Via Pregel

[Diagram: Node N receives the message “I found a path from node M to you of length L.” Consulting its table of shortest paths to N, it asks: is this the shortest path from M I know about? If so, it updates the table and sends messages of lengths L+3, L+5, and L+6 along its outgoing arcs of lengths 3, 5, and 6.]
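A toy superstep loop in the same spirit (not Pregel's actual API; for simplicity it tracks shortest distances from a single source node rather than a table of paths from every node M):

import math

def sssp_supersteps(arcs, source):
    # arcs maps each node to a list of (neighbor, arc_length) pairs.
    nodes = set(arcs) | {n for outs in arcs.values() for n, _ in outs}
    best = {v: math.inf for v in nodes}      # each node's shortest distance so far
    messages = {source: 0}                   # a path of length 0 reaches the source
    supersteps = 0
    while messages:                          # stop when no messages were sent
        next_messages = {}
        for node, length in messages.items():
            if length < best[node]:          # is this the shortest path I know about?
                best[node] = length
                for nbr, w in arcs.get(node, []):    # if so, tell each neighbor
                    if length + w < next_messages.get(nbr, math.inf):
                        next_messages[nbr] = length + w
        messages = next_messages
        supersteps += 1
    return best, supersteps

graph = {"M": [("N", 5), ("P", 3)], "P": [("N", 1)], "N": []}
print(sssp_supersteps(graph, "M"))   # distances M=0, P=3, N=4, after 3 supersteps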

18

Other Graph-Oriented Systems

Giraph: open-source Pregel.
GraphLab: similar system that deals more effectively with nodes of high degree.
Will split the work for such a graph node among several compute nodes.

19

Using Idempotence

Some recursive applications allow restart of tasks even if they have produced some output.
Example: TC is idempotent; you can send a task a duplicate Path fact without altering the result.
But if you were counting paths, the answer would be wrong.
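A tiny illustration of the difference (toy values): re-delivering a Path fact to a set-based TC task changes nothing, while the same duplicate corrupts a count of paths.

paths = set()
for fact in [("a", "b"), ("a", "b")]:    # the same fact delivered twice
    paths.add(fact)
print(len(paths))                        # 1 -- the TC result is unaffected

path_count = 0
for fact in [("a", "b"), ("a", "b")]:    # the same duplicate, but counting paths
    path_count += 1
print(path_count)                        # 2 -- the count is now wrong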

20

Big Problem: The Endgame

Some recursions, like TC, take a large number of rounds, but the number of new discoveries in later rounds drops.
T. Vassilakis: searches forward on the Web graph can take hundreds of rounds.
Problem: in a cluster, transmitting small files carries much overhead.

21

Approach: Merge Tasks

Decide when to migrate tasks to fewer compute nodes.

Data for several tasks at the same node are combined into a single file and distributed at the receiving end.

Downside: old tasks have a lot of state to move.

Example: “paths seen so far.”

22

Approach: Modify Algorithms

Nonlinear recursions can terminate in many fewer steps than equivalent linear recursions.
Avoids the endgame problem.
Example: TC. O(n) rounds on n-node graph for linear. O(log n) rounds for nonlinear.

23

Advantage of Linear TC

The communication cost (= sum of input sizes of all tasks) for executing linear TC is generally lower than that for nonlinear TC.
Why? Each path is discovered only once (unique-decomposition property).
Note: distinct paths between the same endpoints may each be discovered.

24

Example: Linear TC – Arc + Path = Path

25

Nonlinear TC Constructs Path + Path = Path in Many Ways

26

Smart TC

(Valduriez-Boral, Ioannidis)

Construct a path from two paths:
1. The first has a length that is a power of 2.
2. The second is no longer than the first.
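A set-based sketch of the doubling behind Smart TC (single machine; the names Q and P are mine): Q holds paths of length exactly 2^k, P holds all paths of length less than 2^(k+1), so every new path is a power-of-2 first piece followed by a shorter second piece.

def smart_tc(arcs):
    def compose(A, B):
        return {(x, y) for (x, z) in A for (w, y) in B if z == w}
    Q = set(arcs)                 # paths of length exactly 2^k (k = 0 to start)
    P = set(arcs)                 # all paths of length < 2^(k+1)
    while True:
        Q = compose(Q, Q)         # first piece: double the power-of-2 length
        new = Q | compose(Q, P)   # second piece is no longer than the first
        if new <= P:              # nothing new: P is the full transitive closure
            return P
        P |= new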

27

Example: Smart TC

28

Other Nonlinear TC Algorithms

You can have the unique-decomposition property with many variants of nonlinear TC.
Example: Balance constructs paths from two equal-length paths.
Favor first path when length is odd.

29

Example: Balance

30

Incomparability of TC Algorithms

On different graphs, any of the unique-decomposition algorithms – left-linear, right-linear, smart, balanced – could have the lowest data-volume cost.

Other unique-decomposition algorithms are possible and also could win.

31

Extension Beyond TC

Can you avoid the endgame problem by converting any linear recursion into an equivalent nonlinear recursion that requires logarithmic rounds?

Answer: Not always, without increasing arity and data size.

32

Positive Points

1. (Agrawal, Jagadish, Ness) All linear Datalog recursions reduce to TC.

2. Right-linear chain-rule Datalog programs can be replaced by nonlinear recursions with the same arity, logarithmic rounds, and the unique-decomposition property.

(“Chain-rule” means each subgoal shares variables only with the next, in a circular sense that includes the head.)

33

Example: Alternating-Color Paths

P(X,Y) :- Blue(X,Y)
P(X,Y) :- Blue(X,Z) & Q(Z,Y)
Q(X,Y) :- Red(X,Z) & P(Z,Y)

34

The Case of Reachability

Reach(X) :- Source(X)
Reach(X) :- Reach(Y) & Arc(Y,X)

Takes linear rounds as stated.
Can compute nonlinear TC to get Reach in O(log n) rounds.
But then you compute O(n²) facts instead of O(n) facts on an n-node graph.

35

Reachability – (2)

Theorem: If you compute Reach using only unary recursive predicates, then it must take Ω(n) rounds on a graph of n nodes.
Proof uses the ideas of Afrati, Cosmadakis, and Yannakakis from a generation ago.

36

Summary: Recursion

Key problems are the “endgame” and the nonblocking nature of recursive tasks.

In some applications, the endgame problem can be handled by using a nonlinear recursion that requires O(log n) rounds and has the unique-decomposition property.

37

Summary: Research Questions

1. How do you best support fault tolerance when tasks are nonblocking?

2. How do you manage tasks when the endgame problem cannot be avoided?

3. When can you replace linear recursion with nonlinear recursion requiring many fewer rounds, (roughly) the same communication cost, and (roughly) the same number of facts discovered?
