Database and MapReduce
Based on Jimmy Lin’s lecture slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html), licensed under a Creative Commons Attribution 3.0 License
Slides from Dr. Amr Awadallah (CTO & VPE, Cloudera): Hadoop talk at Stanford
Working Scenario
• Two tables:
– User demographics (gender, age, income, etc.)
– User page visits (URL, time spent, etc.)
• Analyses we might want to perform:
– Statistics on demographic characteristics
– Statistics on page visits
– Statistics on page visits by URL
– Statistics on page visits by demographic characteristic
– …
Relational Algebra
• Primitives
– Projection (π)
– Selection (σ)
– Cartesian product (×)
– Set union (∪)
– Set difference (−)
– Rename (ρ)
• Other operations
– Join (⋈)
– Group by… aggregation
– …
Design Pattern: Secondary Sorting
• MapReduce sorts input to reducers by key
– Values are arbitrarily ordered
• What if we want to sort the values as well?
– E.g., k → (v1, r), (v3, r), (v4, r), (v8, r), …
Secondary Sorting: Solutions
• Solution 1:
– Buffer values in memory, then sort
– Why is this a bad idea? (The buffer can exceed available memory for keys with many values.)
• Solution 2:
– “Value-to-key conversion” design pattern: form composite intermediate key, (k, v1)
– Let the execution framework do the sorting
– Preserve state across multiple key-value pairs in the reducer to handle processing
– Anything else we need to do?
Value-to-Key Conversion
Before: values arrive in arbitrary order
k → (v1, r), (v4, r), (v8, r), (v3, r), …

After: values arrive in sorted order
(k, v1) → (v1, r)
(k, v3) → (v3, r)
(k, v4) → (v4, r)
(k, v8) → (v8, r)
…
Process by preserving state across multiple keys
Remember to partition correctly! (See the partitioner sketch below.)
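A minimal sketch (not from the original slides) of the partitioning step, using Hadoop’s old mapred API and assuming a hypothetical composite key encoded as "k\tv" inside a Text; a matching value-grouping comparator and a state-preserving reducer would complete the pattern:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Partition on the natural key k only, so every composite key (k, v)
// for the same k is routed to the same reducer.
public class NaturalKeyPartitioner implements Partitioner<Text, Text> {
    public void configure(JobConf job) {}

    public int getPartition(Text compositeKey, Text value, int numPartitions) {
        // Hash only the portion before the tab (the natural key k).
        String naturalKey = compositeKey.toString().split("\t", 2)[0];
        return (naturalKey.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

The job would register this with conf.setPartitionerClass(NaturalKeyPartitioner.class) and, typically, a grouping comparator over the natural key (conf.setOutputValueGroupingComparator) so that one reduce() call sees all values for k.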
Projection
[Figure: each input tuple R1–R5 maps to an output tuple R1–R5 with fewer attributes]
Projection in MapReduce
• Easy!
– Map over tuples, emit new tuples with appropriate attributes (see the sketch below)
– No reducers, unless for regrouping or resorting tuples
– Alternatively: perform in reducer, after some other processing
• Basically limited by HDFS streaming speeds
– Speed of encoding/decoding tuples becomes important
– Relational databases take advantage of compression
– Semistructured data? No problem!
Selection
[Figure: of input tuples R1–R5, only R1 and R3 satisfy the predicate and appear in the output]
Selection in MapReduce
• Easy!
– Map over tuples, emit only tuples that meet criteria (see the sketch below)
– No reducers, unless for regrouping or resorting tuples
– Alternatively: perform in reducer, after some other processing
• Basically limited by HDFS streaming speeds
– Speed of encoding/decoding tuples becomes important
– Relational databases take advantage of compression
– Semistructured data? No problem!
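Selection looks almost identical to projection; a hedged sketch, assuming comma-delimited tuples whose third field is an integer and a made-up predicate "field > 100":

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Map-only selection: pass through only the tuples that satisfy the predicate.
public class SelectionMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, NullWritable, Text> {
    public void map(LongWritable offset, Text tuple,
            OutputCollector<NullWritable, Text> out, Reporter reporter)
            throws IOException {
        String[] fields = tuple.toString().split(",");
        if (Integer.parseInt(fields[2]) > 100) {    // the selection predicate
            out.collect(NullWritable.get(), tuple); // emit the tuple unchanged
        }
    }
}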
Group by… Aggregation
• Example: What is the average time spent per URL?
• In SQL:
– SELECT url, AVG(time) FROM visits GROUP BY url
• In MapReduce:
– Map over tuples, emit time, keyed by url
– Framework automatically groups values by keys
– Compute average in reducer (see the sketch below)
– Optimize with combiners
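A hedged sketch in Hadoop’s old mapred API, assuming visit tuples are comma-delimited as "url,time,…". Note that this reducer cannot be reused verbatim as a combiner, since an average of averages is wrong; a correct combiner would emit partial (sum, count) pairs.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class AverageTimePerUrl {
    // Mapper: emit (url, time) for each visit tuple "url,time,...".
    public static class VisitMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable offset, Text tuple,
                OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            String[] fields = tuple.toString().split(",");
            out.collect(new Text(fields[0]),
                    new IntWritable(Integer.parseInt(fields[1])));
        }
    }

    // Reducer: average all times observed for a URL.
    public static class AverageReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, DoubleWritable> {
        public void reduce(Text url, Iterator<IntWritable> times,
                OutputCollector<Text, DoubleWritable> out, Reporter reporter)
                throws IOException {
            long sum = 0, count = 0;
            while (times.hasNext()) {
                sum += times.next().get();
                count++;
            }
            out.collect(url, new DoubleWritable((double) sum / count));
        }
    }
}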
Relational Joins
Relational Joins
[Figure: tuples R1–R4 are matched with tuples S1–S4, producing joined pairs R1⋈S2, R2⋈S4, R3⋈S1, R4⋈S3]
Natural Join Operation – Example
• Relations r, s:

r:  A  B  C  D
    α  1  α  a
    β  2  γ  a
    γ  4  β  b
    α  1  γ  a
    δ  2  β  b

s:  B  D  E
    1  a  α
    3  a  β
    1  a  γ
    2  b  δ
    3  b  ε

• The natural join matches tuples of r and s that agree on the common attributes B and D:

r ⋈ s:  A  B  C  D  E
        α  1  α  a  α
        α  1  α  a  γ
        α  1  γ  a  α
        α  1  γ  a  γ
        δ  2  β  b  δ
Natural Join Example
R1 (Sailors):
sid  sname   rating  age
22   dustin  7       45.0
31   lubber  8       55.5
58   rusty   10      35.0

S1 (Reserves):
sid  bid  day
22   101  10/10/96
58   103  11/12/96

R1 ⋈ S1 =
sid  sname   rating  age   bid  day
22   dustin  7       45.0  101  10/10/96
58   rusty   10      35.0  103  11/12/96
Types of Relationships
• One-to-One
• One-to-Many
• Many-to-Many
Join Algorithms in MapReduce
• Reduce-side join
• Map-side join
• In-memory join
– Striped variant
– Memcached variant
Reduce-side Join
• Basic idea: group by join key
– Map over both sets of tuples
– Emit tuple as value with join key as the intermediate key
– Execution framework brings together tuples sharing the same key
– Perform actual join in reducer (see the sketch below)
– Similar to a “sort-merge join” in database terminology
• Two variants
– 1-to-1 joins
– 1-to-many and many-to-many joins
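A hedged sketch of the reducer, assuming the two mappers (not shown) tag each tuple with a hypothetical "R:" or "S:" source prefix. This naive version buffers tuples in memory, which is exactly the problem the next slides address with value-to-key conversion.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Buffer the R tuples and S tuples for a join key, then cross them.
public class JoinReducer extends MapReduceBase
        implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text joinKey, Iterator<Text> values,
            OutputCollector<Text, Text> out, Reporter reporter)
            throws IOException {
        List<String> rTuples = new ArrayList<String>();
        List<String> sTuples = new ArrayList<String>();
        while (values.hasNext()) {
            String v = values.next().toString();
            if (v.startsWith("R:")) rTuples.add(v.substring(2));
            else sTuples.add(v.substring(2));
        }
        // Emit the cross product of the R and S tuples sharing this key.
        for (String rt : rTuples)
            for (String st : sTuples)
                out.collect(joinKey, new Text(rt + "," + st));
    }
}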
Reduce-side Join: 1-to-1
[Figure: R1, R4, S2, S3 are mapped to (join key, tuple) pairs; the reducer receives, for each join key, the matching R tuple and S tuple together]
Note: no guarantee whether the R tuple or the S tuple arrives first
Reduce-side Join: 1-to-many
[Figure: for one join key, the mapper output contains R1 plus many S tuples (S2, S3, S9, …); the reducer receives them all grouped under the same key]
What’s the problem? (If R1 does not arrive first, all S tuples must be buffered in memory.)
Reduce-side Join: V-to-K Conversion
[Figure: with composite keys, the reducer receives R1, then S2, S3, S9; then R4, then S3, S7 — in sorted order]
In reducer…
– New key encountered (R1): hold in memory
– Cross with records from other set (S2, S3, S9)
– New key encountered (R4): hold in memory
– Cross with records from other set (S3, S7)
Reduce-side Join: many-to-many
[Figure: for one join key, all R tuples (R1, R5, R8) arrive first, followed by the S tuples (S2, S3, S9, …)]
In reducer…
– Hold all R tuples (R1, R5, R8) in memory
– Cross with records from other set (S2, S3, S9, …)
What’s the problem? (With many R tuples per key, the R side may no longer fit in memory.)
Map-side Join: Basic Idea
Assume two datasets are sorted by the join key:
[Figure: R1–R4 and S1–S4, both sorted by join key]
A sequential scan through both datasets to join(called a “merge join” in database terminology)
Map-side Join: Parallel Scans
• If datasets are sorted by join key, join can be accomplished by a scan over both datasets
• How can we accomplish this in parallel?
– Partition and sort both datasets in the same manner
• In MapReduce:
– Map over one dataset, read from the corresponding partition of the other (see the sketch below)
– No reducers necessary (unless to repartition or resort)
• Consistently partitioned datasets: realistic to expect?
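In Hadoop’s old mapred API, a map-side merge join can be expressed with CompositeInputFormat; a hedged configuration sketch, assuming hypothetical inputs /data/R and /data/S stored as identically partitioned, key-sorted SequenceFiles:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class MapSideJoinConfig {
    public static void configure(JobConf job) {
        // Both inputs must already be sorted by the join key and partitioned
        // identically (same number of files, same partitioning function).
        job.setInputFormat(CompositeInputFormat.class);
        job.set("mapred.join.expr", CompositeInputFormat.compose(
                "inner", SequenceFileInputFormat.class,
                new Path("/data/R"), new Path("/data/S"))); // hypothetical paths
        // Each map task then receives (key, TupleWritable) pairs holding the
        // matching R and S records; no reduce phase is needed.
        job.setNumReduceTasks(0);
    }
}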
In-Memory Join
• Basic idea: load one dataset into memory, stream over other dataset
– Works if R << S and R fits into memory
– Called a “hash join” in database terminology
• MapReduce implementation
– Distribute R to all nodes
– Map over S; each mapper loads R into memory, hashed by join key
– For every tuple in S, look up join key in R (see the sketch below)
– No reducers, unless for regrouping or resorting tuples
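A minimal sketch of the mapper, under stated assumptions: R lives at a hypothetical HDFS path /data/R.csv, both relations are comma-delimited with the join key as the first field, and a production job would ship R via the distributed cache rather than reading HDFS directly.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Each mapper loads the small relation R into a hash table, then streams
// over S and probes the table: a basic broadcast hash join.
public class InMemoryJoinMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, NullWritable, Text> {
    private final Map<String, String> rTable = new HashMap<String, String>();

    public void configure(JobConf job) {
        try { // hypothetical location of R
            FileSystem fs = FileSystem.get(job);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(new Path("/data/R.csv"))));
            String line;
            while ((line = in.readLine()) != null) {
                int comma = line.indexOf(',');
                rTable.put(line.substring(0, comma), line.substring(comma + 1));
            }
            in.close();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void map(LongWritable offset, Text sTuple,
            OutputCollector<NullWritable, Text> out, Reporter reporter)
            throws IOException {
        String line = sTuple.toString();
        String joinKey = line.substring(0, line.indexOf(','));
        String rValue = rTable.get(joinKey); // probe the in-memory table
        if (rValue != null) {
            out.collect(NullWritable.get(), new Text(line + "," + rValue));
        }
    }
}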
In-Memory Join: Variants
• Striped variant:
– R too big to fit into memory?
– Divide R into R1, R2, R3, …, s.t. each Rn fits into memory
– Perform in-memory join: ∀n, Rn ⋈ S
– Take the union of all join results
• Memcached join:
– Load R into memcached
– Replace in-memory hash lookup with memcached lookup
Memcached
• Database layer: 800 eight-core Linux servers running MySQL (40 TB user data)
• Caching servers: 15 million requests per second, 95% handled by memcache (15 TB of RAM)
Source: Technology Review (July/August, 2008)
Memcached Join
• Memcached join:
– Load R into memcached
– Replace in-memory hash lookup with memcached lookup
• Capacity and scalability?
– Memcached capacity >> RAM of individual node
– Memcached scales out with cluster
• Latency?
– Memcached is fast (basically, speed of network)
– Batch requests to amortize latency costs
Source: See tech report by Lin et al. (2009)
Which join to use?
• In-memory join > map-side join > reduce-side join
– Why? (Less data movement: the in-memory and map-side joins avoid shuffling and sorting both datasets.)
• Limitations of each?
– In-memory join: limited by memory
– Map-side join: requires consistent sort order and partitioning
– Reduce-side join: none — general purpose, but the most expensive
Processing Relational Data: Summary
• MapReduce algorithms for processing relational data:
– Group by, sorting, and partitioning are handled automatically by shuffle/sort in MapReduce
– Selection, projection, and other computations (e.g., aggregation) are performed in either the mapper or the reducer
– Multiple strategies for relational joins
• Complex operations require multiple MapReduce jobs
– Example: top ten URLs in terms of average time spent
– Opportunities for automatic optimization
Evolving Roles for Relational Databases and MapReduce
Slides from Dr. Amr Awadallah (CTO & VPE, Cloudera): Hadoop talk at Stanford
OLTP/OLAP/Hadoop Architecture
[Figure: OLTP systems feed an OLAP warehouse through ETL (Extract, Transform, and Load); Hadoop sits in that ETL path]
Why does this make sense?
Hive and Pig
Need for High-Level Languages
• Hadoop is great for large-data processing!
– But writing Java programs for everything is verbose and slow
– Analysts don’t want to (or can’t) write Java
• Solution: develop higher-level data processing languages
– Hive: HQL is like SQL
– Pig: Pig Latin is a bit like Perl
Hive and Pig
• Hive: data warehousing application in Hadoop
– Query language is HQL, variant of SQL
– Tables stored on HDFS as flat files
– Developed by Facebook, now open source
• Pig: large-scale data processing system
– Scripts are written in Pig Latin, a dataflow language
– Developed by Yahoo!, now open source
– Roughly 1/3 of all Yahoo! internal jobs
• Common idea:
– Provide higher-level language to facilitate large-data processing
– Higher-level language “compiles down” to Hadoop jobs
Hive: Background
• Started at Facebook
• Data was collected by nightly cron jobs into Oracle DB
• “ETL” via hand-coded Python
• Grew from 10s of GBs (2006) to 1 TB/day of new data (2007), now 10x that
Source: cc-licensed slide by Cloudera
Hive Components
• Shell: allows interactive queries
• Driver: session handles, fetch, execute
• Compiler: parse, plan, optimize
• Execution engine: DAG of stages (MR, HDFS, metadata)
• Metastore: schema, location in HDFS, SerDe
Source: cc-licensed slide by Cloudera
Data Model
• Tables
– Typed columns (int, float, string, boolean)
– Also, list: map (for JSON-like data)
• Partitions
– For example, range-partition tables by date
• Buckets
– Hash partitions within ranges (useful for sampling, join optimization; see the example below)
Source: cc-licensed slide by Cloudera
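As a concrete illustration (a hypothetical table, not from the slides), partitions and buckets are declared in HQL like this:

-- Date-based partitioning; hash bucketing by url within each partition.
CREATE TABLE visits (
  user_name  STRING,
  url        STRING,
  time_spent INT
)
PARTITIONED BY (visit_date STRING)
CLUSTERED BY (url) INTO 32 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';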
Metastore
• Database: namespace containing a set of tables
• Holds table definitions (column types, physical layout)
• Holds partitioning information
• Can be stored in Derby, MySQL, and many other relational databases
Source: cc-licensed slide by Cloudera
Physical Layout
• Warehouse directory in HDFS
– E.g., /user/hive/warehouse
• Tables stored in subdirectories of warehouse
– Partitions form subdirectories of tables
• Actual data stored in flat files
– Control-character-delimited text, or SequenceFiles
– With custom SerDe, can use arbitrary format
Source: cc-licensed slide by Cloudera
Hive: Example
• Hive looks similar to an SQL database
• Relational join on two tables:
– Table of word counts from the Shakespeare collection
– Table of word counts from the Bible
Source: Material drawn from Cloudera training VM
SELECT s.word, s.freq, k.freq
FROM shakespeare s
JOIN bible k ON (s.word = k.word)
WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC
LIMIT 10;

the   25848  62394
I     23031   8854
and   19671  38985
to    18038  13526
of    16700  34654
a     14170   8057
you   12702   2720
my    11297   4135
in    10797  12445
is     8882   6884
Hive: Behind the Scenes
SELECT s.word, s.freq, k.freq
FROM shakespeare s
JOIN bible k ON (s.word = k.word)
WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC
LIMIT 10;

…is first parsed into an abstract syntax tree:

(TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10)))

…which is then compiled into one or more MapReduce jobs.
Hive: Behind the Scenes

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        s
          TableScan
            alias: s
            Filter Operator
              predicate: expr: (freq >= 1) type: boolean
              Reduce Output Operator
                key expressions: expr: word type: string
                sort order: +
                Map-reduce partition columns: expr: word type: string
                tag: 0
                value expressions: expr: freq type: int, expr: word type: string
        k
          TableScan
            alias: k
            Filter Operator
              predicate: expr: (freq >= 1) type: boolean
              Reduce Output Operator
                key expressions: expr: word type: string
                sort order: +
                Map-reduce partition columns: expr: word type: string
                tag: 1
                value expressions: expr: freq type: int
      Reduce Operator Tree:
        Join Operator
          condition map: Inner Join 0 to 1
          condition expressions:
            0 {VALUE._col0} {VALUE._col1}
            1 {VALUE._col0}
          outputColumnNames: _col0, _col1, _col2
          Filter Operator
            predicate: expr: ((_col0 >= 1) and (_col2 >= 1)) type: boolean
            Select Operator
              expressions: expr: _col1 type: string, expr: _col0 type: int, expr: _col2 type: int
              outputColumnNames: _col0, _col1, _col2
              File Output Operator
                compressed: false
                GlobalTableId: 0
                table:
                  input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat

  Stage: Stage-2
    Map Reduce
      Alias -> Map Operator Tree:
        hdfs://localhost:8022/tmp/hive-training/364214370/10002
          Reduce Output Operator
            key expressions: expr: _col1 type: int
            sort order: -
            tag: -1
            value expressions: expr: _col0 type: string, expr: _col1 type: int, expr: _col2 type: int
      Reduce Operator Tree:
        Extract
          Limit
            File Output Operator
              compressed: false
              GlobalTableId: 0
              table:
                input format: org.apache.hadoop.mapred.TextInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-0
    Fetch Operator
      limit: 10
Example Data Analysis Task
Visits:
user  url                time
Amy   www.cnn.com        8:00
Amy   www.crap.com       8:05
Amy   www.myblog.com     10:00
Amy   www.flickr.com     10:05
Fred  cnn.com/index.htm  12:00
…

Pages:
url             pagerank
www.cnn.com     0.9
www.flickr.com  0.9
www.myblog.com  0.7
www.crap.com    0.2
…

Find users who tend to visit “good” pages.
Pig Slides adapted from Olston et al.
Conceptual Dataflow
[Figure: conceptual dataflow]
• Load Visits(user, url, time), then canonicalize URLs
• Load Pages(url, pagerank)
• Join on url = url
• Group by user
• Compute average pagerank
• Filter avgPR > 0.5
Pig Slides adapted from Olston et al.
Pig Latin Script
Visits = load '/data/visits' as (user, url, time);
Visits = foreach Visits generate user, Canonicalize(url), time;

Pages = load '/data/pages' as (url, pagerank);

VP = join Visits by url, Pages by url;
UserVisits = group VP by user;
UserPageranks = foreach UserVisits generate group as user, AVG(VP.pagerank) as avgpr;
GoodUsers = filter UserPageranks by avgpr > 0.5;

store GoodUsers into '/data/good_users';
Pig Slides adapted from Olston et al.
System-Level Dataflow
[Figure: system-level dataflow across the cluster]
• load Visits (canonicalize URLs) and load Pages
• join by url
• group by user
• compute average pagerank, filter
• the answer
Pig Slides adapted from Olston et al.
MapReduce Code

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class MRExample {
    public static class LoadPages extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable k, Text val,
                OutputCollector<Text, Text> oc, Reporter reporter)
                throws IOException {
            // Pull the key out
            String line = val.toString();
            int firstComma = line.indexOf(',');
            String key = line.substring(0, firstComma);
            String value = line.substring(firstComma + 1);
            Text outKey = new Text(key);
            // Prepend an index to the value so we know which file
            // it came from.
            Text outVal = new Text("1" + value);
            oc.collect(outKey, outVal);
        }
    }

    public static class LoadAndFilterUsers extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable k, Text val,
                OutputCollector<Text, Text> oc, Reporter reporter)
                throws IOException {
            // Pull the key out
            String line = val.toString();
            int firstComma = line.indexOf(',');
            String value = line.substring(firstComma + 1);
            int age = Integer.parseInt(value);
            if (age < 18 || age > 25) return;
            String key = line.substring(0, firstComma);
            Text outKey = new Text(key);
            // Prepend an index to the value so we know which file
            // it came from.
            Text outVal = new Text("2" + value);
            oc.collect(outKey, outVal);
        }
    }

    public static class Join extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> iter,
                OutputCollector<Text, Text> oc, Reporter reporter)
                throws IOException {
            // For each value, figure out which file it's from and
            // store it accordingly.
            List<String> first = new ArrayList<String>();
            List<String> second = new ArrayList<String>();
            while (iter.hasNext()) {
                Text t = iter.next();
                String value = t.toString();
                if (value.charAt(0) == '1') first.add(value.substring(1));
                else second.add(value.substring(1));
                reporter.setStatus("OK");
            }
            // Do the cross product and collect the values
            for (String s1 : first) {
                for (String s2 : second) {
                    String outval = key + "," + s1 + "," + s2;
                    oc.collect(null, new Text(outval));
                    reporter.setStatus("OK");
                }
            }
        }
    }

    public static class LoadJoined extends MapReduceBase
            implements Mapper<Text, Text, Text, LongWritable> {
        public void map(Text k, Text val,
                OutputCollector<Text, LongWritable> oc, Reporter reporter)
                throws IOException {
            // Find the url (the second comma-delimited field)
            String line = val.toString();
            int firstComma = line.indexOf(',');
            int secondComma = line.indexOf(',', firstComma + 1);
            String key = line.substring(firstComma + 1, secondComma);
            // drop the rest of the record, I don't need it anymore,
            // just pass a 1 for the combiner/reducer to sum instead.
            Text outKey = new Text(key);
            oc.collect(outKey, new LongWritable(1L));
        }
    }

    public static class ReduceUrls extends MapReduceBase
            implements Reducer<Text, LongWritable, WritableComparable, Writable> {
        public void reduce(Text key, Iterator<LongWritable> iter,
                OutputCollector<WritableComparable, Writable> oc,
                Reporter reporter) throws IOException {
            // Add up all the values we see
            long sum = 0;
            while (iter.hasNext()) {
                sum += iter.next().get();
                reporter.setStatus("OK");
            }
            oc.collect(key, new LongWritable(sum));
        }
    }

    public static class LoadClicks extends MapReduceBase
            implements Mapper<WritableComparable, Writable, LongWritable, Text> {
        public void map(WritableComparable key, Writable val,
                OutputCollector<LongWritable, Text> oc, Reporter reporter)
                throws IOException {
            oc.collect((LongWritable) val, (Text) key);
        }
    }

    public static class LimitClicks extends MapReduceBase
            implements Reducer<LongWritable, Text, LongWritable, Text> {
        int count = 0;
        public void reduce(LongWritable key, Iterator<Text> iter,
                OutputCollector<LongWritable, Text> oc, Reporter reporter)
                throws IOException {
            // Only output the first 100 records
            while (count < 100 && iter.hasNext()) {
                oc.collect(key, iter.next());
                count++;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf lp = new JobConf(MRExample.class);
        lp.setJobName("Load Pages");
        lp.setInputFormat(TextInputFormat.class);
        lp.setOutputKeyClass(Text.class);
        lp.setOutputValueClass(Text.class);
        lp.setMapperClass(LoadPages.class);
        FileInputFormat.addInputPath(lp, new Path("/user/gates/pages"));
        FileOutputFormat.setOutputPath(lp, new Path("/user/gates/tmp/indexed_pages"));
        lp.setNumReduceTasks(0);
        Job loadPages = new Job(lp);

        JobConf lfu = new JobConf(MRExample.class);
        lfu.setJobName("Load and Filter Users");
        lfu.setInputFormat(TextInputFormat.class);
        lfu.setOutputKeyClass(Text.class);
        lfu.setOutputValueClass(Text.class);
        lfu.setMapperClass(LoadAndFilterUsers.class);
        FileInputFormat.addInputPath(lfu, new Path("/user/gates/users"));
        FileOutputFormat.setOutputPath(lfu, new Path("/user/gates/tmp/filtered_users"));
        lfu.setNumReduceTasks(0);
        Job loadUsers = new Job(lfu);

        JobConf join = new JobConf(MRExample.class);
        join.setJobName("Join Users and Pages");
        join.setInputFormat(KeyValueTextInputFormat.class);
        join.setOutputKeyClass(Text.class);
        join.setOutputValueClass(Text.class);
        join.setMapperClass(IdentityMapper.class);
        join.setReducerClass(Join.class);
        FileInputFormat.addInputPath(join, new Path("/user/gates/tmp/indexed_pages"));
        FileInputFormat.addInputPath(join, new Path("/user/gates/tmp/filtered_users"));
        FileOutputFormat.setOutputPath(join, new Path("/user/gates/tmp/joined"));
        join.setNumReduceTasks(50);
        Job joinJob = new Job(join);
        joinJob.addDependingJob(loadPages);
        joinJob.addDependingJob(loadUsers);

        JobConf group = new JobConf(MRExample.class);
        group.setJobName("Group URLs");
        group.setInputFormat(KeyValueTextInputFormat.class);
        group.setOutputKeyClass(Text.class);
        group.setOutputValueClass(LongWritable.class);
        group.setOutputFormat(SequenceFileOutputFormat.class);
        group.setMapperClass(LoadJoined.class);
        group.setCombinerClass(ReduceUrls.class);
        group.setReducerClass(ReduceUrls.class);
        FileInputFormat.addInputPath(group, new Path("/user/gates/tmp/joined"));
        FileOutputFormat.setOutputPath(group, new Path("/user/gates/tmp/grouped"));
        group.setNumReduceTasks(50);
        Job groupJob = new Job(group);
        groupJob.addDependingJob(joinJob);

        JobConf top100 = new JobConf(MRExample.class);
        top100.setJobName("Top 100 sites");
        top100.setInputFormat(SequenceFileInputFormat.class);
        top100.setOutputKeyClass(LongWritable.class);
        top100.setOutputValueClass(Text.class);
        top100.setOutputFormat(SequenceFileOutputFormat.class);
        top100.setMapperClass(LoadClicks.class);
        top100.setCombinerClass(LimitClicks.class);
        top100.setReducerClass(LimitClicks.class);
        FileInputFormat.addInputPath(top100, new Path("/user/gates/tmp/grouped"));
        FileOutputFormat.setOutputPath(top100, new Path("/user/gates/top100sitesforusers18to25"));
        top100.setNumReduceTasks(1);
        Job limit = new Job(top100);
        limit.addDependingJob(groupJob);

        JobControl jc = new JobControl("Find top 100 sites for users 18 to 25");
        jc.addJob(loadPages);
        jc.addJob(loadUsers);
        jc.addJob(joinJob);
        jc.addJob(groupJob);
        jc.addJob(limit);
        jc.run();
    }
}
Pig Slides adapted from Olston et al.
Java vs. Pig Latin
[Figure: bar charts comparing lines of code and development time (in minutes), Hadoop vs. Pig]
• 1/20 the lines of code
• 1/16 the development time
• Performance on par with raw Hadoop!
Pig Slides adapted from Olston et al.
Pig takes care of…
• Schema and type checking
• Translating into an efficient physical dataflow (i.e., a sequence of one or more MapReduce jobs)
• Exploiting data reduction opportunities (e.g., early partial aggregation via a combiner)
• Executing the system-level dataflow (i.e., running the MapReduce jobs)
• Tracking progress, errors, etc.