Big Data Beyond Hadoop*: Research Directions for the Future

DESCRIPTION
Michael Wrinn, Research Program Director, University Research Office, Intel Corporation
Jason Dai, Engineering Director and Principal Engineer, Intel Corporation

TRANSCRIPT
Big Data Beyond Hadoop*: Research Directions for the Future
ACAS002
Jason Dai, Engineering Director and Principal Engineer, Software and Solutions Group
Michael Wrinn, PhD, Research Program Director, University Research Office, Intel Labs
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
The PDF for this session's presentation is available from our Technical Session Catalog at the end of the day at: intel.com/go/idfsessionsBJ
URL is on top of Session Agenda Pages in Pocket Guide
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
What is Big Data?
Big Data is data that is too big, too fast, or too hard for existing systems and algorithms to handle.
• Too Big
  – Terabytes going on petabytes
  – Smart (not brute-force) massive parallelism required
• Too Fast
  – Sensors tagging everything create a firehose
  – Ingest problem
• Too Hard
  – Complex analytics are required (e.g., to find patterns, trends, and relationships)
  – Need to combine diverse data types (no schema, uncurated, inconsistent syntax and semantics)

Data should be a resource, not a load. Existing data processing tools are not a good fit.

Samuel Madden, ISTC Director and Professor of EECS, MIT
Example: Web Analytics
Large web enterprises have thousands of servers, millions of users, and terabytes per day of “click data.”
Not just simple reporting: e.g., in real time, determine what users are likely to do next, what ad to serve them, or which user they are most similar to.
Existing analytics systems either do not scale to the required volumes or do not provide the required sophistication.

Samuel Madden, ISTC Director and Professor of EECS, MIT
Example: Sensor Analytics
Smartphone providers, tolling agencies, municipalities, insurance companies, doctors, and businesses are capturing massive streams of video, position, acceleration, and other data from phones and other devices.
This data needs to be stored, processed, and mined, e.g., to measure traffic, assess driving risk, or inform medical prognosis.

Samuel Madden, ISTC Director and Professor of EECS, MIT
Era of Data Exchange

[Diagram: Hadoop* in the Big Data ecosystem. Traditional business solutions (business processing innovation, in-memory DB, integrated analytics, systems and appliances) connect to new analytics models for real-time value opportunities, delivered as cost-effective vertical solutions (healthcare, energy/scientific, manufacturing, FSI, eCommerce) on a compute platform (topology, fabric, MIC, EP, EX, Exalytics).]
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
Intel Activity Landscape on Big Data

[Diagram: Intel activities mapped across the data pipeline (data delivery; computing and storage platform; data management and processing; analytics; data usage with visualization, end-user tools, apps, and services) and across Intel Software, Intel Architecture, Intel Labs, and Intel IT. Activities include: distributed machine learning (university collaborators); Internet of Things / M2M (Intel Labs and university collaborators); HiTune* and other tools for Hadoop*; business intelligence and Hadoop; compression and decompression IPs; microservers; Hadoop distribution and service; trust broker (McAfee*); location-based service (Telmap); end-to-end data security; federated device architecture; video analytics; distributed video analytics; distributed architecture (Guavus); healthcare, telco, …; large object storage; corporate data solution programs for big data and analytics; big data market sizing and segmentation (with Bain); Hadoop performance and architecture; and others.]
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
Algorithms, Machines, People (AMPLab)

[Diagram: AMPLab brings together adaptive/active machine learning and analytics, cloud computing, and crowdsourcing / human computation to handle massive and diverse data.]

All software is released as BSD open source.
Berkeley Data Analysis System
• Mesos*: resource management platform
• SCADS: scale-independent storage systems
• PIQL, Spark: processing frameworks

[Diagram: the stack. Higher query languages / processing frameworks (Hive*, Pig*, MPI, PIQL, Shark, Spark, and Hadoop*) sit on resource management (Mesos) over storage (HDFS, SCADS); components are a mix of AMPLab and third-party software.]
Data Center Programming: Spark
• In-memory cluster computing framework for applications that reuse working sets of data
  – Iterative algorithms: machine learning, graph processing, optimization
  – Interactive data mining: an order of magnitude faster than disk-based tools
• Key idea: RDDs (“resilient distributed datasets”) that can automatically be rebuilt on failure
  – Keep large working sets in memory
  – Fault-tolerance mechanism based on “lineage”
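The lineage idea can be sketched in a few lines of Python. This is a toy model, not Spark's actual implementation: each dataset records its parent and the transformation that derives it, so a lost in-memory copy can be recomputed from its source instead of being replicated.

```python
# A minimal sketch (not Spark's actual implementation) of the "lineage" idea:
# each dataset remembers how it was derived, so a lost partition can be
# recomputed from its source instead of being replicated.

class RDD:
    """Toy resilient dataset: stores its parent and the function that derives it."""
    def __init__(self, source=None, parent=None, fn=None):
        self.source = source      # base data (for the root of the lineage chain)
        self.parent = parent      # upstream RDD this one was derived from
        self.fn = fn              # transformation applied to the parent
        self.cache = None         # in-memory materialization; may be "lost"

    def map(self, fn):
        return RDD(parent=self, fn=lambda data: [fn(x) for x in data])

    def filter(self, pred):
        return RDD(parent=self, fn=lambda data: [x for x in data if pred(x)])

    def compute(self):
        if self.cache is not None:         # fast path: served from memory
            return self.cache
        if self.parent is None:            # root: re-read the base data
            self.cache = list(self.source)
        else:                              # replay lineage on the parent's data
            self.cache = self.fn(self.parent.compute())
        return self.cache

base = RDD(source=range(10))
evens_squared = base.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.compute())   # [0, 4, 16, 36, 64]
evens_squared.cache = None       # simulate losing the cached partition
print(evens_squared.compute())   # rebuilt from lineage: [0, 4, 16, 36, 64]
```

The point of the design: no state needs to be checkpointed during normal operation, because the lineage graph is enough to reconstruct any lost piece on demand.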
Spark: Motivation
Complex jobs, interactive queries, and online processing all need one thing that Hadoop* MR lacks:
• Efficient primitives for data sharing

[Diagram: data shared across Stage 1 → Stage 2 → Stage 3 of an iterative job; across Query 1, Query 2, Query 3 of interactive mining; and across Job 1, Job 2, … of stream processing.]
Data Transfer and Sharing in Hadoop*

[Diagram: with Hadoop*, each iteration of an iterative job writes its output to HDFS and the next iteration reads it back (HDFS read → Iter. 1 → HDFS write → HDFS read → Iter. 2 → …), and each of Query 1, 2, 3 re-reads the input from HDFS to produce Result 1, 2, 3.]
Spark: In-Memory Data Sharing

[Diagram: with Spark, the input is read once and Iter. 1, Iter. 2, … share their working set through distributed memory; after one-time processing of the input, Query 1, Query 2, Query 3 are all served from memory.]
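The contrast between the two pictures can be made concrete with a toy single-machine illustration (not Spark code): an iterative job that re-parses its input on every iteration, Hadoop-style, versus one that parses once and keeps the working set in memory.

```python
# A toy illustration (not Spark code) of why data-sharing primitives matter:
# re-parsing the input every iteration vs. one-time processing plus reuse.

import time

RAW = "\n".join(str(i) for i in range(200_000))   # stand-in for an HDFS file

def parse(raw):
    return [int(line) for line in raw.splitlines()]

def iterate_rereading(raw, iters=5):
    total = 0
    for _ in range(iters):
        data = parse(raw)          # "HDFS read" repeated every iteration
        total += sum(data)
    return total

def iterate_cached(raw, iters=5):
    data = parse(raw)              # one-time processing, then reuse in memory
    total = 0
    for _ in range(iters):
        total += sum(data)
    return total

t0 = time.perf_counter(); a = iterate_rereading(RAW); t1 = time.perf_counter()
b = iterate_cached(RAW);  t2 = time.perf_counter()
assert a == b                      # same answer, very different I/O profile
print(f"re-reading: {t1 - t0:.3f}s  cached: {t2 - t1:.3f}s")
```

In a real cluster the gap is far larger than this in-process demo suggests, because each re-read crosses the network and disk rather than just re-running a parser.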
Introducing Shark
• Spark + Hive* (the SQL in NoSQL)
• Utilizes Spark’s in-memory RDD caching and flexible language capabilities: result reuse and low latency
• Scalable, fault-tolerant, fast
• Query-compatible with Hive
18
Benchmarks: Query 1
SELECT * FROM grep WHERE field LIKE ‘%XYZ%’;
30GB input table
Benchmark: Query 2
SELECT pagerank, pageURL FROM rankings WHERE pagerank > 10;
5 GB input table
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
Data Parallelism (MapReduce)

[Diagram: a large table of numbers is partitioned across CPU 1, CPU 2, CPU 3, and CPU 4, each working on its own slice.]

Solve a huge number of independent subproblems.
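The data-parallel pattern above can be sketched with the classic word-count example. This is illustrative pure Python, not Hadoop's API: partition the input, map each partition independently, then reduce by key; a real framework distributes the partitions across machines.

```python
# A minimal sketch of the data-parallel MapReduce pattern (illustrative, not
# Hadoop's API): partition the input, map each partition independently, then
# reduce by key after a shuffle groups the pairs.

from collections import defaultdict
from itertools import chain

def map_phase(partition):
    """Emit (word, 1) pairs; runs independently on each partition."""
    return [(word, 1) for line in partition for word in line.split()]

def reduce_phase(pairs):
    """Sum counts per key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big compute", "data beyond hadoop", "big ideas"]
partitions = [lines[0:1], lines[1:2], lines[2:3]]   # one "split" per worker

mapped = [map_phase(p) for p in partitions]         # embarrassingly parallel
result = reduce_phase(chain.from_iterable(mapped))  # shuffle + reduce
print(result)   # {'big': 3, 'data': 2, 'compute': 1, 'beyond': 1, 'hadoop': 1, 'ideas': 1}
```

Because each `map_phase` call touches only its own partition, the map stage scales out trivially; all coordination is concentrated in the shuffle before the reduce.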
MapReduce for Data-Parallel ML
• Excellent for large data-parallel tasks!

[Diagram: a spectrum from data-parallel to graph-parallel. MapReduce covers the data-parallel side: cross-validation, feature extraction, computing sufficient statistics.]

Is there more to machine learning?
Machine Learning Pipeline

Data → Extract Features → Graph Formation → Structured Machine Learning Algorithm → Value from Data

• Data: images, docs, movie ratings
• Extract Features: faces, important words, side info
• Graph Formation: similar faces, shared words, rated movies
• Structured Machine Learning Algorithm: belief propagation, LDA, collaborative filtering
• Value from Data: face labels, doc topics, movie recommendations
Parallelizing Machine Learning

Data → Extract Features → Graph Formation → Structured Machine Learning Algorithm → Value from Data

• Graph ingress (extracting features and forming the graph): mostly data-parallel
• Graph-structured computation (running the algorithm on the graph): graph-parallel
Addressing Graph-Parallel ML

[Diagram: the data-parallel vs. graph-parallel spectrum. MapReduce handles the data-parallel side (cross-validation, feature extraction, computing sufficient statistics). The graph-parallel side calls for a graph-parallel abstraction rather than MapReduce: graphical models (Gibbs sampling, belief propagation, variational optimization), semi-supervised learning (label propagation, CoEM), data mining (PageRank, triangle counting), and collaborative filtering (tensor factorization).]
26
0
2
4
6
8
10
12
14
16
0 2 4 6 8 10 12 14 16
Sp
eed
up
Number of CPUs
Bett
er
Optimal
GraphLab CoEM
Example: Never Ending Learner Project (CoEM)
GraphLab 16 Cores 30 min
15x Faster! 6x fewer CPUs!
Hadoop* 95 Cores 7.5 hrs
Distributed GraphLab
32 EC2 machines
80 secs
0.3% of Hadoop time
Example: PageRank
40M webpages, 1.4 billion links
• Hadoop*: 5.5 hrs
• Twister*: 1 hr
• GraphLab: 8 min
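For reference, the computation being benchmarked can be sketched as a simple power-iteration PageRank. This is an illustrative sequential version; GraphLab's engine instead schedules per-vertex updates across a cluster rather than running synchronized whole-graph rounds.

```python
# A minimal power-iteration PageRank sketch (illustrative; graph-parallel
# engines like GraphLab schedule per-vertex updates rather than full rounds).

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [outgoing neighbors]}. Returns {page: rank}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:                       # spread p's rank over its out-links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                          # dangling page: spread everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # "c": it is linked to by both "a" and "b"
```

Each iteration touches every edge, which is why 1.4 billion links make the per-iteration cost, and hence the data-sharing strategy between iterations, the dominant factor.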
Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of MapReduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
Intel’s Efforts on Hadoop*
• Intel® Distribution for Apache Hadoop*
  – Performance, security, and management
  – Downloadable from http://hadoop.intel.com/
• Intel’s open source initiatives for Hadoop
  – HiBench: comprehensive Hadoop benchmark suite (https://github.com/intel-hadoop/hibench)
  – Project Panthera: efficient support of standard SQL features on Hadoop (https://github.com/intel-hadoop/project-panthera)
  – Project Rhino: enhanced data protection for the Apache Hadoop ecosystem (https://github.com/intel-hadoop/project-rhino)
  – Graph Builder: scalable graph construction using Hadoop (http://graphlab.org/intel-graphbuilder/)
Using Spark/Shark for In-memory, Real-time Data Analysis
• Use case 1: ad-hoc and interactive queries
  – Interactive queries (exploratory ad-hoc queries, BI charting and mining)
  – Similar projects: Google* Dremel, Facebook* Peregrine, Cloudera* Impala, Apache* Drill, etc. (several seconds of latency)
  – Use Shark/Spark to achieve close to sub-second latency for interactive queries
• Use case 2: in-memory, real-time analysis
  – Iterative data mining, online analysis (e.g., loading a table into memory for online analysis, caching intermediate results for iterative machine learning)
  – Similar projects: Google PowerDrill
  – Use Shark/Spark to reliably load data into distributed memory for online analysis
Using Spark/Shark for In-memory, Real-time Data Analysis
• Use case 3: stream processing
  – Streaming analysis, CEP (e.g., intrusion detection, real-time statistics)
  – Similar projects: Twitter* Storm, Apache* S4, Facebook* Puma
  – Use Spark Streaming for stream processing: better reliability, and a unified framework and application for offline, online, and streaming analysis
• Use case 4: graph-parallel analysis and machine learning
  – Graph algorithms, machine learning (e.g., social network analysis, recommendation engines)
  – Similar projects: Google* Pregel, CMU GraphLab*
  – Use Bagel (Pregel on Spark) for graph-parallel analysis and machine learning on Spark
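The Pregel model that Bagel implements can be illustrated with a toy "think like a vertex" sketch. Bagel's real API is Scala on Spark; the Python below only demonstrates the superstep message-passing idea, here computing connected components by label propagation.

```python
# A toy sketch of the Pregel/Bagel vertex-centric model (Bagel's actual API is
# Scala on Spark). Each vertex keeps the smallest id it has seen and forwards
# it to its neighbors until nothing changes: connected components.

def pregel_components(edges, vertices):
    """edges: iterable of undirected (u, v) pairs. Returns {vertex: component}."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    value = {v: v for v in vertices}          # initial label: own id
    active = set(vertices)                    # vertices with pending work
    while active:                             # one loop pass = one superstep
        messages = {}
        for v in active:                      # send current label to neighbors
            for n in neighbors[v]:
                messages.setdefault(n, []).append(value[v])
        active = set()
        for v, incoming in messages.items():  # adopt the smallest label seen
            best = min(incoming + [value[v]])
            if best != value[v]:
                value[v] = best
                active.add(v)                 # only changed vertices stay active
    return value

edges = [(1, 2), (2, 3), (5, 6)]
print(pregel_components(edges, [1, 2, 3, 4, 5, 6]))
# {1: 1, 2: 1, 3: 1, 4: 4, 5: 5, 6: 5}
```

The key property carried over from Pregel: computation is expressed per vertex and communication happens only along edges, which is exactly the structure graph-parallel engines exploit for distribution.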
Summary
• MapReduce as implemented in Hadoop* is extremely useful, but:
  – In-memory implementations show serious advantages
  – Graph algorithms may be more suitable for the problem at hand
• Intel continues to work with university researchers
• Intel works to move research results into production environments
Call to Action
• Use Intel research results in your own big data efforts!
• Work with us on next-gen, in-memory, real-time analysis using Spark/Shark
Legal Disclaimer
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
• A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
• Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
• The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
• Intel product plans in this presentation do not constitute Intel plan of record product roadmaps. Please contact your Intel representative to obtain Intel's current plan of record product roadmaps.
• Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: http://www.intel.com/products/processor_number.
• Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
• Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm
• Intel, Sponsors of Tomorrow and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
• *Other names and brands may be claimed as the property of others.
• Copyright ©2013 Intel Corporation.
Legal Disclaimer
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance of Intel’s and competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast.
Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures. Intel’s current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The succession and transition process may have a direct and/or indirect effect on the business and operations of the company. 
In connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are being considered for the CEO position), and keep employees focused on achieving the company’s strategic goals and objectives. Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent Form 10-Q, report on Form 10-K and earnings release. Rev. 1/17/13