ODI11g, Hadoop and "Big Data" Sources

DESCRIPTION

Presentation from the Rittman Mead BI Forum 2013 on ODI11g's Hadoop connectivity. Provides a background to Hadoop, HDFS and Hive, and describes how ODI11g and OBIEE 11.1.1.7+ use Hive to connect to "big data" sources.

TRANSCRIPT

Page 1: ODI11g, Hadoop and "Big Data" Sources

ODI11g, Hadoop and “Big Data”
Mark Rittman, Technical Director, Rittman Mead
Rittman Mead BI Forum 2013, Brighton & Atlanta

Page 2: ODI11g, Hadoop and "Big Data" Sources

Big Data, Hadoop and Unstructured Data Sources

• “Big data” is the hot topic in BI, DW and analytics circles
• The ability to harness vast datasets, at a highly granular level, through massively parallel computing
• Crunching loosely structured and modelled datasets using simple algorithms: Map (project) + Reduce (aggregate)
• Largely based around open-source projects and non-relational technologies
‣ Apache Hadoop
‣ MapReduce
‣ Hadoop Distributed File System
‣ Apache Hive, Sqoop, HBase etc.
• Emerging commercial vendors
‣ Cloudera
‣ Hortonworks etc.
• Can be used standalone, or linked to an enterprise DW/BI architecture

Page 3: ODI11g, Hadoop and "Big Data" Sources

Oracle’s Strategy for Business Analytics

• Connect to all of your data, from all of your sources
• Subject it to the full range of possible inquiry
• Package solutions for known problems and fixed sources
• Deploy to PCs and mobile devices, on-premise or in the cloud

(Slide graphic: Any Data, Any Source / Full Range of Analytics / Integrated Analytic Apps / On Premise, On Cloud, On Mobile)

Page 4: ODI11g, Hadoop and "Big Data" Sources

Connect to All of Your Data, From All of Your Sources

• As well as traditional application and database sources, unstructured and “big data” sources are within scope for business decision-making
‣ Data of great volume, great velocity and great variety

(Slide graphic: “Your Data” - decisions based on your data - versus “Big Data” - decisions based on all data relevant to you, spanning transactions, documents & social data, and machine-generated data)

Page 5: ODI11g, Hadoop and "Big Data" Sources

Oracle’s Big Data Products

• Oracle Big Data Appliance - Engineered System for big data acquisition and processing
‣ Cloudera Distribution of Hadoop
‣ Cloudera Manager
‣ Open-source R
‣ Oracle NoSQL Database Community Edition
‣ Oracle Enterprise Linux + Oracle JVM
• Oracle Big Data Connectors
‣ Oracle Loader for Hadoop (Hadoop > Oracle RDBMS)
‣ Oracle Direct Connector for HDFS (HDFS > Oracle RDBMS)
‣ Oracle Data Integration Adapter for Hadoop
‣ Oracle R Connector for Hadoop
• Oracle NoSQL Database (key/value-store DB based on BerkeleyDB)

Page 6: ODI11g, Hadoop and "Big Data" Sources

ODI as Part of Oracle’s Big Data Strategy

• ODI is the data integration tool for extracting data from Hadoop/MapReduce, and loading it into Oracle Big Data Appliance, Oracle Exadata and Oracle Exalytics
• Oracle Application Adapter for Hadoop provides the required data adapters
‣ Load data into Hadoop from the local filesystem, or HDFS (the Hadoop clustered filesystem)
‣ Read data from Hadoop/MapReduce using Apache Hive (JDBC) and HiveQL, and load it into the Oracle RDBMS using Oracle Loader for Hadoop
• Supported by Oracle’s Engineered Systems
‣ Exadata
‣ Exalytics
‣ Big Data Appliance (with the Cloudera Hadoop distribution)

Page 7: ODI11g, Hadoop and "Big Data" Sources

How ODI Accesses Hadoop and MapReduce

• ODI accesses data in Hadoop clusters through Apache Hive
‣ A metadata and query layer over MapReduce
‣ Provides a SQL-like language (HiveQL) and a metadata store (data dictionary)
‣ Provides a means to define “tables”, into which file data is loaded and then queried via MapReduce
‣ Accessed via the Hive JDBC driver (a separate Hadoop install is required on the ODI server, for the client libraries)
• Additional access through Oracle Direct Connector for HDFS and Oracle Loader for Hadoop

(Slide diagram: ODI 11g issues HiveQL to the Hive Server on the Hadoop cluster, which runs the query as MapReduce; results are loaded into the Oracle RDBMS via direct-path loads using Oracle Loader for Hadoop, with the transformation logic running in MapReduce)

Page 8: ODI11g, Hadoop and "Big Data" Sources

Oracle Business Analytics and Big Data Sources

• OBIEE 11g, and other Oracle Business Analytics tools, can also make use of big data sources
‣ Oracle Exalytics, through in-memory aggregates and an InfiniBand connection to Exadata, can analyze vast (structured) datasets held in relational and OLAP databases
‣ Endeca Information Discovery can analyze unstructured and semi-structured sources
‣ An InfiniBand connection to Big Data Appliance, plus the Hadoop connector in OBIEE, supports analysis via MapReduce
‣ Oracle R Distribution + Oracle Enterprise R support SAS-style statistical analysis of large datasets, as part of the Oracle Advanced Analytics Option
‣ OBIEE can access Hadoop datasources through another Apache technology called Hive

Page 9: ODI11g, Hadoop and "Big Data" Sources

OBIEE Access to Hadoop/Hive for BI Administration Tool RPD Creation

• The Hive ODBC driver has to be installed into the Windows environment, so that the BI Administration tool can connect to Hive and return table metadata
• Import as an ODBC datasource, then change the physical DB type to Apache Hadoop afterwards
• Note that OBIEE queries cannot span more than one Hive schema (no table prefixes)

Page 10: ODI11g, Hadoop and "Big Data" Sources

Set up ODBC Connection at the OBIEE Server (Linux Only)

• OBIEE 11.1.1.7+ ships with Hive ODBC drivers, though you need to use the 7.x versions
• Configure the ODBC connection in odbc.ini; the name needs to match the RPD ODBC name
• The BI Server should then be able to connect to the Hive server, and on to Hadoop/MapReduce

[ODBC Data Sources]
AnalyticsWeb=Oracle BI Server
Cluster=Oracle BI Server
SSL_Sample=Oracle BI Server
bigdatalite=Oracle 7.1 Apache Hive Wire Protocol

[bigdatalite]
Driver=/u01/app/Middleware/Oracle_BI1/common/ODBC/Merant/7.0.1/lib/ARhive27.so
Description=Oracle 7.1 Apache Hive Wire Protocol
ArraySize=16384
Database=default
DefaultLongDataBuffLen=1024
EnableLongDataBuffLen=1024
EnableDescribeParam=0
Hostname=bigdatalite
LoginTimeout=30
MaxVarcharSize=2000
PortNumber=10000
RemoveColumnQualifiers=0
StringDescribeType=12
TransactionMode=0
UseCurrentSchema=0

Page 11: ODI11g, Hadoop and "Big Data" Sources

Opportunities for OBIEE and ODI with Big Data Sources and Tools

• Load data from a Hadoop/HDFS/NoSQL environment into a structured DW for analysis
• Provide OBIEE as an alternative to Java coding or HiveQL for analysts
• Leverage Hadoop & HDFS for massively parallel staging-layer number crunching
• Make use of low-cost, fault-tolerant hardware for parts of your BI platform
• Provide the reporting and analysis for customers who have bought Oracle Big Data Appliance

Page 12: ODI11g, Hadoop and "Big Data" Sources

What is Hadoop?

• Apache Hadoop is one of the most well-known big data technologies
• A family of open-source products used to store and analyze distributed datasets
• Hadoop is the enabling framework; it automatically parallelises and co-ordinates jobs
‣ “Moves the compute to the data”
• MapReduce is the programming framework for filtering, sorting and aggregating data
‣ Map: filter and interpret input data, creating key/value pairs
‣ Reduce: summarise and aggregate
• MapReduce jobs can be written in any language (Java etc.), but it is complicated

Page 13: ODI11g, Hadoop and "Big Data" Sources

What is HDFS?

• The filesystem behind Hadoop, used to store data for Hadoop analysis
‣ Unix-like; uses commands such as ls, mkdir, chown, chmod
• Fault-tolerant, with rapid fault detection and recovery
• High-throughput, with streaming data access and large block sizes
• Designed for data locality, placing data close to where it is processed
• Accessed from the command line, via hdfs:// URLs, GUI tools etc.

[oracle@bigdatalite mapreduce]$ hadoop fs -mkdir /user/oracle/my_stuff
[oracle@bigdatalite mapreduce]$ hadoop fs -ls /user/oracle
Found 5 items
drwx------ - oracle hadoop 0 2013-04-27 16:48 /user/oracle/.staging
drwxrwxrwx - oracle hadoop 0 2012-09-18 17:02 /user/oracle/moviedemo
drwxrwxrwx - oracle hadoop 0 2012-10-17 15:58 /user/oracle/moviework
drwxr-xr-x - oracle hadoop 0 2013-05-03 17:49 /user/oracle/my_stuff
drwxr-xr-x - oracle hadoop 0 2012-08-10 16:08 /user/oracle/stage

Page 14: ODI11g, Hadoop and "Big Data" Sources

Hive as the Hadoop “Data Warehouse”

• MapReduce jobs are typically written in Java, but Hive can make this simpler
• Hive is a query environment over Hadoop/MapReduce that supports SQL-like queries
• The Hive server accepts HiveQL queries via Hive ODBC or Hive JDBC, and automatically creates MapReduce jobs against data previously loaded into the Hive HDFS tables
• This is the approach used by ODI and OBIEE to gain access to Hadoop data
• Allows Hadoop data to be accessed just like any other data source (sort of...)

Page 15: ODI11g, Hadoop and "Big Data" Sources

Hive Data and Metadata

• Hive uses an RDBMS metastore to hold table and column definitions in schemas
• Hive tables then map onto HDFS-stored files
‣ Managed tables
‣ External tables
• Oracle-like query optimizer, compiler and executor
• JDBC and ODBC drivers, plus a CLI etc.

(Slide diagram: the Hive driver (compile, optimize, execute) and the metastore sit over HDFS; managed tables live under /user/hive/warehouse/, with HDFS or local files loaded into the Hive HDFS area into tables created with the HiveQL CREATE TABLE command; external tables (e.g. /user/oracle/, /user/movies/data/) have their files loaded into HDFS by an external process, then mapped into Hive using the CREATE EXTERNAL TABLE command)
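To make the distinction concrete, here is a minimal HiveQL sketch of the two table types; the column definitions are illustrative assumptions, while the table name and paths come from the demos elsewhere in this deck:

-- Managed table: Hive owns the files, stored under /user/hive/warehouse/
-- (columns here are assumed for illustration)
CREATE TABLE movie_ratings (user_id INT, movie_id INT, rating INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- External table: Hive just maps files already sitting in HDFS;
-- dropping the table leaves the underlying files in place
CREATE EXTERNAL TABLE movie_ratings_ext (user_id INT, movie_id INT, rating INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/movies/data/';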

Page 16: ODI11g, Hadoop and "Big Data" Sources

Transforming HiveQL Queries into MapReduce Jobs

• HiveQL queries are automatically translated into Java MapReduce jobs
• The selection and filtering part becomes the Map tasks
• The aggregation part becomes the Reduce tasks

SELECT a, sum(b)
FROM myTable
WHERE a < 100
GROUP BY a

(Slide diagram: the query fans out across several Map tasks, whose output is shuffled into Reduce tasks that produce the final result)
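You can see this translation for yourself from the Hive CLI; prefixing a query with EXPLAIN prints the execution plan, including its map and reduce stages, without running the job:

hive> EXPLAIN SELECT a, sum(b) FROM myTable WHERE a < 100 GROUP BY a;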

Page 17: ODI11g, Hadoop and "Big Data" Sources

An example Hive Query Session: Connect and Display Table List

[oracle@bigdatalite ~]$ hive
Hive history file=/tmp/oracle/hive_job_log_oracle_201304170403_1991392312.txt

hive> show tables;
OK
dwh_customer
dwh_customer_tmp
i_dwh_customer
ratings
src_customer
src_sales_person
weblog
weblog_preprocessed
weblog_sessionized
Time taken: 2.925 seconds

The Hive server lists out all the “tables” that have been defined within the Hive environment.

Page 18: ODI11g, Hadoop and "Big Data" Sources

An example Hive Query Session: Display Table Row Count

hive> select count(*) from src_customer;

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201303171815_0003, Tracking URL = http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201303171815_0003
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -Dmapred.job.tracker=localhost.localdomain:8021 -kill job_201303171815_0003

2013-04-17 04:06:59,867 Stage-1 map = 0%, reduce = 0%
2013-04-17 04:07:03,926 Stage-1 map = 100%, reduce = 0%
2013-04-17 04:07:14,040 Stage-1 map = 100%, reduce = 33%
2013-04-17 04:07:15,049 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201303171815_0003
OK
25
Time taken: 22.21 seconds

(Slide annotations: the user requests count(*) from the table; the Hive server generates a MapReduce job to “map” the table’s key/value pairs and then reduce the results to a table count; the MapReduce job is run automatically by the Hive server, and the results are returned to the user)

Page 19: ODI11g, Hadoop and "Big Data" Sources

OBIEE and ODI Access to Hive, Leveraging MapReduce with no Java Coding

• Requests in HiveQL arrive via Hive ODBC, Hive JDBC or through the Hive command shell
• JDBC and ODBC access requires the Thrift server
‣ Provides an RPC call interface over Hive for external processes
• All queries then get parsed, optimized and compiled, then sent to the Hadoop NameNode and Job Tracker
• Hadoop then processes the query, generating MapReduce jobs and distributing them to run in parallel across all the data nodes
• Hadoop access can still be performed procedurally if needed, typically coded by hand in Java, or through Pig etc.
‣ The equivalent of PL/SQL compared to SQL
‣ But Hive works well with the OBIEE/ODI paradigm

Page 20: ODI11g, Hadoop and "Big Data" Sources

Complementary Technologies: HDFS, Cloudera Manager, Hue, Beeswax etc

• You can download your own Hive binaries, libraries etc. from the Apache Hadoop website
• Or use pre-built VMs and distributions from the likes of Cloudera
‣ Cloudera CDH3/4 is used on Oracle Big Data Appliance
‣ Open-source + proprietary tools (Cloudera Manager)
• Other tools for managing Hive, HDFS etc. include
‣ Hue (HDFS file browser + management)
‣ Beeswax (Hive administration + querying)
• Other complementary/required Hadoop tools
‣ Sqoop
‣ HDFS
‣ Thrift

Page 21: ODI11g, Hadoop and "Big Data" Sources

Demonstration: Simple Data Selection and Querying using Hive on Cloudera CDH3

Page 22: ODI11g, Hadoop and "Big Data" Sources

ODI + Big Data Examples : Providing the Bridge Between Hadoop + OBIEE

• OBIEE now has the ability to report against Hadoop data, via Hive
‣ Assumes that the data is already loaded into the Hive warehouse tables
• ODI can therefore be used to load the Hive tables, through either:
‣ Loading Hive from files
‣ Joining and loading from Hive to Hive
‣ Loading and transforming via shell scripts (Python, Perl etc.)
• ODI could also extract the Hive data and load it into Oracle, if more appropriate

Page 23: ODI11g, Hadoop and "Big Data" Sources

Configuring ODI 11.1.1.6+ for Hadoop Connectivity

• Obtain an installation of Hadoop/Hive from somewhere (Cloudera CDH3/4, for example)
• Copy the following files into a temp directory, archive them, and transfer to the ODI environment:

$HIVE_HOME/lib/*.jar
$HADOOP_HOME/hadoop-*-core*.jar
$HADOOP_HOME/hadoop-*-tools*.jar

for example:

/usr/lib/hive/lib/*.jar
/usr/lib/hadoop-0.20/hadoop-*-core*.jar
/usr/lib/hadoop-0.20/hadoop-*-tools*.jar

• Copy the JAR files into the userlib directory and the (standalone) agent lib directory, e.g.:

c:\Users\Administrator\AppData\Roaming\odi\oracledi\userlib

• Restart ODI Studio

Page 24: ODI11g, Hadoop and "Big Data" Sources

Registering HDFS and Hive Sources and Targets in the ODI Topology

• For Hive sources and targets, use the Hive technology
‣ JDBC Driver: Apache Hive JDBC Driver
‣ JDBC URL: jdbc:hive://[server_name]:10000/default
‣ (Flexfield Name) Hive Metastore URIs: thrift://[server_name]:10000
• For HDFS sources, use the File technology
‣ JDBC URL: hdfs://[server_name]:port
‣ A special HDFS “trick” to use the File technology (there is no specific HDFS technology)

Page 25: ODI11g, Hadoop and "Big Data" Sources

Reverse Engineering Hive, HDFS and Local File Datastores + Models

• Hive tables reverse-engineer just like regular tables
• Define the model in the Designer navigator; the Hive RKM is used to retrieve the table metadata (the same information you can see from the Hive CLI, as below)
• Information on Hive-specific metadata is stored in flexfields
‣ Hive Buckets
‣ Hive Partition Column
‣ Hive Cluster Column
‣ Hive Sort Column
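A quick way to eyeball that metadata yourself is to describe a table from the Hive CLI; DESCRIBE returns the column names and types that the RKM brings back into the ODI model (the table name below is one from the earlier demo listing):

hive> describe src_customer;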

Page 26: ODI11g, Hadoop and "Big Data" Sources

Demonstration: ODI 11.1.1.6 Configured for Hadoop Access, with Hive/HDFS Sources and Targets Registered

Page 27: ODI11g, Hadoop and "Big Data" Sources

ODI Application Adapter for Hadoop KMs

• The Application Adapter (a pay-extra option) for Hadoop connectivity
• Works for both Windows and Linux installs of ODI Studio
‣ Need to source the Hive JDBC drivers and JARs from a separate Hadoop install
• Provides six new knowledge modules:
‣ IKM File to Hive (Load Data)
‣ IKM Hive Control Append
‣ IKM Hive Transform
‣ IKM File-Hive to Oracle (OLH)
‣ CKM Hive
‣ RKM Hive

Page 28: ODI11g, Hadoop and "Big Data" Sources

Oracle Loader for Hadoop

• Oracle technology for accessing Hadoop data, and loading it into an Oracle database
• Pushes the data transformation “heavy lifting” to the Hadoop cluster, using MapReduce
• Direct-path loads into the Oracle Database, partitioned and non-partitioned
• Online and offline loads
• A key technology for fast loading of Hadoop results into the Oracle DB

Page 29: ODI11g, Hadoop and "Big Data" Sources

IKM File to Hive (Load Data): Loading of Hive Tables from Local File or HDFS

• Uses the Hive LOAD DATA command to load from local or HDFS files
‣ Calls Hadoop FS commands for simple copies/moves into and around HDFS
‣ The commands are generated by ODI through IKM File to Hive (Load Data)

hive> load data inpath '/user/oracle/movielens_src/u.data'
    > overwrite into table movie_ratings;
Loading data to table default.movie_ratings
Deleted hdfs://localhost.localdomain/user/hive/warehouse/movie_ratings
OK
Time taken: 0.341 seconds

Page 30: ODI11g, Hadoop and "Big Data" Sources

IKM File to Hive (Load Data): Loading of Hive Tables from Local File or HDFS

• IKM File to Hive (Load Data) generates the required HiveQL commands using a script template
• Executed over the Hive JDBC interface
• Success/failure/warning status is returned to ODI

Page 31: ODI11g, Hadoop and "Big Data" Sources

Load Data and Hadoop SerDe (Serializer-Deserializer) Transformations

• Hadoop SerDe transformations can be accessed, for example to transform weblogs
• A Hadoop interface that contains:
‣ Deserializer - converts incoming data into Java objects for Hive manipulation
‣ Serializer - takes Hive Java objects and converts them to output for HDFS
• A library of SerDe transformations is readily available for use with Hive (see the sketch below)
• Use the OVERRIDE_ROW_FORMAT option in the IKM to override the regular column mappings in the Mapping tab
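As an illustration of the kind of SerDe-based table definition this enables, Hive's contrib RegexSerDe parses raw log lines into columns at read time; the column list and regular expression here are illustrative assumptions, not taken from the presentation:

-- illustrative columns and regex for an Apache-style access log
CREATE EXTERNAL TABLE weblog_raw (host STRING, request STRING, status STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^ ]*) .* "([^"]*)" ([0-9]*) .*'
)
LOCATION '/user/oracle/weblog/';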

Page 32: ODI11g, Hadoop and "Big Data" Sources

IKM Hive Control Append: Loading, Joining & Filtering Between Hive Tables

• Hive source and target, with transformations according to HiveQL functionality (aggregations, functions etc.)
• Ability to join data sources
• Other data sources can be used, but will involve staging tables and additional KMs (as per any multi-source join)
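The statements this style of KM issues boil down to ordinary HiveQL insert-selects; a minimal hand-written equivalent, joining and aggregating two of the tables seen in the earlier session (the column names are assumptions for illustration), might be:

-- column names here are assumed for illustration
INSERT OVERWRITE TABLE dwh_customer
SELECT c.cust_id, c.cust_name, count(r.rating)
FROM src_customer c JOIN ratings r ON (c.cust_id = r.cust_id)
GROUP BY c.cust_id, c.cust_name;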

Page 33: ODI11g, Hadoop and "Big Data" Sources

IKM Hive Transform: Use Custom Shell Scripts to Integrate into Hive Table

• Gives the developer the ability to transform data programmatically using Python, Perl etc. scripts
• Options to map the output of the script to columns in the Hive table
• Useful for more programmatic and complex data transformations
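Under the covers this maps onto Hive's TRANSFORM clause, which streams rows through an external script via stdin/stdout; the script name and column lists below are assumptions for illustration, reusing two table names from the earlier demo listing:

-- sessionize.py (assumed) reads tab-separated rows on stdin and
-- writes transformed tab-separated rows to stdout
ADD FILE /home/oracle/scripts/sessionize.py;

INSERT OVERWRITE TABLE weblog_sessionized
SELECT TRANSFORM (host, request_date, request)
USING 'python sessionize.py'
AS (host, session_id, request)
FROM weblog_preprocessed;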

Page 34: ODI11g, Hadoop and "Big Data" Sources

IKM File-Hive to Oracle: Extract from Hive into Oracle Tables

• Uses Oracle Loader for Hadoop (OLH) to process any filtering, aggregation and transformation in Hadoop, using MapReduce
• OLH is part of Oracle Big Data Connectors (an additional cost)
• A high-performance loader into the Oracle DB
• Optional sort by primary key, and pre-partitioning of data
• Can utilise the two OLH loading modes:
‣ JDBC or OCI direct load into Oracle
‣ Unload to files, then Oracle Data Pump into the Oracle DB

Page 35: ODI11g, Hadoop and "Big Data" Sources

Demonstration: Data Integration Tasks using the ODIAAH Hadoop KMs

Page 36: ODI11g, Hadoop and "Big Data" Sources

NoSQL Data Sources and Targets with ODI 11g

• There is no specific technology or driver for NoSQL databases, but you can use Hive external tables
• Requires a specific “Hive storage handler” for key/value store sources (see the sketch below)
‣ A Hive feature for accessing data from other DB systems, for example MongoDB, Cassandra
‣ For example, https://github.com/vilcek/HiveKVStorageHandler
• Additionally needs the Hive collect_set aggregation function to aggregate results
‣ Has to be defined in the Languages panel in the Topology
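A storage handler is named in the table DDL with Hive's STORED BY clause; the handler class and table properties below are hypothetical placeholders (substitute the class shipped by whichever handler project you deploy, such as the one linked above):

-- handler class and TBLPROPERTIES keys are hypothetical placeholders
CREATE EXTERNAL TABLE kv_customer (key STRING, value STRING)
STORED BY 'com.example.kv.KVStorageHandler'
TBLPROPERTIES ('kv.store.host' = 'bigdatalite', 'kv.store.name' = 'kvstore');

-- collect_set() can then aggregate the multiple values per key
SELECT key, collect_set(value) FROM kv_customer GROUP BY key;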

Page 37: ODI11g, Hadoop and "Big Data" Sources

Pig, Sqoop and other Hadoop Technologies, and Hive

• Future versions of ODI might use other Hadoop technologies
‣ Apache Sqoop for bulk transfer between Hadoop and RDBMSs
• Other technologies are not such an obvious fit
‣ Apache Pig - the equivalent of PL/SQL to Hive’s SQL
• Commercial vendors may produce “better” versions of Hive, MapReduce etc.
‣ Cloudera Impala - a more “real-time” version of Hive
‣ MapR - solves many current issues with MapReduce, with 100% Hadoop API compatibility
• Watch this space...!

Page 38: ODI11g, Hadoop and "Big Data" Sources

ODI11g, Hadoop and “Big Data”
Mark Rittman, Technical Director, Rittman Mead
Rittman Mead BI Forum 2013, Brighton & Atlanta
