Apache Hadoop and Hive


TRANSCRIPT

  • Presented By

  • Architecture of the Hadoop Distributed File System
    Hadoop usage at Facebook
    Ideas for Hadoop-related research

  • Hadoop Developer
    Core contributor since Hadoop's infancy
    Project Lead for the Hadoop Distributed File System
    Facebook (Hadoop, Hive, Scribe)
    Yahoo! (Hadoop in Yahoo Search)
    Veritas (San Point Direct, Veritas File System)
    IBM Transarc (Andrew File System)
    UW Computer Science Alumni (Condor Project)


  • Need to process multi-petabyte datasets
    Expensive to build reliability into each application
    Nodes fail every day
      Failure is expected, rather than exceptional
      The number of nodes in a cluster is not constant
    Need common infrastructure
      Efficient, reliable, open source (Apache License)
    The above goals are the same as Condor's, but workloads are IO bound, not CPU bound

  • Need a multi-petabyte warehouse
    Files are insufficient data abstractions
      Need tables, schemas, partitions, indices
    SQL is highly popular
    Need an open data format
      RDBMS have a closed data format
      Flexible schema
    Hive is a Hadoop subproject!

  • Dec 2004: Google GFS paper published
    July 2005: Nutch uses MapReduce
    Feb 2006: Becomes a Lucene subproject
    Apr 2007: Yahoo! on a 1000-node cluster
    Jan 2008: An Apache Top Level Project
    Jul 2008: A 4000-node test cluster
    Sept 2008: Hive becomes a Hadoop subproject


  • Amazon/A9
    Facebook
    Google
    IBM
    Joost
    Last.fm
    New York Times
    PowerSet
    Veoh
    Yahoo!

  • Typically a 2-level architecture
    Nodes are commodity PCs
    30-40 nodes per rack
    Uplink from the rack is 3-4 gigabit
    Rack-internal is 1 gigabit

  • Very large distributed file system
      10K nodes, 100 million files, 10 PB
    Assumes commodity hardware
      Files are replicated to handle hardware failure
      Detects failures and recovers from them
    Optimized for batch processing
      Data locations are exposed so that computations can move to where the data resides
      Provides very high aggregate bandwidth
    User space, runs on heterogeneous OS

  • HDFS Architecture
    Client, NameNode, DataNodes, SecondaryNameNode
      1. Client sends a filename to the NameNode
      2. NameNode returns the block IDs and the DataNodes that hold them
      3. Client reads the data directly from those DataNodes
      DataNodes report cluster membership to the NameNode
    NameNode: maps a file to a file-id and a list of blocks
    DataNode: maps a block-id to a physical location on disk
    SecondaryNameNode: periodic merge of the Transaction Log
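
    The read path above can be driven entirely through Hadoop's FileSystem API. The sketch below is illustrative rather than taken from the slides; the class name, NameNode URI, and file path are hypothetical.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode URI (older releases use the key fs.default.name)
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            FileSystem fs = FileSystem.get(conf);

            // open() asks the NameNode for the block locations of the file;
            // the returned stream then reads the blocks directly from DataNodes.
            try (FSDataInputStream in = fs.open(new Path("/logs/part-00000"))) {
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }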

  • Single namespace for the entire cluster
    Data coherency
      Write-once-read-many access model
      Clients can only append to existing files
    Files are broken up into blocks
      Typically 128 MB block size
      Each block is replicated on multiple DataNodes
    Intelligent client
      Client can find the location of blocks
      Client accesses data directly from the DataNode


  • Meta-data in memory
      The entire metadata is in main memory
      No demand paging of meta-data
    Types of metadata
      List of files
      List of blocks for each file
      List of DataNodes for each block
      File attributes, e.g. creation time, replication factor
    A Transaction Log
      Records file creations, file deletions, etc.

  • A block server
      Stores data in the local file system (e.g. ext3)
      Stores meta-data of a block (e.g. CRC)
      Serves data and meta-data to clients
    Block Report
      Periodically sends a report of all existing blocks to the NameNode
    Facilitates pipelining of data
      Forwards data to other specified DataNodes

  • Current strategy
      One replica on the local node
      Second replica on a remote rack
      Third replica on the same remote rack
      Additional replicas are randomly placed
    Clients read from the nearest replica
    Would like to make this policy pluggable

  • Use checksums to validate data
      Uses CRC32
    File creation
      Client computes a checksum per 512 bytes
      DataNode stores the checksum
    File access
      Client retrieves the data and checksum from the DataNode
      If validation fails, the client tries other replicas
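
    As a rough illustration of the per-chunk checksum idea (not the HDFS implementation itself), the snippet below computes one CRC32 value for every 512-byte chunk of a buffer, the same granularity the slide describes for file creation.

    import java.util.zip.CRC32;

    public class ChunkChecksums {
        static final int BYTES_PER_CHECKSUM = 512;

        // One CRC32 per 512-byte chunk (the last chunk may be shorter).
        public static long[] checksums(byte[] data) {
            int chunks = (data.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
            long[] sums = new long[chunks];
            CRC32 crc = new CRC32();
            for (int i = 0; i < chunks; i++) {
                int off = i * BYTES_PER_CHECKSUM;
                int len = Math.min(BYTES_PER_CHECKSUM, data.length - off);
                crc.reset();
                crc.update(data, off, len);
                sums[i] = crc.getValue();
            }
            return sums;
        }
    }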

  • A single point of failure
    Transaction Log stored in multiple directories
      A directory on the local file system
      A directory on a remote file system (NFS/CIFS)
    Need to develop a real HA solution

  • Client retrieves a list of DataNodes on which to place replicas of a block
    Client writes the block to the first DataNode
    The first DataNode forwards the data to the next DataNode in the pipeline
    When all replicas are written, the client moves on to write the next block in the file
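
    From the client's point of view, the pipelined write is hidden behind an ordinary output stream. The sketch below is a minimal, assumed example (class name and path are hypothetical); HDFS itself chooses the DataNodes and forwards each block through the pipeline.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // create() obtains target DataNodes from the NameNode; the stream
            // pipelines each block: client -> first DataNode -> next DataNode -> ...
            try (FSDataOutputStream out = fs.create(new Path("/tmp/pipelined.txt"))) {
                out.writeBytes("data flows through the DataNode pipeline\n");
            }
        }
    }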

  • Goal: % disk full should be similar across DataNodes
    Usually run when new DataNodes are added
    The cluster stays online while the Rebalancer is active
    The Rebalancer is throttled to avoid network congestion
    Command-line tool

  • The Map-Reduce programming model
      Framework for distributed processing of large data sets
      Pluggable user code runs in a generic framework
    A common design pattern in data processing:
      cat * | grep | sort | uniq -c | cat > file
      input | map | shuffle | reduce | output
    Natural for:
      Log processing
      Web search indexing
      Ad-hoc queries
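
    A minimal sketch of this pattern, assuming Hadoop's Java MapReduce API: the classic word count, where map plays the role of grep and reduce the role of sort | uniq -c. Input and output paths are placeholders.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // map: emit (word, 1) for every word in the input split
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // reduce: sum the counts delivered by the shuffle for each word
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/input"));      // placeholder
            FileOutputFormat.setOutputPath(job, new Path("/output"));   // placeholder
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }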

  • Production cluster
      4800 cores, 600 machines, 16 GB per machine (April 2009)
      8000 cores, 1000 machines, 32 GB per machine (July 2009)
      4 SATA disks of 1 TB each per machine
      2-level network hierarchy, 40 machines per rack
      Total cluster size is 2 PB, projected to be 12 PB in Q3 2009
    Test cluster
      800 cores, 16 GB each


  • Web Servers -> Scribe Servers -> Network Storage -> Hadoop Cluster -> Oracle RAC / MySQL

  • Statistics:
      15 TB of uncompressed data ingested per day
      55 TB of compressed data scanned per day
      3200+ jobs on the production cluster per day
      80M compute minutes per day
    Barrier to entry is reduced:
      80+ engineers have run jobs on the Hadoop platform
      Analysts (non-engineers) are starting to use Hadoop through Hive


  • Ideas for Collaboration

  • Run Condor jobs on the Hadoop File System
    Create HDFS using local disks on Condor nodes
    Use the HDFS API to find data locations
    Place computation close to the data location
    Support the map-reduce data abstraction model
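
    "Use the HDFS API to find data locations" can be illustrated with the FileSystem block-location call below. This is a hedged sketch (class name and path are hypothetical): a scheduler such as a Condor job router could use the returned hosts to place computation near the data.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocationExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/data/events.log"));
            // One BlockLocation per block, listing the DataNodes that hold a replica
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset %d, length %d, hosts %s%n",
                        block.getOffset(), block.getLength(),
                        String.join(", ", block.getHosts()));
            }
        }
    }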

  • Power Management
    Major operating expense
    Power down CPUs when idle
    Block placement based on access pattern
      Move cold data to disks that need less power
    Condor Green


  • Design quantitative benchmarks
      Measure Hadoop's fault tolerance
      Measure Hive's schema flexibility
    Compare the benchmark results
      with RDBMS
      with other grid computing engines

  • Current state of affairs
      FIFO and Fair Share schedulers
      Checkpointing and parallelism are tied together
    Topics for research
      Cycle-scavenging scheduler
      Separate checkpointing and parallelism
      Use resource matchmaking to support heterogeneous Hadoop compute clusters
      Scheduler and API for MPI workloads


  • Machines and software are commodity
    Networking components are not
      High-end, costly switches are needed
    Hadoop assumes a hierarchical topology
    Design a new topology based on commodity hardware


  • Hadoop log analysis
      Failure prediction and root-cause analysis
    Hadoop data rebalancing
      Based on access patterns and load
    Best use of flash memory?

  • Lots of synergy between Hadoop and Condor
    Let's get the best of both worlds

  • HDFS Design: http://hadoop.apache.org/core/docs/current/hdfs_design.html
    Hadoop API: http://hadoop.apache.org/core/docs/current/api/
    Hive: http://hadoop.apache.org/hive/

  • Thank you
    Presented By


    This is the architecture of our backend data warehousing system. It provides important information on the usage of our website, including, but not limited to, the number of page views of each page, the number of active users in each country, etc.

    We generate 3 TB of compressed log data every day. All of this data is stored and processed by the Hadoop cluster, which consists of over 600 machines. The summary of the log data is then copied to Oracle and MySQL databases to make sure it is easy for people to access.
