
Distributed File Systems & Hadoop

Kevin Queenan

What is a Distributed File System (DFS)?

Simply...

A distributed file system (DFS) is a file system whose data is stored on one or more remote servers, yet accessed and processed as if it were stored on the local client machine.

What is Hadoop?

Apache Hadoop is...

A framework and ecosystem of open-source software tools that allows for the distributed storage and processing of extremely large data sets across clusters of commodity-grade hardware.

Why does Hadoop exist?

Consider current industry trends...

Data at a massive scale -> TB and PB

Facebook ingested 20 TB of data per day in 2011

NYSE generated 1 TB of data per day in 2010

This data is also heterogeneous:

Images, social network activity, log files, IoT sensors, etc.

TB and PB

80% unstructured

20% structured

Heterogeneous data consisting of log files, audio, video, images, etc.

Good, bad, undefined, incomplete?

Time sensitive, real-time, etc

Challenge: Read 1TB of data

1 machine

4 I/O channels

Each channel operates @ 100 MB/s

Time taken?

45 minutes

10 machines

4 I/O channels

Each channel operates @ 100 MB/s

Time taken?

4.5 minutes
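The arithmetic behind the two scenarios above can be checked with a short sketch. Aggregate bandwidth is machines × channels × per-channel rate; the exact results (≈41.7 and ≈4.2 minutes) are slightly below the rounded slide figures.

```python
# Back-of-the-envelope read-time calculation from the slides.
# Illustrative figures only: 4 I/O channels at 100 MB/s per machine.

TB_IN_MB = 1_000_000  # 1 TB expressed in MB (decimal units)

def read_time_minutes(data_mb: float, machines: int,
                      channels: int = 4, mb_per_sec: float = 100.0) -> float:
    """Time to read data_mb megabytes, spread evenly across machines
    that each read in parallel over `channels` I/O channels."""
    aggregate_bandwidth = machines * channels * mb_per_sec  # MB/s
    return data_mb / aggregate_bandwidth / 60               # minutes

print(read_time_minutes(TB_IN_MB, machines=1))    # ~41.7 min (slide quotes ~45)
print(read_time_minutes(TB_IN_MB, machines=10))   # ~4.2 min (slide quotes ~4.5)
```

This is the core argument for Hadoop: throughput scales linearly with the number of machines reading in parallel.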

Where was Hadoop developed?

Hadoop Origins

Three Google white papers, and their Hadoop counterparts:

1. GFS -> HDFS
2. MapReduce -> MapReduce
3. BigTable -> HBase

Hadoop is the faithful, open-source implementation of Google’s MapReduce, GFS, and BigTable

Hadoop’s primary architect is Doug Cutting who is also credited with creating Apache Lucene

The project grew out of Nutch, an open-source web crawler Cutting worked on; Hadoop was spun out of Nutch after Cutting joined Yahoo!

Cutting’s son had named a yellow stuffed elephant “Hadoop,” and Doug adopted the name for the project

Hadoop’s Design Axioms

1. Store and process massive amounts of data (on the order of PB)
2. Performance must scale linearly
3. Failure is expected
4. Easily manageable
5. Self-healing file system
6. Run on commodity, off-the-shelf hardware

A fundamental tenet of relational databases is the schema -> data is inherently structured

What about the massive amount of unstructured data we need to house and process?

Scaling commercial relational databases is incredibly expensive and limited

Hadoop costs approximately $250/TB

A commercial RDBMS costs approximately $100,000 - $200,000/TB

Hadoop vs RDBMS

Hadoop Architecture

Master/Slave Model

Master

NameNode (HDFS)

JobTracker (MapReduce)

Slave

DataNode (HDFS)

TaskTracker (MapReduce)

NameNode file metadata: /user/kevin/data1.txt -> blocks 1, 2, 3

Replication factor r = 3 (configured in hdfs-site.xml), so each block lives on three DataNodes:

DataNode 1: blocks 2, 3
DataNode 2: blocks 1, 3
DataNode 3: blocks 1, 2, 3
DataNode 4: blocks 1, 2
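The replication factor shown above is set cluster-wide via the `dfs.replication` property in hdfs-site.xml (a minimal fragment; real deployments set many more properties in this file):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```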

Underlying Filesystem

Each physical drive in each slave DataNode machine is formatted with a conventional local filesystem such as ext3 or ext4

HDFS is an abstract filesystem layered on top of these local filesystems: files are split into large, fixed-size blocks that are written to slave DataNodes, while the master NameNode tracks which blocks make up each file and where their replicas live
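A minimal sketch of the block-splitting idea, assuming the 64 MB default block size of early Hadoop (Hadoop 2 raised the default to 128 MB; the function name is illustrative, not an HDFS API):

```python
# Sketch: how HDFS carves a file into fixed-size blocks.
# 64 MB was the default block size in early Hadoop releases.
BLOCK_SIZE_MB = 64

def split_into_blocks(file_size_mb: int) -> list[int]:
    """Return the size of each block for a file of file_size_mb MB."""
    n_full, remainder = divmod(file_size_mb, BLOCK_SIZE_MB)
    blocks = [BLOCK_SIZE_MB] * n_full
    if remainder:
        blocks.append(remainder)  # the last block may be smaller
    return blocks

print(split_into_blocks(200))   # [64, 64, 64, 8]
```

Each of these blocks would then be replicated to r DataNodes, as in the diagram above.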

MapReduce

Data Processing Paradigm

MapReduce is a framework for high-performance distributed data processing built on a divide-and-aggregate programming paradigm: a map phase transforms input records into key/value pairs, and a reduce phase aggregates all values that share a key
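The paradigm can be sketched in a single process with the classic word-count example; a real Hadoop job would distribute the map and reduce phases across TaskTrackers, but the three-stage shape (map -> shuffle/group -> reduce) is the same:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    """Map: emit a (word, 1) pair for every word in the input line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values for each key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["hadoop stores big data", "hadoop processes big data"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(shuffle(pairs)))
# {'hadoop': 2, 'stores': 1, 'big': 2, 'data': 2, 'processes': 1}
```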

Thanks for your time!
