single node hadoop cluster setup on cent os-gopal (1)
Post on 02-Jun-2018
SINGLE NODE HADOOP CLUSTER SETUP ON CENT OS
- By Gopal Krishna
----------------------------------------------------------
STEP 1: Download VMware Player or VMware Workstation
You may download VMware Player from the location below:
https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/5_0
STEP 2: Download the CentOS 6.3 DVD ISO, which is a stable version
For 64-bit Windows hosts, use the URL below:
http://mirrors.hns.net.in/centos/6.3/isos/x86_64/
For 32-bit Windows hosts, use the URL below:
http://wiki.centos.org/Download
STEP 3: Run VMware Player and click on Create a New Virtual Machine.
Browse to the ISO downloaded in the previous step.
CentOS will be installed on the local system. You will be asked to enter a password
for the root user; enter your root password and do remember it.
After the installation is over, click on the Install to disk icon on your CentOS
desktop.
Once that installation is over, shut down the machine and log in as the root
user.
STEP 4: Installation of VMWare Tools
After installing VMware Player, you can install VMware Tools so that you can share data from
the host system to the guest system. You can see the full-screen view only if you install VMware Tools.
Below are the steps for installing VMware Tools.
Go to the VM menu item and select Install VMware Tools, as shown in the screenshot below:
Go to the desktop:
cd ~/Desktop
Untar the tar file:
tar -xzf VMwareTools*.tar.gz
cd ~/Desktop/vmware-tools-distrib
Install the VMware Tools:
./vmware-install.pl
The above command asks you for some values; the defaults are good
enough. You have to press Enter 10-12 times.
Enable sharing between the host OS and guest OS as follows:
Go to Virtual Machine > Virtual Machine Settings > Options > Shared Folders and see
if the share is enabled or not. If sharing is enabled, you can add a new share location.
STEP 5: Installation of Java
Hadoop needs Java installed on your CentOS, but CentOS does not come with
Oracle Java because of licensing issues. So, please use the commands below
to install Java.
Download the latest Java from the location below. Click on the
JDK Download button and select the Accept License Agreement radio
button.
http://www.oracle.com/technetwork/java/javase/downloads/index.html
[root@localhost ~]# rpm -Uvh /root/Downloads/jdk-7u15-linux-x64.rpm
[root@localhost ~]# alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 20000
[root@localhost ~]# export JAVA_HOME="/usr/java/latest"
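The export above lasts only for the current shell session. A minimal sketch of making JAVA_HOME persistent, assuming the /usr/java/latest path used above (on a real CentOS system the file would go under /etc/profile.d/, which needs root; the sketch writes to /tmp so it can be tried safely):

```shell
# Hedged sketch: persist JAVA_HOME across logins. On CentOS this file would
# normally be /etc/profile.d/java_home.sh; /tmp is used here only for the demo.
cat > /tmp/java_home.sh <<'EOF'
export JAVA_HOME="/usr/java/latest"
export PATH="$JAVA_HOME/bin:$PATH"
EOF

# Source it the way a login shell would source /etc/profile.d/*.sh
. /tmp/java_home.sh
echo "$JAVA_HOME"    # prints /usr/java/latest
```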
Confirm the Java compiler path by running:
[root@localhost ~]# javac -version
javac 1.7.0_15
[root@localhost ~]#
Confirm the Java version by running:
[root@localhost ~]# java -version
java version "1.7.0_15"
Java(TM) SE Runtime Environment (build 1.7.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
[root@localhost ~]#
If you face any issues while installing the JDK, please check the JDK version you have downloaded against the one you are using in the
command.
If it shows some errors related to missing packages, just ignore them.
STEP 6: Confirm your machine name
When you create a new CentOS machine, the default host name is
localhost.localdomain. Check your hostname by giving the following command.
[root@localhost ~]#hostname
You should get localhost.localdomain as output of above command.
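If you ever need to change the hostname, note that on CentOS 6 the persistent name is read from /etc/sysconfig/network at boot; a typical file looks like the sketch below (the HOSTNAME value shown is just the default from this tutorial, and a change takes effect after a reboot or a network service restart):

```
# /etc/sysconfig/network (CentOS 6)
NETWORKING=yes
HOSTNAME=localhost.localdomain
```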
STEP 7: Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop. While that's not required, it is recommended because it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine
(think: security, permissions, backups, etc.).
[root@localhost ~]# groupadd hadoop
[root@localhost ~]# useradd hduser -g hadoop
[root@localhost ~]# passwd hduser
It asks for a new password; enter it again for confirmation, and do remember
your password.
Add hduser to the sudoers list so that hduser can do admin tasks.
Step 1: visudo
Step 2: Add a line under ## Allow root to run commands anywhere, in the format: hduser ALL=(ALL) ALL
This will add the user hduser and the group hadoop to your local machine.
Exit from the root user and log in as hduser to proceed further. Type the following to switch from root to hduser:
[root@localhost ~]# su hduser
STEP 8: Configuring SSH
Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your
local machine if you want to use Hadoop on it (which is what we want to do in this
short tutorial). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created in the previous
section.
I assume that you have SSH up and running on your machine and have configured it to
allow SSH public key authentication.
Install the SSH server on your computer:
[hduser@localhost ~]$sudo yum install openssh-server
NOTE:For the above step, INTERNET connection should be enabled
[hduser@localhost ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@localhost
The key's randomart image is:
+--[ RSA 2048]----+
| |
| |
| . |
| * |
| = S oo |
| . ooOoo. |
| .*E . |
| ..o= |
| ++. |
+-----------------+
[hduser@localhost ~]$
The final step is to test the SSH setup by connecting to your local machine with the
hduser user. The step is also needed to save your local machine's host key fingerprint to the hduser user's known_hosts file. If you have any special SSH configuration for your local machine, like a non-standard SSH port, you can define
host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more information).
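For example, a host-specific entry in $HOME/.ssh/config might look like the sketch below. The port 2222 is a made-up value to illustrate a non-standard port; with the default port 22, no entry is needed at all:

```
# $HOME/.ssh/config -- hypothetical entry for a non-standard SSH port
Host localhost
    Port 2222
    User hduser
```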
# Now copy the public key to the authorized_keys file, so that ssh does not require a password every time
[hduser@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Change permissions of the authorized_keys file so that only hduser can access it
[hduser@localhost ~]$ chmod 700 ~/.ssh/authorized_keys
If ssh is not running, then start it by giving the command below:
[hduser@localhost ~]$ sudo service sshd start
Run the command below to keep sshd running even after a system reboot:
[hduser@localhost ~]$ sudo chkconfig sshd on
Stop the firewall, if enabled, with the following command:
[hduser@localhost ~]$ sudo service iptables stop
Run the command below to keep iptables disabled even after a system reboot:
[hduser@localhost ~]$ sudo chkconfig iptables off
Test the ssh connectivity by doing the following
[hduser@localhost ~]$ssh localhost
We should be able to ssh to localhost without a password prompt. If it asks for a password while connecting to localhost, something went wrong and you need to fix it before proceeding further.
Without password-less SSH working properly, the Hadoop cluster will not work, so there is no point in going further if password-less SSH is not working. If you face any issues with SSH, consult Google University, where copious help is available.
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux
Ubuntu 10.04 LTS
[...snipp...]
The next sections describe how to set up and run Hadoop.
STEP 9: Download Hadoop
For this tutorial, I am using Hadoop version 1.0.4, but it should work with any other
version.
Go to the URL http://archive.apache.org/dist/hadoop/core/ and click on hadoop-1.0.4/, then select hadoop-1.0.4.tar.gz. The file will be saved to /home/hduser/Downloads if you choose the defaults. Now perform the following steps to install Hadoop on your CentOS.
Copy the downloaded file from the Downloads folder to the /usr/local folder:
$ sudo cp /home/hduser/Downloads/hadoop-1.0.4.tar.gz /usr/local
$ cd /usr/local
$ sudo tar -xzf hadoop-1.0.4.tar.gz
$ sudo chown -R hduser:hadoop hadoop-1.0.4
$ sudo ln -s hadoop-1.0.4 hadoop
$ sudo chown -R hduser:hadoop hadoop
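The ln -s above is what lets /usr/local/hadoop always point at the current release, so a future upgrade only means re-pointing the link. A toy demonstration of the same trick in a scratch directory:

```shell
# Demonstrate the version-symlink pattern used above, in a temp dir so it
# doesn't touch /usr/local.
demo=$(mktemp -d)
mkdir "$demo/hadoop-1.0.4"
ln -s hadoop-1.0.4 "$demo/hadoop"

# The unversioned 'hadoop' name resolves to the versioned directory
readlink "$demo/hadoop"    # prints hadoop-1.0.4
```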
STEP 10: Add Java location to Hadoop so that it can recognize Java
Add the following to /usr/local/hadoop/conf/hadoop-env.sh. Type the command
below to edit the hadoop-env.sh file:
nano /usr/local/hadoop/conf/hadoop-env.sh
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
export JAVA_HOME=/usr/java/default
STEP 11: Update $HOME/.bashrc
Add the following lines to the end of the $HOME/.bashrc file of user hduser. If you
use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc.
Give the command below to edit the .bashrc file, paste in the following contents, and save the file.
nano ~/.bashrc
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/java/default

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin
You need to close the terminal and open a new terminal for the bash changes
to take effect.
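Alternatively, you can source the file in the current terminal instead of opening a new one. The sketch below demonstrates the effect on a throwaway file rather than your real .bashrc:

```shell
# Demonstration that sourcing makes new exports visible immediately,
# using a scratch file instead of the real ~/.bashrc.
cat > /tmp/demo_rc <<'EOF'
export HADOOP_HOME=/usr/local/hadoop
EOF

. /tmp/demo_rc     # same effect as 'source ~/.bashrc'
echo "$HADOOP_HOME"    # prints /usr/local/hadoop
```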
STEP 12: Create a temporary directory which will be used as base location for DFS.
Now we create the directory and set the required ownerships and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown -R hduser:hadoop /app/hadoop/tmp
$ sudo chmod -R 750 /app
If you forget to set the required ownerships and permissions, you will see a java.io.IOException when you try to format the name node in a later step.
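A quick way to double-check the mode before formatting is stat; the sketch below shows it on a temp directory (on the real system you would run stat -c '%U:%G %a' /app/hadoop/tmp and expect hduser:hadoop 750):

```shell
# Sanity-check sketch: confirm a directory carries mode 750.
# Shown on a temp dir; point stat at /app/hadoop/tmp on the real system.
d=$(mktemp -d)
chmod 750 "$d"
stat -c '%a' "$d"    # prints 750
```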
STEP 13: Updating the core-site.xml file
Add the following snippet between the <configuration> ... </configuration> tags in
/usr/local/hadoop/conf/core-site.xml:
nano /usr/local/hadoop/conf/core-site.xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
STEP 14: Updating the mapred-site.xml file
Add the following to /usr/local/hadoop/conf/mapred-site.xml between the <configuration> ... </configuration> tags:
nano /usr/local/hadoop/conf/mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.</description>
</property>
STEP 15: Updating the hdfs-site.xml file
Add the following to /usr/local/hadoop/conf/hdfs-site.xml between the
<configuration> ... </configuration> tags:
nano /usr/local/hadoop/conf/hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication. The actual number of
  replications can be specified when the file is created. The default
  is used if replication is not specified at create time.</description>
</property>
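To confirm the value actually landed in the file, a simple grep works. The sketch below runs against a scratch copy (point it at /usr/local/hadoop/conf/hdfs-site.xml on the real system):

```shell
# Hedged sketch: grep the dfs.replication value out of an hdfs-site.xml.
# A scratch copy is used here; substitute the real conf path on your machine.
cat > /tmp/hdfs-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# -A1 grabs the line after the <name>, which holds the <value>
grep -A1 '<name>dfs.replication</name>' /tmp/hdfs-site-demo.xml | grep -o '<value>.*</value>'
# prints <value>1</value>
```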
STEP 16: Format the Name Node
Format the HDFS cluster with the command below:
[hduser@localhost ~]$ hadoop namenode -format
If the format is not working, double-check your entries in the .bashrc file. The .bashrc
updates come into force only if you have opened a new terminal.
STEP 17: Starting single-node cluster
Congratulations, your Hadoop single node cluster is ready to use. Test your cluster by
running the following commands.
[hduser@localhost ~]$ start-all.sh
If you get any SSH-related error like connection refused, ensure that your sshd
service is running by checking with the command below, and start it if needed.
[hduser@localhost ~]$ sudo service sshd status
A command-line utility to check whether all the Hadoop daemons are running or not is jps.
Give jps at the command prompt and you should see something like below:
[hduser@localhost ~]$ jps
9168 Jps
9127 TaskTracker
8824 DataNode
8714 NameNode
8935 SecondaryNameNode
9017 JobTracker
Check if Hadoop is accessible through the browser by hitting the URLs below:
For MapReduce - http://localhost:50030
For HDFS - http://localhost:50070