In this blog, I will explain how to install and configure Hadoop 2.7.2 on a 4-node Ubuntu 14.04 cluster. If you want to learn more about what Hadoop is and how it works, check out the following:
- Apache Hadoop. http://hadoop.apache.org/
- J. Dean, S. Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters,” In OSDI, 2004.
- K. Shvachko, H. Kuang, S. Radia, R. Chansler, “The Hadoop Distributed File System,” In Proc. IEEE MSST, 2010.
- T. White, Hadoop: The Definitive Guide. O'Reilly Media, Yahoo! Press, June 5, 2009.
Prerequisites
- JDK : JDK 1.7 or later. If you have already installed the JDK, add the following lines to ~/.bashrc on each of the nodes; otherwise, you can check https://goo.gl/TVjzpY
$ gedit ~/.bashrc
export JAVA_HOME=/path/to/jdk_installation_dir/
export PATH=$JAVA_HOME/bin:$PATH
- SSH : configure password-less SSH login for each node from every other node (a minimal sketch follows).
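Below is a minimal sketch of setting up password-less SSH, assuming the same (hypothetical) user hduser exists on every node; repeat the ssh-copy-id steps from each node to every other node.
$ ssh-keygen -t rsa -P "" //generate a key pair; accept the default file location
$ ssh-copy-id hduser@master //copy the public key to each node, including this one
$ ssh-copy-id hduser@slave01
$ ssh-copy-id hduser@slave02
$ ssh-copy-id hduser@slave03
$ ssh slave01 //verify that login works without a password prompt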
Let the hostnames of the 4 nodes be master, slave01, slave02 and slave03. First, we will configure Hadoop on the master node.
step1) Download hadoop-2.7.2.tar.gz from http://hadoop.apache.org/ and extract it under path/to/hadoop, so that the distribution lives at path/to/hadoop/hadoop-2.7.2.
step2) Edit ~/.bashrc and set the following global variables:
export HADOOP_INSTALL=path/to/hadoop/hadoop-2.7.2
export HADOOP_PREFIX=$HADOOP_INSTALL
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export HADOOP_YARN_HOME=$YARN_HOME
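After saving ~/.bashrc, reload it and check that the Hadoop binaries are picked up; a quick sanity check:
$ source ~/.bashrc
$ hadoop version //should report Hadoop 2.7.2
$ which hdfs //should point to path/to/hadoop/hadoop-2.7.2/bin/hdfs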
step3) Create the following directories under the hadoop folder:
cd path/to/hadoop
mkdir -p mydata/hdfs/namenode
mkdir -p mydata/hdfs/datanode
step4) Edit the following files under path/to/hadoop/hadoop-2.7.2/etc/hadoop
(i) hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_65 (set this to your JDK installation path)
(ii) core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<description>NameNode URI</description>
</property>
</configuration>
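A quick way to confirm that Hadoop reads this value from $HADOOP_CONF_DIR:
$ hdfs getconf -confKey fs.defaultFS //should print hdfs://master:9000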
(iii) mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
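Note that the Hadoop 2.7.2 distribution ships only a template for this file, so create mapred-site.xml from the template first if it does not already exist:
$ cp $HADOOP_CONF_DIR/mapred-site.xml.template $HADOOP_CONF_DIR/mapred-site.xml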
(iv) hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/path/to/hadoop/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/path/to/hadoop/mydata/hdfs/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
</configuration>
(v) yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
<description>The hostname of the RM.</description>
</property>
<!-- capacity scheduler specific -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
</configuration>
(vi) Edit the slaves file and add the hostname (or IP address) of each node, one per line. Since master is also listed, it will run a DataNode and NodeManager alongside the master daemons. Make sure these hostnames resolve from every node; see the /etc/hosts sketch below the list.
master
slave01
slave02
slave03
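For the hostnames above to resolve, every node needs either DNS entries or matching lines in /etc/hosts. A minimal sketch, where the IP addresses are placeholders for your own network:
192.168.1.10 master
192.168.1.11 slave01
192.168.1.12 slave02
192.168.1.13 slave03
On Ubuntu, also make sure a node's own hostname is not mapped to 127.0.1.1 in /etc/hosts, otherwise the daemons may bind to the loopback address instead of the cluster network.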
step5) Copy the configured hadoop folder to each of the slave nodes and set the global variables on them as described in step2; one way to do the copy is sketched below.
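A minimal sketch of distributing the folder with scp, assuming the hypothetical user hduser and the same directory layout on every node (copying the whole path/to/hadoop folder also carries over the mydata directories created in step3):
$ scp -r path/to/hadoop hduser@slave01:path/to/
$ scp -r path/to/hadoop hduser@slave02:path/to/
$ scp -r path/to/hadoop hduser@slave03:path/to/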
step6) Execute the following commands from the master node:
$ hdfs namenode -format //this is done only once, to format the HDFS filesystem
$ start-all.sh //start the hadoop cluster (start-all.sh is deprecated in Hadoop 2; you can also run start-dfs.sh followed by start-yarn.sh)
$ jps //check jps from the master node
SecondaryNameNode
ResourceManager
DataNode
NameNode
NodeManager
$ jps //check jps from the slave nodes
DataNode
NodeManager
$ hadoop job -list-active-trackers //list the active nodes of the cluster (hadoop job is deprecated in Hadoop 2; mapred job -list-active-trackers works as well)
$ stop-all.sh //shutdown the hadoop cluster
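Before shutting the cluster down, a few optional checks confirm that all four nodes joined the cluster and that MapReduce jobs actually run; the pi example jar below ships with the Hadoop 2.7.2 distribution:
$ hdfs dfsadmin -report //should list 4 live DataNodes
$ yarn node -list //should list 4 running NodeManagers
$ hadoop jar $HADOOP_INSTALL/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 4 100
You can also browse the NameNode web UI at http://master:50070 and the ResourceManager web UI at http://master:8088.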
Thanks...Mahbub