Distributed Hadoop Cluster Installation and Configuration

1 Preliminary preparation: configure the /etc/hosts file on every machine
# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       rac localhost
::1             rac3 localhost
10.250.7.225    rac1
10.250.7.249    rac2
10.250.7.241    rac3
10.250.7.220    rac4 
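Since every node must be able to resolve every other node's hostname, a quick check helps before going further. The following is a minimal sketch of my own (not part of Hadoop); it greps a sample copy of the file for illustration, but pointing HOSTS_FILE at /etc/hosts on a real node works the same way:

```shell
# Sketch: check that every cluster hostname appears in a hosts file.
# Uses a sample copy here; on a real node, set HOSTS_FILE=/etc/hosts.
HOSTS_FILE=/tmp/hosts.sample
cat > "$HOSTS_FILE" <<'EOF'
10.250.7.225    rac1
10.250.7.249    rac2
10.250.7.241    rac3
10.250.7.220    rac4
EOF
for h in rac1 rac2 rac3 rac4; do
  if grep -qw "$h" "$HOSTS_FILE"; then
    echo "$h ok"
  else
    echo "$h MISSING"
  fi
done
```

Running this on each node catches a node whose hosts file was missed during setup.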
2 Obtain the Hadoop installation files
The hadoop-0.20.2.tar.gz archive can be downloaded from any of the following mirrors:
http://apache.etoak.com//hadoop/common/ 
http://mirror.bjtu.edu.cn/apache//hadoop/common/ 
http://labs.renren.com/apache-mirror//hadoop/common/ 

#tar zxvf hadoop-0.20.2.tar.gz
#mv hadoop-0.20.2 hadoop
#cd hadoop/conf
Configure core-site.xml, hdfs-site.xml, mapred-site.xml, and hadoop-env.sh in the conf directory. See the Hadoop documentation for the exact meaning of each property.
1. First, edit core-site.xml on all nodes:
#vi /root/hadoop/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
    <name>fs.default.name</name>
    <value>hdfs://rac1:9000</value>
 </property>
</configuration>
2. Next, edit hdfs-site.xml on all nodes:
#vi /root/hadoop/conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
 <property>
 <name>dfs.name.dir</name>
 <value>/opt/hadoop/NameData</value>
 </property>
<property>
 <name>dfs.permissions</name>
 <value>false</value>
 </property>
 <property>
 <name>dfs.replication</name>
 <value>1</value>
 </property>
</configuration>
3. Edit mapred-site.xml on all nodes:
#vi /root/hadoop/conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>10.250.7.225:9001</value>
</property>
</configuration>
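Because these XML files are edited by hand on every node, typos slip in easily. The helper below is a sketch of my own (not a Hadoop tool) that pulls a property value out of a site file with sed, assuming the one-tag-per-line layout used above; it is demonstrated against a sample file:

```shell
# Sketch: extract a property value from a Hadoop site XML with sed.
# Demonstrated on a sample file; point it at the real conf/*.xml in use.
cat > /tmp/mapred-site.sample <<'EOF'
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>10.250.7.225:9001</value>
</property>
</configuration>
EOF
get_prop() {  # get_prop <property-name> <site-file>
  sed -n "/<name>$1<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$2"
}
get_prop mapred.job.tracker /tmp/mapred-site.sample
```

Comparing this value across nodes (for example over ssh) quickly surfaces a node whose config diverges from the rest.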

4. Edit hadoop-env.sh on all nodes:
export HADOOP_HOME=/root/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf 
export PATH=$PATH:$HADOOP_HOME/bin 
export JAVA_HOME=/usr/java/jdk1.6.0_29 
export CLASSPATH=/usr/java/jdk1.6.0_29/lib/tools.jar:/usr/java/jdk1.6.0_29/lib/dt.jar
5. On all nodes, edit the masters and slaves files: masters holds the master node's IP and slaves holds the slave nodes' IPs. (Strictly speaking, the masters file controls where the SecondaryNameNode is started; the master itself is simply the node on which the start scripts are run.)
[root@rac1 conf]#  cat masters 
10.250.7.225
[root@rac1 conf]# cat slaves 
10.250.7.220
10.250.7.249
10.250.7.241
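Rather than hand-editing the same files on every node, the conf directory can be pushed out from the master once it is correct. The following dry-run sketch assumes passwordless root SSH between the nodes (which Hadoop's start scripts require anyway); it prints the copy commands instead of running them, and uses a sample slaves file so it is self-contained:

```shell
# Sketch (dry run): print the scp commands that would push conf/ to each
# slave. Drop the echo to actually copy; assumes passwordless root SSH.
SLAVES=/tmp/slaves.sample        # on the master, use /root/hadoop/conf/slaves
cat > "$SLAVES" <<'EOF'
10.250.7.220
10.250.7.249
10.250.7.241
EOF
while read -r host; do
  echo scp -r /root/hadoop/conf "root@${host}:/root/hadoop/"
done < "$SLAVES"
```

Driving the copy from the slaves file itself guarantees the distribution list and the cluster membership never drift apart.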
6. Start Hadoop. Before the very first start, format HDFS once on the master (bin/hadoop namenode -format), then launch all the daemons. If the output reports a daemon "running as process N. Stop it first.", a previous instance is still alive; run stop-all.sh and start again:
[root@rac1 bin]# sh start-all.sh 
starting namenode, logging to /root/hadoop/logs/hadoop-root-namenode-rac1.out
10.250.7.220: starting datanode, logging to /root/hadoop/logs/hadoop-root-datanode-rac4.out
10.250.7.241: starting datanode, logging to /root/hadoop/logs/hadoop-root-datanode-rac3.out
10.250.7.249: starting datanode, logging to /root/hadoop/logs/hadoop-root-datanode-rac2.out
10.250.7.225: starting secondarynamenode, logging to /root/hadoop/logs/hadoop-root-secondarynamenode-rac1.out
jobtracker running as process 20175. Stop it first.
10.250.7.220: starting tasktracker, logging to /root/hadoop/logs/hadoop-root-tasktracker-rac4.out
10.250.7.241: starting tasktracker, logging to /root/hadoop/logs/hadoop-root-tasktracker-rac3.out
10.250.7.249: starting tasktracker, logging to /root/hadoop/logs/hadoop-root-tasktracker-rac2.out
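The start-up output above can be checked mechanically: one "starting datanode" and one "starting tasktracker" line should appear per slave. A small sketch against a saved copy of the output (reproduced inline here in abbreviated form):

```shell
# Sketch: count datanode/tasktracker start lines in saved start-all output.
cat > /tmp/start-all.log <<'EOF'
starting namenode, logging to /root/hadoop/logs/hadoop-root-namenode-rac1.out
10.250.7.220: starting datanode, ...
10.250.7.241: starting datanode, ...
10.250.7.249: starting datanode, ...
10.250.7.225: starting secondarynamenode, ...
10.250.7.220: starting tasktracker, ...
10.250.7.241: starting tasktracker, ...
10.250.7.249: starting tasktracker, ...
EOF
echo "datanodes:    $(grep -c 'starting datanode' /tmp/start-all.log)"
echo "tasktrackers: $(grep -c 'starting tasktracker' /tmp/start-all.log)"
```

On a live cluster, running jps on each node, or bin/hadoop dfsadmin -report on the master, gives a more direct health check of which daemons are actually up.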