Big Data Environment Deployment 3: Hadoop Environment Deployment



I. Installing Hadoop

0. Download the installation package

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
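If this mirror no longer carries 2.6.0, the Apache release archive keeps all old versions; the equivalent download would be:

wget http://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz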

1. Extract the archive: tar -xzvf hadoop-2.6.0.tar.gz

2. Move it to the target directory: [spark@LOCALHOST]$ mv hadoop-2.6.0 ~/opt/

3. Enter the hadoop directory:  [spark@LOCALHOST opt]$ cd hadoop-2.6.0/
[spark@LOCALHOST hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp

Before configuring, create the following folders on the local file system: ~/hadoop/tmp, ~/dfs/data, and ~/dfs/name (a sketch follows the list below). Seven configuration files are involved in total, all under the hadoop etc/hadoop folder; they can be edited with the gedit command:

~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
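A minimal sketch of that directory setup, using the concrete paths that core-site.xml and hdfs-site.xml reference later in this guide (the install lives under ~/opt/hadoop-2.6.0 here):

[spark@LOCALHOST ~]$ mkdir -p ~/opt/hadoop-2.6.0/tmp
[spark@LOCALHOST ~]$ mkdir -p ~/opt/hadoop-2.6.0/dfs/name
[spark@LOCALHOST ~]$ mkdir -p ~/opt/hadoop-2.6.0/dfs/data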

4. Enter the hadoop configuration directory:

[spark@LOCALHOST hadoop-2.6.0]$ cd etc/hadoop/
[spark@LOCALHOST hadoop]$ ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-client.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  ssl-server.xml.example
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml             yarn-env.cmd
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      mapred-site.xml.template    yarn-env.sh
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        slaves                      yarn-site.xml

4.1. Configure the hadoop-env.sh file --> set JAVA_HOME

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_79

4.2. Configure the yarn-env.sh file --> set JAVA_HOME

# some Java parameters

export JAVA_HOME=/usr/java/jdk1.7.0_79

4.3. Configure the slaves file --> add the slave nodes

172.16.107.9

172.16.107.8

172.16.107.7
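start-dfs.sh and start-yarn.sh reach each host in slaves over ssh, so passwordless ssh to every listed node should already be in place (presumably set up in an earlier part of this series). A quick check:

[spark@LOCALHOST ~]$ ssh 172.16.107.8 hostname
[spark@LOCALHOST ~]$ ssh 172.16.107.7 hostname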

4.4. Configure the core-site.xml file --> add the Hadoop core configuration (the HDFS port is 9000; the temp directory is file:/home/spark/opt/hadoop-2.6.0/tmp)

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://LOCALHOST:9000</value>
 </property>

 <property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/spark/opt/hadoop-2.6.0/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>
 <property>
  <name>hadoop.proxyuser.spark.hosts</name>
  <value>*</value>
 </property>
 <property>
  <name>hadoop.proxyuser.spark.groups</name>
  <value>*</value>
 </property>
</configuration>
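After saving the file, the effective value can be sanity-checked with hdfs getconf, run from the hadoop-2.6.0 directory:

[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs getconf -confKey fs.defaultFS
hdfs://LOCALHOST:9000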

4.5. Configure the hdfs-site.xml file --> add the HDFS configuration (namenode/datanode ports and directory locations)

<configuration>
 <property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>LOCALHOST:9001</value>
 </property>

  <property>
   <name>dfs.namenode.name.dir</name>
  <value>file:/home/spark/opt/hadoop-2.6.0/dfs/name</value>
 </property>

 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/spark/opt/hadoop-2.6.0/dfs/data</value>
  </property>

 <property>
  <name>dfs.replication</name>
  <value>3</value>
 </property>

 <property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
 </property>

</configuration>

4.6. Configure the mapred-site.xml file --> add the MapReduce configuration (use the YARN framework; set the jobhistory address and its web address)
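A fresh 2.6.0 tarball ships only mapred-site.xml.template; if mapred-site.xml does not exist yet (the listing in step 4 above already shows one), create it from the template first:

[spark@LOCALHOST hadoop]$ cp mapred-site.xml.template mapred-site.xml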

<configuration>
  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
 <property>
  <name>mapreduce.jobhistory.address</name>
  <value>LOCALHOST:10020</value>
 </property>
 <property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>LOCALHOST:19888</value>
 </property>
</configuration>

4.7. Configure the yarn-site.xml file --> enable YARN

<configuration>
  <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
  </property>
  <property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
   <name>yarn.resourcemanager.address</name>
   <value>LOCALHOST:8032</value>
  </property>
  <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>LOCALHOST:8030</value>
  </property>
  <property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>LOCALHOST:8035</value>
  </property>
  <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>LOCALHOST:8033</value>
  </property>
  <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>LOCALHOST:8088</value>
  </property>

</configuration>

5. Copy the configured hadoop directory to the other two slave machines

[spark@LOCALHOST opt]$ scp -r hadoop-2.6.0/ [email protected]:~/opt/

[spark@LOCALHOST opt]$ scp -r hadoop-2.6.0/ [email protected]:~/opt/
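Optionally, putting the bin and sbin directories on PATH on every node avoids the ./bin/ and ./sbin/ prefixes used below. A sketch, assuming the install location above and a bash login shell:

[spark@LOCALHOST ~]$ echo 'export HADOOP_HOME=$HOME/opt/hadoop-2.6.0' >> ~/.bashrc
[spark@LOCALHOST ~]$ echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> ~/.bashrc
[spark@LOCALHOST ~]$ source ~/.bashrc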

 

II. Verifying Hadoop

1. Format the namenode:

[spark@LOCALHOST opt]$ cd hadoop-2.6.0/
[spark@LOCALHOST hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec LICENSE.txt  logs  NOTICE.txt  README.txt  sbin share  tmp
[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs namenode -format

Format the namenode only once: look for a "successfully formatted" line in the output. Reformatting a namenode that the datanodes have already registered with causes a clusterID mismatch on the next start.

2. Start HDFS:

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/start-dfs.sh
[spark@LOCALHOST hadoop-2.6.0]$ jps
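With the configuration above, jps on the master should show roughly the following (this host also runs a DataNode because 172.16.107.9 appears in slaves; the process IDs are illustrative):

4368 NameNode
4471 DataNode
4540 SecondaryNameNode
4615 Jps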
3. Stop HDFS:

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/stop-dfs.sh
[spark@LOCALHOST hadoop-2.6.0]$ jps
4. Start YARN:

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/start-yarn.sh
[spark@LOCALHOST hadoop-2.6.0]$ jps
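After start-yarn.sh, the master additionally shows ResourceManager and NodeManager, while the other slaves run only DataNode and NodeManager. Illustrative jps output on the master:

4368 NameNode
4471 DataNode
4540 SecondaryNameNode
4927 ResourceManager
5041 NodeManager
5112 Jps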
5. Stop YARN:

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/stop-yarn.sh
[spark@LOCALHOST hadoop-2.6.0]$ jps
6. Check the cluster status:

[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs dfsadmin -report
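The report should list the three datanodes from the slaves file as live. As a further check that HDFS accepts reads and writes, a small round trip (the paths are just an example):

[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs dfs -mkdir -p /user/spark
[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs dfs -put etc/hadoop/core-site.xml /user/spark/
[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hdfs dfs -ls /user/spark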
7. View the HDFS web UI: http://172.16.107.9:50070/

8. View the ResourceManager web UI: http://172.16.107.9:8088/
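With HDFS and YARN both up, the examples jar bundled with the release makes a convenient end-to-end smoke test (the jar path below is its default location inside the 2.6.0 tarball; the pi arguments are arbitrary small values):

[spark@LOCALHOST hadoop-2.6.0]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10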

9. Start all Hadoop daemons (start-all.sh is deprecated in Hadoop 2.x in favor of start-dfs.sh plus start-yarn.sh, but still works)

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/start-all.sh 

10. Stop all Hadoop daemons

[spark@LOCALHOST hadoop-2.6.0]$ ./sbin/stop-all.sh 

 

Reference:

http://blog.csdn.net/stark_summer/article/details/42424279

