Pseudo-Distributed (Single-Node) Installation of Hadoop-2.0.0-CDH4 on CentOS 6.4 x64!

No preamble — straight to the point!

1. Uninstall OpenJDK

First run java -version to check whether OpenJDK is installed;

if it is installed, run: rpm -qa | grep java

which prints something like:

java_cup-0.10k-5.el6.x86_64
libvirt-java-0.4.7-1.el6.noarch
tzdata-java-2012c-1.el6.noarch
pki-java-tools-9.0.3-24.el6.noarch
java-1.6.0-openjdk-devel-1.6.0.0-1.45.1.11.1.el6.x86_64
java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64
libvirt-java-devel-0.4.7-1.el6.noarch
java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64


Uninstall them:

rpm -e --nodeps java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
rpm -e --nodeps java-1.6.0-openjdk-devel-1.6.0.0-1.45.1.11.1.el6.x86_64
rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64
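Instead of typing each rpm -e line by hand, the package names can be filtered out of the listing automatically. A minimal sketch, with the listing hard-coded from the output above (on a real system you would pipe `rpm -qa` in instead):

```shell
# Sketch: pick the OpenJDK/GCJ packages out of an `rpm -qa` listing.
# The package list below is copied from the sample output above.
pkgs="java_cup-0.10k-5.el6.x86_64
libvirt-java-0.4.7-1.el6.noarch
java-1.6.0-openjdk-devel-1.6.0.0-1.45.1.11.1.el6.x86_64
java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64
java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64"

# Keep only the JDK packages that should be removed (openjdk and gcj);
# libvirt-java and friends are left alone.
to_remove=$(printf '%s\n' "$pkgs" | grep -E 'openjdk|gcj')
printf '%s\n' "$to_remove"
# Each surviving name would then be fed to: rpm -e --nodeps <package>
```

This only selects the names; the actual `rpm -e --nodeps` calls are left to the commands shown above.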

2. Install the JDK

①  Download the JDK archive and upload it to the virtual machine;

②  mkdir /java, then cp the JDK archive into this directory;

③  tar -zxvf jdk-7u60-linux-x64.tar.gz   — this produces the directory jdk1.7.0_60

④  Configure the environment variables; run: vi /etc/profile

and append at the end:

# set java environment
export JAVA_HOME=/java/jdk1.7.0_60
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

⑤  source /etc/profile

⑥  Run java -version to check; if it prints:

java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)

the configuration is correct.


3. Set up passwordless SSH login to the local machine

1.  Generate an ssh key:

ssh-keygen -t rsa -P ""

(Just press Enter at every prompt.)

The key is created successfully;

2.  Go into the ~/.ssh directory and append id_rsa.pub to the authorized_keys file (there is no authorized_keys file at first):

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
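If `ssh localhost` still asks for a password afterwards, overly permissive file modes are a common cause: sshd ignores authorized_keys unless ~/.ssh and the file itself are private. A minimal sketch, using a throwaway temporary directory as a stand-in for ~/.ssh so the commands are safe to try:

```shell
# Sketch: sshd expects ~/.ssh to be mode 700 and authorized_keys mode 600.
# A temporary directory stands in for ~/.ssh here.
demo="$(mktemp -d)"
touch "$demo/authorized_keys"
chmod 700 "$demo"
chmod 600 "$demo/authorized_keys"
stat -c '%a' "$demo/authorized_keys"   # prints 600
```

On the real setup the same two chmod commands are run against ~/.ssh and ~/.ssh/authorized_keys.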


3.  Log in to localhost to verify that the setup works:

ssh localhost

 

4. Install Hadoop (2.0.0-CDH4.7)

1.  Extract the archive into the home directory:

tar -zxvf /home/HFile/Hadoop-cdh4.7.tar.gz -C /home


2.  Configure the environment variables: edit /etc/profile again and append the following:

#set Hadoop environment
export HADOOP_HOME=/home/Hadoop-cdh4.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

Then run source /etc/profile again to apply the changes.
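A quick way to confirm the profile edit took effect (after sourcing it) is to check that the Hadoop bin directory made it onto PATH. A small sketch, reusing the HADOOP_HOME value from above:

```shell
# Sketch: verify that $HADOOP_HOME/bin appears on PATH after the export.
HADOOP_HOME=/home/Hadoop-cdh4.7
PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

# Wrap PATH in colons so the match cannot hit a partial directory name.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) result="hadoop bin on PATH" ;;
  *)                      result="hadoop bin missing" ;;
esac
echo "$result"   # prints: hadoop bin on PATH
```

On the real machine the same check can be run in a fresh shell to make sure /etc/profile is picked up on login.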


3.  Go into etc/hadoop under Hadoop-cdh4.7:

①    Edit hadoop-env.sh:

export JAVA_HOME=/java/jdk1.7.0_60

②    vi core-site.xml

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop-cdh4.7/tmp</value>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<final>true</final>
</property>

<!--
<property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>
-->
</configuration>
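The fs.default.name value can be sanity-checked without starting Hadoop at all; a small sketch that pulls the `<value>` out of a single-line property snippet with sed (the line below is just the value from the config above, inlined for illustration):

```shell
# Sketch: extract a <value> element from a one-line XML property snippet.
line='<value>hdfs://localhost:9000</value>'
url=$(printf '%s\n' "$line" | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')
echo "$url"   # prints: hdfs://localhost:9000
```

The same one-liner, pointed at core-site.xml, is a quick way to double-check the NameNode URL before formatting HDFS.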

③   vi hdfs-site.xml

<configuration>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/home/hadoop-cdh4.7/dfs/name</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/home/hadoop-cdh4.7/dfs/data</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>

    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
</configuration>


④    Copy mapred-site.xml.template to mapred-site.xml, then edit mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>
        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>
        <property>
          <name>mapreduce.job.tracker</name>
          <value>localhost:9101</value>
          <final>true</final>
        </property>
</configuration>

⑤     vi yarn-site.xml
<configuration>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>localhost</value>
      <description>hostname of RM</description>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce.shuffle</value>
      <description>shuffle service that needs to be set for Map Reduce to run</description>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

Note: the aux-services value is mapreduce.shuffle in CDH4 but mapreduce_shuffle in CDH5;
with the wrong one, the NodeManager will not start.
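That CDH4/CDH5 note can be turned into a small version check when scripting the config; a sketch that picks the aux-service name from the version string (the version here is the one used throughout this guide):

```shell
# Sketch: CDH4 expects "mapreduce.shuffle", CDH5 "mapreduce_shuffle";
# the wrong name keeps the NodeManager from starting.
ver="2.0.0-cdh4.7.0"
case "$ver" in
  *cdh4*) shuffle="mapreduce.shuffle"  ;;
  *)      shuffle="mapreduce_shuffle" ;;
esac
echo "$shuffle"   # prints: mapreduce.shuffle
```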

⑥    Run hadoop namenode -format

⑦    Then run start-all.sh

⑧    Run jps; if the expected daemons (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager) are listed, everything started normally.


⑨    Finally, go into the Hadoop-cdh4.7/share/hadoop/mapreduce directory and run a test job:

hadoop jar hadoop-mapreduce-examples-2.2.0.jar randomwriter out

to check that jobs run successfully.


    Check the cluster status:

hadoop dfsadmin -report

    If the DataNode fails to start, check whether the firewall is still running;

stop it with: service iptables stop


4.  Run a wordcount job as a test:

①    Create file1 and file2, and the input directory:

echo "hello world bye world" > file1
echo "hello hadoop bye hadoop" > file2
hadoop fs -mkdir /input

②    Put the files into input:

hadoop fs -put /home/hadoop-cdh4.7/file* /input

③    Run the wordcount program:

hadoop jar /home/hadoop-cdh4.7/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.0.0-cdh4.7.0.jar wordcount /input /output1
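Since the input is just the two lines written to file1 and file2 above, the output wordcount should produce can be predicted locally with standard tools; a quick sketch:

```shell
# Sketch: simulate what wordcount will report for file1 and file2,
# using the exact lines written with echo above.
printf 'hello world bye world\nhello hadoop bye hadoop\n' \
  | tr ' ' '\n' | sort | uniq -c | sort -k2
# Each word (bye, hadoop, hello, world) appears exactly twice.
```

Comparing this against part-r-00000 below is an easy way to confirm the job ran correctly.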

④    Check the results:

hadoop fs -ls /output1



hadoop fs -cat /output1/part-r-00000


That completes the Hadoop installation!



Other things to note:

The CDH tarball is pre-built and can be used directly on x64 systems. The Hadoop*.tar.gz packages downloaded from the Apache site are built for 32-bit systems and must be recompiled before they can be used on a 64-bit system.





