Installing and Deploying a Three-Node Hadoop Cluster

 

1.2.1 Environment Preparation

The environment consists of three servers: one master (name) node and two worker (data) nodes. The server list is as follows:

Table 1: Host environment

IP            Hostname
10.0.0.201    m1.hadoop
10.0.0.209    s1.hadoop
10.0.0.211    s2.hadoop

The configuration of each host is listed below:

Host: m1.hadoop

[hadoop@m1 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.201
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=10:50:56:AF:00:CF

[hadoop@m1 .ssh]$ cat /etc/hosts
10.0.0.201   m1.hadoop
10.0.0.209   s1.hadoop
10.0.0.211   s2.hadoop
127.0.0.1    localhost.localdomain   localhost

[hadoop@m1 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m1.hadoop
FORWARD_IPV4=yes

 

Host: s1.hadoop

[hadoop@s1 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=10:50:56:AF:00:D4
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.209
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

[hadoop@s1 .ssh]$ cat /etc/hosts
10.0.0.209   s1.hadoop
10.0.0.201   m1.hadoop
10.0.0.211   s2.hadoop
127.0.0.1    localhost.localdomain   localhost

[hadoop@s1 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=s1.hadoop

Host: s2.hadoop

[hadoop@s2 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=01:50:56:AF:00:D7
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.211
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

[hadoop@s2 .ssh]$ cat /etc/hosts
10.0.0.211   s2.hadoop
10.0.0.201   m1.hadoop
10.0.0.209   s1.hadoop
127.0.0.1    localhost.localdomain   localhost

[hadoop@s2 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=s2.hadoop
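
With networking in place on all three hosts, name resolution and reachability can be sanity-checked from any node. A minimal check (assuming the hosts above):

[hadoop@m1 ~]$ for h in m1.hadoop s1.hadoop s2.hadoop; do ping -c 1 $h > /dev/null && echo "$h OK"; done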

 

1.2.2 Installing Java on All Hosts

Copy the downloaded JDK package to the /home directory of each host, then install it on each:

[root@s1 home]# chmod u+x jdk-6u25-linux-x64-rpm.bin
[root@s1 home]# ./jdk-6u25-linux-x64-rpm.bin
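
The self-extracting RPM for JDK 6u25 normally installs under /usr/java; assuming the default path /usr/java/jdk1.6.0_25 (confirm the actual directory after installation), JAVA_HOME can then be set system-wide:

[root@s1 home]# vi /etc/profile
# Assumed default install path for jdk-6u25; verify with: ls /usr/java
export JAVA_HOME=/usr/java/jdk1.6.0_25
export PATH=$JAVA_HOME/bin:$PATH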

 

1.2.3 SSH Configuration

Create a hadoop account on every machine and generate a public/private key pair for the hadoop user on each. Append all three public keys to a single authorized_keys file, then distribute that file to ~/.ssh/ on every host.

The detailed steps are as follows:

s1.hadoop host:

[root@s1 .ssh]# useradd hadoop   # create the account
[root@s1 .ssh]# passwd hadoop    # set its password
[root@s1 .ssh]# su hadoop
[hadoop@s1 .ssh]$ ssh-keygen
[hadoop@s1 .ssh]$ chmod 700 ~/.ssh/
[hadoop@s1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@s1 .ssh]$ chmod 600 authorized_keys
[hadoop@s1 .ssh]$ scp authorized_keys [email protected]:/home/hadoop/.ssh/

s2.hadoop host:

[root@s2 .ssh]# useradd hadoop   # create the account
[root@s2 .ssh]# passwd hadoop    # set its password
[root@s2 .ssh]# su hadoop
[hadoop@s2 .ssh]$ ssh-keygen
[hadoop@s2 .ssh]$ chmod 700 ~/.ssh/
[hadoop@s2 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@s2 .ssh]$ scp authorized_keys [email protected]:/home/hadoop/.ssh/

m1.hadoop host:

[root@m1 .ssh]# useradd hadoop   # create the account
[root@m1 .ssh]# passwd hadoop    # set its password
[root@m1 .ssh]# su hadoop
[hadoop@m1 .ssh]$ ssh-keygen
[hadoop@m1 .ssh]$ chmod 700 ~/.ssh/
[hadoop@m1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@m1 .ssh]$ scp authorized_keys [email protected]:/home/hadoop/.ssh/
[hadoop@m1 .ssh]$ scp authorized_keys [email protected]:/home/hadoop/.ssh/
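
Once the final authorized_keys file is on every host, logins between nodes should no longer prompt for a password. A quick check from m1.hadoop:

[hadoop@m1 ~]$ ssh s1.hadoop hostname   # should print s1.hadoop without a password prompt
[hadoop@m1 ~]$ ssh s2.hadoop hostname   # should print s2.hadoop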

 

1.2.4 Installing Hadoop on All Hosts

The Hadoop installation and configuration procedure is described in Section 1.1.4. First set up Hadoop on the m1.hadoop host: install Hadoop, configure access permissions, and configure the environment:

Detailed steps (on m1.hadoop):

[root@m1 home]# tar xzvf hadoop-0.20.2.tar.gz
[root@m1 home]# mv hadoop-0.20.2 /usr/local
[root@m1 home]# cd /usr/local
[root@m1 local]# ls
bin  etc  games  hadoop-0.20.2  include  lib  lib64  libexec  sbin  share  src
[root@m1 local]# mv hadoop-0.20.2/ hadoop
[root@m1 local]# mkdir hadoop/Data
[root@m1 local]# mkdir hadoop/Name
[root@m1 local]# mkdir hadoop/Tmp
[root@m1 local]# chmod 777 /var/local
[root@m1 local]# ls
bin  etc  games  hadoop  include  lib  lib64  libexec  sbin  share  src
[root@m1 local]# chown -R hadoop:hadoop /usr/local/hadoop/   # fix ownership
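
Hadoop 0.20 also needs JAVA_HOME set in conf/hadoop-env.sh (part of the Section 1.1.4 procedure; shown here for completeness, assuming the JDK path from Section 1.2.2):

[root@m1 conf]# vi hadoop-env.sh
# Assumed JDK install path; adjust to the actual directory
export JAVA_HOME=/usr/java/jdk1.6.0_25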

[root@m1 conf]# vi core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://m1.hadoop:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/Tmp</value>
    </property>
</configuration>

[root@m1 conf]# vi hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/Name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/Data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

[root@m1 conf]# vi masters
m1.hadoop
[root@m1 conf]# vi slaves
m1.hadoop
s1.hadoop
s2.hadoop

[root@m1 conf]# vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>m1.hadoop:9001</value>
    </property>
</configuration>

[root@m1 local]# scp -r /usr/local/hadoop s1.hadoop:/usr/local/
[root@m1 local]# scp -r /usr/local/hadoop s2.hadoop:/usr/local/

 

(s1.hadoop):
[root@s1 local]# chmod 777 /var/local

(s2.hadoop):
[root@s2 local]# chmod 777 /var/local
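
Because the tree was copied over as root, ownership on the slaves most likely needs the same correction as on m1 (an assumed step, mirroring the chown above):

[root@s1 local]# chown -R hadoop:hadoop /usr/local/hadoop/
[root@s2 local]# chown -R hadoop:hadoop /usr/local/hadoop/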

 

1.2.5 Hadoop Testing
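
Before checking the daemons, the HDFS namespace is formatted and the cluster started from m1.hadoop (the standard Hadoop 0.20 commands, run as the hadoop user from /usr/local/hadoop):

[hadoop@m1 hadoop]$ bin/hadoop namenode -format
[hadoop@m1 hadoop]$ bin/start-all.sh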

[root@m1 conf]# jps
10209 Jps
9057 NameNode
9542 SecondaryNameNode
7217 JobTracker
10087 TaskTracker
9450 DataNode
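
On the slaves, jps should list DataNode and TaskTracker. A simple HDFS smoke test from m1 (any small file will do; the target path here is illustrative):

[hadoop@m1 hadoop]$ bin/hadoop dfs -put /etc/hosts /test-hosts
[hadoop@m1 hadoop]$ bin/hadoop dfs -ls /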

 

