Big Data Beginner Tutorial (12): Integrating a Hadoop 2.x Cluster with a ZooKeeper 3.x Cluster

An earlier post, "Big Data Beginner Tutorial (1): Installing, Starting, and Testing a Fully Distributed Hadoop 2.x Cluster", walked through installing a fully distributed Hadoop cluster in detail, and the previous post, "Big Data Beginner Tutorial (11): Installing, Configuring, Starting, and Testing a Fully Distributed ZooKeeper 3.4.6 Cluster", did the same for ZooKeeper. This post shows how to integrate the two clusters so that ZooKeeper keeps the Hadoop cluster highly available.

1. Environment Preparation

The environment setup is described in detail in the two posts above, so it is skipped here.

Node plan: node19 (namenode), node18 (namenode), node11 (datanode), node12 (datanode), node13 (datanode). Important: when there are two or more NameNodes, they must be able to SSH into each other without a password.
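The mutual passwordless login can be set up roughly as follows. This is a sketch that just prints the steps; the key type and path are assumptions chosen to match the `dfs.ha.fencing.ssh.private-key-files` value (`/root/.ssh/id_dsa`) used later in hdfs-site.xml.

```shell
# Sketch: print the passwordless-login setup steps between the two NameNodes.
# Key type/path are assumptions matching dfs.ha.fencing.ssh.private-key-files below.
ssh_setup_cmds() {
  from=$1; to=$2
  echo "ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa    # run once on $from"
  echo "ssh-copy-id -i /root/.ssh/id_dsa.pub root@$to   # run on $from"
}
ssh_setup_cmds node19 node18
ssh_setup_cmds node18 node19
```

Run the printed commands on the indicated node, then confirm with `ssh node18` (and vice versa) that no password prompt appears.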

2. Delete Stale Files

Delete the masters file configured on each node of the original Hadoop cluster:

# rm -rf /home/hadoop-2.5.1/etc/hadoop/masters

Delete the files generated by earlier Hadoop cluster startups:

# rm -rf /opt/hadoop

Note: run both deletions above on every Hadoop node.
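If you prefer not to log in to each node by hand, the deletions can be driven from one host. A sketch, assuming root SSH access to all five nodes; it only prints the per-node commands (pipe to `sh` to actually run them):

```shell
# Sketch: emit the cleanup command for every Hadoop node in the plan above.
cleanup_cmds() {
  for node in node19 node18 node11 node12 node13; do
    echo "ssh root@$node 'rm -rf /home/hadoop-2.5.1/etc/hadoop/masters /opt/hadoop'"
  done
}
cleanup_cmds
```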

3. Modify the Configuration Files

Edit hdfs-site.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>dfs.nameservices</name>
  <value>mllcluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mllcluster</name>
  <value>nn19,nn18</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mllcluster.nn19</name>
  <value>node19:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mllcluster.nn18</name>
  <value>node18:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mllcluster.nn19</name>
  <value>node19:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mllcluster.nn18</name>
  <value>node18:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node11:8485;node12:8485;node13:8485/mllcluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mllcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/journalnode/data</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!--
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node18:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>node18:50091</value>
    </property>
-->
</configuration>

Edit core-site.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mllcluster</value>
<!--        <value>hdfs://node19:9000</value>-->
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/hadoop-2.5</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node19:2181,node18:2181,node11:2181</value>
    </property>
</configuration>

As the ha.zookeeper.quorum property above shows, the Hadoop cluster is now wired to the ZooKeeper cluster.

4. Configure the Other Nodes

Copy the configuration files to the other machines so that every node has an identical configuration:

scp -r ./* root@node18:/home/hadoop-2.5.1/etc/hadoop/
scp -r ./* root@node11:/home/hadoop-2.5.1/etc/hadoop/
scp -r ./* root@node12:/home/hadoop-2.5.1/etc/hadoop/
scp -r ./* root@node13:/home/hadoop-2.5.1/etc/hadoop/
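The four scp commands above can equivalently be written as a loop, run from /home/hadoop-2.5.1/etc/hadoop on node19. A sketch that just prints the commands:

```shell
# Sketch: generate the scp command for each remaining node.
scp_cmds() {
  for node in node18 node11 node12 node13; do
    echo "scp -r ./* root@$node:/home/hadoop-2.5.1/etc/hadoop/"
  done
}
scp_cmds
```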

5. Start the JournalNode Daemons

Step 3 configured the JournalNodes on node11, node12, and node13 (the dfs.namenode.shared.edits.dir property). Start the JournalNode daemon on each of those nodes with:

# hadoop-daemon.sh start journalnode

Notes:

The JournalNodes form a small quorum cluster of their own that stores the shared edits log: the active NameNode writes edits to it, and the standby NameNode reads them back and periodically merges them into the fsimage, which is why the SecondaryNameNode configuration is commented out in hdfs-site.xml above.

The stop command is: hadoop-daemon.sh stop journalnode

To check whether startup succeeded, inspect the log, for example on node13:

# more /home/hadoop-2.5.1/logs/hadoop-root-journalnode-node13.log

If no errors appear in the log, the JournalNode started successfully.
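Besides reading the log, the jps output on each JournalNode host can be checked. A sketch of a hypothetical helper that reports whether a JournalNode process appears in given jps-style output (run against a sample here; on a live node, pass it `"$(jps)"`):

```shell
# Sketch: grep jps-style output ("<pid> <ProcessName>") for a JournalNode process.
has_journalnode() {
  echo "$1" | grep -q 'JournalNode$'
}
sample="2337 JournalNode
2247 DataNode"
if has_journalnode "$sample"; then echo "journalnode up"; else echo "journalnode down"; fi
```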

6. Format the Hadoop Cluster

Run the format on one of the NameNodes; here it is done on node18:

# hdfs namenode -format

Then sync the files generated by formatting to the other NameNode. The command below pulls from node18, so run it on node19:

# scp -r root@node18:/opt/hadoop /opt/

7. Initialize the HA State

On one of the NameNodes, initialize the high-availability (HA) state in ZooKeeper:

# hdfs zkfc -formatZK

8. Start the Hadoop Cluster

# start-dfs.sh

The startup output looks like this:

Starting namenodes on [node18 node19]
node18: starting namenode, logging to /home/hadoop-2.5.1/logs/hadoop-root-namenode-node18.out
node19: starting namenode, logging to /home/hadoop-2.5.1/logs/hadoop-root-namenode-node19.out
node12: starting datanode, logging to /home/hadoop-2.5.1/logs/hadoop-root-datanode-node12.out
node13: starting datanode, logging to /home/hadoop-2.5.1/logs/hadoop-root-datanode-node13.out
node11: starting datanode, logging to /home/hadoop-2.5.1/logs/hadoop-root-datanode-node11.out
Starting journal nodes [node11 node12 node13]
node13: journalnode running as process 1915. Stop it first.
node11: journalnode running as process 2324. Stop it first.
node12: journalnode running as process 2010. Stop it first.
Starting ZK Failover Controllers on NN hosts [node18 node19]
node18: starting zkfc, logging to /home/hadoop-2.5.1/logs/hadoop-root-zkfc-node18.out
node19: starting zkfc, logging to /home/hadoop-2.5.1/logs/hadoop-root-zkfc-node19.out

Because the JournalNodes were already started in step 5, the log reports that they are running and must be stopped first.

9. Verify the Cluster

Besides checking the cluster logs for errors, you can list the cluster processes on each node with jps:

[root@node19 hadoop]# jps
6417 DFSZKFailoverController
6129 NameNode
5298 QuorumPeerMain
6495 Jps

[root@node18 ~]# jps
3521 NameNode
3617 DFSZKFailoverController
1465 QuorumPeerMain
3695 Jps

[root@node11 ~]# jps
2337 JournalNode
2247 DataNode
1485 QuorumPeerMain
2398 Jps

[root@node12 ~]# jps
2210 DataNode
2296 JournalNode
1468 QuorumPeerMain
2351 Jps

[root@node13 ~]# jps
2293 JournalNode
2350 Jps
2207 DataNode
1455 QuorumPeerMain

Another way to verify the cluster is to open the monitoring pages at http://192.168.220.19:50070/ and http://192.168.220.18:50070/ in a browser; each page shows the status of its NameNode, one active and one standby.

10. Test Failover

To verify that the two NameNodes fail over, kill the active NameNode (node19 here); the standby node should then become active.

jps shows the NameNode process on node19:

6129 NameNode

# kill -9 6129
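Rather than reading the pid off by eye, it can be extracted from the jps output. A sketch of a hypothetical helper, run here against the node19 sample shown above (on a live node: `kill -9 "$(nn_pid "$(jps)")"`):

```shell
# Sketch: pull the NameNode pid out of jps output (second column is the process name).
nn_pid() {
  echo "$1" | awk '$2 == "NameNode" {print $1}'
}
nn_pid "6417 DFSZKFailoverController
6129 NameNode
5298 QuorumPeerMain"
```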

Refresh the monitoring page and node18 now shows as active.

11. Miscellaneous Commands

Restart the killed NameNode: hadoop-daemon.sh start namenode

Manually switch the active node: hdfs haadmin -transitionToActive nn19

Note: this command fails while automatic failover is enabled; disable automatic failover first if you need manual switching.
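With automatic failover left on, the safe way to see which NameNode is active is `hdfs haadmin -getServiceState`. A sketch that prints the query for each NameNode id from dfs.ha.namenodes.mllcluster:

```shell
# Sketch: emit the state-query command for both configured NameNode ids.
state_cmds() {
  for nn in nn19 nn18; do
    echo "hdfs haadmin -getServiceState $nn"
  done
}
state_cmds
```

On the cluster, each printed command reports either "active" or "standby" for that NameNode.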

Create an HDFS directory: hdfs dfs -mkdir /test

12. Stop the Hadoop Cluster

# stop-dfs.sh

The shutdown output looks like this:

Stopping namenodes on [node18 node19]
node18: stopping namenode
node19: stopping namenode
node11: stopping datanode
node12: stopping datanode
node13: stopping datanode
Stopping journal nodes [node11 node12 node13]
node11: stopping journalnode
node13: stopping journalnode
node12: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [node18 node19]
node18: stopping zkfc
node19: stopping zkfc

At this point, the integration of the Hadoop cluster with the ZooKeeper cluster is basically complete.
