Building a Hadoop 3.2 + ZooKeeper 3.5.5 + HBase 2.2 High-Availability Cluster with Docker on Windows (Part 1)

Reference 1: https://blog.csdn.net/ccren/article/details/93485200

Reference 2: https://blog.csdn.net/qq_40235064/article/details/89074917

Reference 3: https://blog.csdn.net/qq_19524879/article/details/83659747

The cluster uses four nodes.

Cluster plan

hostname | namenode | datanode | zookeeper | zkfc | journalnode | resourcemanager | nodemanager
hadoop1  | 1        | 1        | 1         | 1    | 1           |                 |
hadoop2  | 1        | 1        | 1         | 1    | 1           |                 | 1
hadoop3  |          | 1        | 1         |      | 1           | 1               | 1
hadoop4  |          |          | 1         |      |             | 1               | 1

Once the image has been built, start the containers:

docker run -itd --name hadoop1 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5002:22 -p 9870:9870 -p 8088:8088 -p 19888:19888 hadoop-ha-yarn /usr/sbin/sshd -D
docker run -itd --name hadoop2 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5003:22 -p 9871:9870 hadoop-ha-yarn /usr/sbin/sshd -D
docker run -itd --name hadoop3 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5004:22 -p 8087:8088 hadoop-ha-yarn /usr/sbin/sshd -D
docker run -itd --name hadoop4 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5005:22 -p 8086:8088 hadoop-ha-yarn /usr/sbin/sshd -D 
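
A quick sanity check at this point: all four containers should be up, and each one's /etc/hosts should contain the --add-host entries so the hostnames resolve between containers (the ping check assumes a ping binary is present in the image):

docker ps --filter "name=hadoop"        # all four containers should show STATUS "Up"
docker exec hadoop1 cat /etc/hosts      # should list hadoop1..hadoop4 with 172.17.0.2-172.17.0.5
docker exec hadoop1 ping -c 1 hadoop4   # hostname resolution between containers (assumes ping is installed in the image)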

Log in to each node over SSH with a client such as SecureCRT and run the steps below.
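
Since each container's sshd is published on the host, the nodes can also be reached through the mapped ports (a sketch; it assumes the Docker Toolbox VM address 192.168.99.100 used in step 7 below, and that the image allows root login over ssh):

ssh root@192.168.99.100 -p 5002   # hadoop1 (container port 22 published as 5002)
ssh root@192.168.99.100 -p 5003   # hadoop2
ssh root@192.168.99.100 -p 5004   # hadoop3
ssh root@192.168.99.100 -p 5005   # hadoop4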

1. Start ZooKeeper on all four nodes (startup succeeded when one node is the leader and the other three are followers)
# bin/zkServer.sh start
# bin/zkServer.sh status
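Instead of logging in to every node, the four ZooKeeper instances can be started and checked from one node with a small loop, roughly like this (a sketch; it assumes passwordless ssh between the containers, a ZooKeeper install under /opt/modules/zookeeper-3.5.5, which this post does not spell out, and that JAVA_HOME is visible in non-interactive ssh sessions):

for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  ssh "$host" "cd /opt/modules/zookeeper-3.5.5 && bin/zkServer.sh start"
done
for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  ssh "$host" "cd /opt/modules/zookeeper-3.5.5 && bin/zkServer.sh status"   # expect "Mode: leader" on one node and "Mode: follower" on the other three
done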
2. Start the journalnodes (run on hadoop1, hadoop2 and hadoop3)
# cd /opt/modules/ha-hadoop/hadoop-3.1.2/
# bin/hdfs --daemon start journalnode
The journalnode can also be started with:
# sbin/hadoop-daemon.sh start journalnode
# jps
1553 Jps
993 QuorumPeerMain
1514 JournalNode
# jps
993 QuorumPeerMain
1514 JournalNode
1563 Jps
If JournalNode appears in the jps output, the journalnode has started successfully.
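The three journalnodes can likewise be started and verified from a single node (a sketch, under the same ssh and environment assumptions as the ZooKeeper loop above):

for host in hadoop1 hadoop2 hadoop3; do
  ssh "$host" "cd /opt/modules/ha-hadoop/hadoop-3.1.2 && bin/hdfs --daemon start journalnode && jps | grep JournalNode"
done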
3. Format the namenode (format only one of the two namenodes and sync the other from it; formatting both is a mistake!! Do this on hadoop1, since hadoop2 is bootstrapped from it in step 4)
# bin/hdfs namenode -format
If the following line appears about eight lines from the end of the output, the format succeeded:
INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
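When running the format, the success line can also be piped through grep so it is easy to spot, for example:

bin/hdfs namenode -format 2>&1 | grep "successfully formatted"   # prints the INFO common.Storage line above on success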
Start the namenode:
# bin/hdfs --daemon start namenode
# jps
993 QuorumPeerMain
1681 NameNode
1747 Jps
1514 JournalNode
# cat /home/hadoop/tmp/dfs/name/current/VERSION
#Tue Jun 25 14:59:40 UTC 2019
namespaceID=1463663733
clusterID=CID-32938cd0-ed33-40f6-90c5-2326198e31bd
cTime=1561474780005
storageType=NAME_NODE
blockpoolID=BP-789586919-172.17.0.2-1561474780005
layoutVersion=-65
4. Sync the metadata from hadoop1 to hadoop2 (the namenode on hadoop1 must be running first)
Run on the hadoop2 host:
# bin/hdfs namenode -bootstrapStandby
If the output again contains: INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
the sync succeeded.
# cat /home/hadoop/tmp/dfs/name/current/VERSION
#Tue Jun 25 15:07:27 UTC 2019
namespaceID=1463663733
clusterID=CID-32938cd0-ed33-40f6-90c5-2326198e31bd
cTime=1561474780005
storageType=NAME_NODE
blockpoolID=BP-789586919-172.17.0.2-1561474780005
layoutVersion=-65
If hadoop1 and hadoop2 show the same values, the namenode metadata sync succeeded.
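One compact way to confirm this from hadoop1 is to diff the two VERSION files (a sketch; assumes passwordless ssh to hadoop2). Only the timestamp comment on the first line should differ:

diff <(cat /home/hadoop/tmp/dfs/name/current/VERSION) \
     <(ssh hadoop2 cat /home/hadoop/tmp/dfs/name/current/VERSION)
# namespaceID, clusterID, cTime and blockpoolID must be identical on both namenodes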
5. Format ZKFC (run once, on hadoop1)
# bin/hdfs zkfc -formatZK
If the fourth line from the end shows: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
then ZKFC was formatted successfully.
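The znode can also be checked with the ZooKeeper client on any node (run from the ZooKeeper install directory; the path and the default client port 2181 are assumptions):

bin/zkCli.sh -server hadoop1:2181 ls /hadoop-ha   # should print [mycluster]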
6. Start HDFS and YARN (run on hadoop1)
# sbin/start-dfs.sh
# sbin/start-yarn.sh
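After both scripts finish, the HA roles can be queried from the command line. The namenode and resourcemanager IDs below (nn1/nn2, rm1/rm2) are placeholders for whatever is configured in hdfs-site.xml and yarn-site.xml:

bin/hdfs haadmin -getServiceState nn1   # one namenode should report "active"
bin/hdfs haadmin -getServiceState nn2   # the other should report "standby"
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2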
7. Visit http://192.168.99.100:9870 and http://192.168.99.100:8086
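The same pages can be probed from the Windows host on the command line (192.168.99.100 is the Docker Toolbox VM address):

curl -I http://192.168.99.100:9870   # namenode web UI on hadoop1 (9870 -> 9870)
curl -I http://192.168.99.100:8086   # resourcemanager web UI on hadoop4 (8086 -> 8088)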

 
