1. What is Hadoop?
- Hadoop is a distributed system infrastructure;
- It consists of three main parts: HDFS (distributed file system), YARN (resource scheduling system), and MapReduce (distributed computing framework).
2. What can Hadoop do?
- It lets users develop distributed programs without needing to understand the low-level details of distribution;
- It harnesses the power of a cluster for high-speed computation and storage of large-scale data.
3. Hadoop HA (ZooKeeper and SSH already configured)
3.1 Machine plan
| Host   | NameNode | DataNode | ZooKeeper | JournalNode |
|--------|----------|----------|-----------|-------------|
| master | ✓        | ✓        | ✓         | ✓           |
| slave1 | ✓        | ✓        | ✓         | ✓           |
| slave2 |          | ✓        | ✓         | ✓           |
3.2 Configuration
1. core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/app/hadoop-2.6.0-cdh5.7.0/tmp</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>master:2181,slave1:2181,slave2:2181</value>
</property>
</configuration>
2. hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/app/hadoop-2.6.0-cdh5.7.0/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/app/hadoop-2.6.0-cdh5.7.0/data</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>master:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>slave1:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>master:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>slave1:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/app/hadoop-2.6.0-cdh5.7.0/journalnode</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
3. mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
4. yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>slave1</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>master:2181,slave1:2181,slave2:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
3.3 Startup
1. Start the ZooKeeper ensemble (on every machine)
zkServer.sh start /opt/app/zookeeper-3.4.5-cdh5.7.0/conf/zoo_cluster.cfg
2. Start the JournalNodes (on every machine)
hadoop-daemon.sh start journalnode
3. Format HDFS
hdfs namenode -format
# Error:
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
# Cause:
Following an online blog post, I had configured the NameNode group in hdfs-site.xml as:
<property>
<name>dfs.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
# Formatting kept failing with the error above. After consulting the documentation, I found that in Hadoop 2.6.0 the NameNode group must be configured with this property instead:
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
4. Reformatted; this time it succeeded
log: 20/02/23 12:24:17 INFO common.Storage: Storage directory /opt/app/hadoop-2.6.0-cdh5.7.0/name has been successfully formatted.
5. Copy the NameNode metadata from master to slave1
scp -r /opt/app/hadoop-2.6.0-cdh5.7.0/name/* slave1:/opt/app/hadoop-2.6.0-cdh5.7.0/name/
6. Format ZKFC on the master node. Do not skip this step; otherwise both NameNodes will stay in standby mode later.
hdfs zkfc -formatZK
log: 20/02/23 13:13:29 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
Then start DFS: start-dfs.sh
Run a Hadoop command: hadoop fs -ls /
# Error
Operation category READ is not supported in state standby. Visit https://s.a
# Cause: the NameNode is in standby mode
# Fix
# Stop DFS: stop-dfs.sh
# Reformat ZK: hdfs zkfc -formatZK
7. Start DFS: start-dfs.sh
DFS came up, but the DataNodes did not start.
Checking the DataNode logs:
the clusterID in the NameNode's metadata does not match the DataNode's clusterID
Fix: set the clusterID in data/current/VERSION to the value in name/current/VERSION
8. Start DFS again. OK!
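The clusterID fix from step 7 comes down to copying one value between two VERSION files. The sketch below reproduces the mismatch in a scratch directory for illustration; on a real cluster the two files live under the directories set in dfs.namenode.name.dir and dfs.datanode.data.dir, and the DataNode must be restarted after the edit.

```shell
# Simulate the mismatch in a temp directory (illustrative paths only).
WORK=$(mktemp -d)
mkdir -p "$WORK/name/current" "$WORK/data/current"
echo "clusterID=CID-new-1234" > "$WORK/name/current/VERSION"   # NameNode after reformat
echo "clusterID=CID-old-9999" > "$WORK/data/current/VERSION"   # DataNode still on the old ID

# Copy the authoritative clusterID from the NameNode metadata into the
# DataNode's VERSION file.
CID=$(grep '^clusterID=' "$WORK/name/current/VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$WORK/data/current/VERSION"

cat "$WORK/data/current/VERSION"   # clusterID=CID-new-1234
```

Note that reformatting the NameNode while DataNodes keep their old data directories is exactly what triggers this mismatch, which is why production fixes edit the VERSION file instead of reformatting.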
3.4 HA testing
3.4.1 NameNode HA test
1. Current state:
- hdfs haadmin -getServiceState nn1
- nn1: active
- hdfs haadmin -getServiceState nn2
- nn2: standby
2. Simulate the active NameNode going down
- kill the NameNode process on master
3. Check the NameNode on slave1
It is still in standby mode, so the HDFS cluster is unavailable.
In other words, automatic failover did not happen. After reading some blog posts and checking the ZKFC log, I found the message fuser: command not found.
4. Install the missing package: yum install psmisc
5. Restart DFS and test again: success
6. Commands to start or stop a single NameNode or DataNode. In production you are not allowed to keep reformatting or restarting everything; issues are usually fixed per daemon:
hadoop-daemon.sh start|stop namenode|datanode|journalnode
yarn-daemon.sh start|stop resourcemanager|nodemanager
3.4.2 YARN HA test
1. The ResourceManager on slave1 must be started manually
- yarn-daemon.sh start resourcemanager
2. Check the YARN state
- yarn rmadmin -getServiceState rm1
- rm1: active
- yarn rmadmin -getServiceState rm2
- rm2: standby
3. Simulate master going down
- kill rm1
- yarn rmadmin -getServiceState rm2
- rm2: active
3.5 In a Hadoop HA setup, how does a Java client remotely access HDFS?
Method 1: put all the NameNode-related parameters into a Configuration object (recommended).
Method 2: copy the cluster configuration files core-site.xml and hdfs-site.xml into the project's src directory.
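Whichever method is used, the client-side Configuration ultimately needs the HA keys already shown in the core-site.xml and hdfs-site.xml above. A minimal sketch of that set, using this document's nameservice and host names:

```
fs.defaultFS = hdfs://mycluster
dfs.nameservices = mycluster
dfs.ha.namenodes.mycluster = nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1 = master:8020
dfs.namenode.rpc-address.mycluster.nn2 = slave1:8020
dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```

For method 1 these would be passed as conf.set(key, value) calls before obtaining the FileSystem with FileSystem.get(conf); for method 2 the copied XML files supply the same keys.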
Sources:
https://blog.csdn.net/wo198711203217/article/details/80528860
https://blog.csdn.net/twj0823/article/details/84346176