2015-04-22 14:17:29,908 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.100:53310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2015-04-22 14:17:29,908 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-172857601-192.168.1.100-1429683180778 (storage id DS-1882029846-192.168.1.100-50010-1429520454466) service to master/192.168.1.100:53310
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:439)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
2015-04-22 14:17:34,910 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.100:53310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2015-04-22 14:17:34,910 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-172857601-192.168.1.100-1429683180778 (storage id DS-1882029846-192.168.1.100-50010-1429520454466) service to master/192.168.1.100:53310
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:439)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
The cause is probably this: I previously had four machines, each with six disks, and on one machine the disk directory names differed from the other three. When I created the dfs.data.dir directories back then, creation on that machine apparently failed, so I went on to run various format operations, leaving the DataNode storage out of sync with the NameNode.
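One way to confirm this kind of out-of-sync state is to compare the clusterID recorded in the NameNode's and DataNode's VERSION files; after a stray reformat they no longer match. The sketch below is a minimal, self-contained simulation of that check: the directory layout and IDs are fabricated for illustration, while on a real cluster the two files live under your configured dfs.name.dir and dfs.data.dir, each in a current/VERSION file.

```shell
# Hedged sketch: detect a NameNode/DataNode clusterID mismatch.
# Paths and IDs below are fabricated; substitute the current/VERSION files
# under your real dfs.name.dir and dfs.data.dir.
d=$(mktemp -d)
mkdir -p "$d/name/current" "$d/data/current"
echo 'clusterID=CID-aaaa-1111' > "$d/name/current/VERSION"   # NameNode side
echo 'clusterID=CID-bbbb-2222' > "$d/data/current/VERSION"   # DataNode side

nn=$(grep -o 'clusterID=.*' "$d/name/current/VERSION")
dn=$(grep -o 'clusterID=.*' "$d/data/current/VERSION")
if [ "$nn" != "$dn" ]; then
  echo "clusterID mismatch: NameNode [$nn] vs DataNode [$dn]"
fi
```

If the two IDs differ, the DataNode directories were formatted against a different namespace than the NameNode now serves, which matches the wipe-and-reformat recovery below.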
Solution
1. Stop the whole cluster first.
2. Delete the configured directories hadoop.tmp.dir, dfs.name.dir, dfs.journalnode.edits.dir, etc.
3. Delete the dfs.data.dir directories.
4. Re-run the following steps:
## Start the ZooKeeper service on every node
zkServer.sh start
## On one of the NameNode nodes, run the following command to create the namespace in ZooKeeper
hdfs zkfc -formatZK
## Start the JournalNode daemon on each node with the following command
hadoop-daemon.sh start journalnode
## On the primary NameNode node, format the namenode and journalnode directories
hdfs namenode -format -clusterId mycluster
## Start the namenode process on the primary NameNode node
hadoop-daemon.sh start namenode
## The following command formats the standby NameNode's directories and copies the metadata over from the primary (run it on the standby node)
hdfs namenode -bootstrapStandby
## Start the standby node's namenode
hadoop-daemon.sh start namenode
## Start the zkfc service on both NameNode nodes
hadoop-daemon.sh start zkfc
## Start the datanode on all DataNode nodes
hadoop-daemon.sh start datanode
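Once everything is back up, it is worth confirming that the DataNodes actually re-registered with the NameNode, i.e. that the heartbeat NullPointerException is gone. On a live cluster you would feed the output of `hdfs dfsadmin -report` into the parsing below; here a fabricated report snippet stands in so the sketch stays self-contained.

```shell
# Hedged sketch: count registered DataNodes from a dfsadmin-style report.
# The report text is fabricated; on a real cluster use instead:
#   report=$(hdfs dfsadmin -report)
report='Live datanodes (2):
Name: 192.168.1.101:50010 (slave1)
Name: 192.168.1.102:50010 (slave2)'
live=$(printf '%s\n' "$report" | grep -c '^Name:')
echo "registered DataNodes: $live"
```

If the live count is lower than the number of DataNode machines, check the DataNode logs again for the same BPServiceActor error before putting data back on the cluster.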