A summary of problems encountered with hadoop + hive

Troubleshooting approach

  • For ordinary errors: read the error output and google the keywords

  • For unexplained failures (e.g. the namenode or datanode dies for no obvious reason): check the hadoop ($HADOOP_HOME/logs) or hive logs (a quick grep sketch follows below)
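
A minimal way to pull the most recent FATAL/ERROR lines out of the daemon logs; in Hadoop 1.x the files follow the hadoop-<user>-<daemon>-<host>.log naming pattern, so adjust the globs to your setup:

Shell code

grep -hE 'FATAL|ERROR' $HADOOP_HOME/logs/hadoop-*-namenode-*.log | tail -n 20
grep -hE 'FATAL|ERROR' $HADOOP_HOME/logs/hadoop-*-datanode-*.log | tail -n 20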


hadoop errors
1. datanode fails to start
After adding a datanode, it fails to start properly: the process dies inexplicably after a short while. The namenode log shows the following:

Text code

2013-06-21 18:53:39,182 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node x.x.x.x:50010 is attempting to report storage ID DS-1357535176-x.x.x.x-50010-1371808472808. Node y.y.y.y:50010 is expected to serve this storage.

Cause analysis:
    The hadoop installation was copied to the new node with its data and tmp directories included (see my earlier post on installing hadoop), so the datanode was never formatted cleanly.
Fix:

Shell code

rm -rf /data/hadoop/hadoop-1.1.2/data

rm -rf /data/hadoop/hadoop-1.1.2/tmp

# note: "hadoop datanode -format" is not a valid command (-format only
# applies to the namenode); after removing the stale directories, just
# restart the datanode and it registers with a fresh storage ID
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode

2. safe mode

Text代碼

2013-06-20 10:35:43,758 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot renew lease for DFSClient_hb_rs_wdev1.corp.qihoo.net,60020,1371631589073. Name node is in safe mode.

Fix:

Shell code

hadoop dfsadmin -safemode leave
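
Note that the namenode normally leaves safe mode on its own once enough blocks have been reported; forcing it out is only needed when it is stuck. To check the current state before forcing anything:

Shell code

hadoop dfsadmin -safemode get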

3. Connection exception

Text code

2013-06-21 19:55:05,801 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to homename/x.x.x.x:9000 failed on local exception: java.io.EOFException

Possible causes:

  • the namenode is listening on 127.0.0.1:9000 rather than 0.0.0.0:9000 or the external IP:9000

  • iptables is blocking the port


Solutions (a quick verification sketch follows the list):

  • check the /etc/hosts configuration so that the hostname resolves to an IP other than 127.0.0.1

  • open the port in iptables
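
A quick way to verify both causes (port 9000 assumes the default fs.default.name used above; the iptables rule is only an illustrative example):

Shell code

# 127.0.0.1:9000 here means only local clients can reach the namenode
netstat -tlnp | grep 9000

# illustrative rule to open the port; adapt to your firewall setup
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT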



4. Incompatible namespaceIDs

Text代碼

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode namespaceID = 1462711424 . 

Problem: the namespaceID on the namenode does not match the namespaceID on the datanode.

  Cause: each namenode format generates a new namespaceID, while tmp/dfs/data on the datanodes still holds the ID from the previous format. Formatting the namenode wipes the namenode's data but does not clear the datanodes' data, so the namespaceID on the namenode no longer matches the one on the datanodes, and startup fails.

  Fix: the page at http://blog.csdn.net/wh62592855/archive/2010/07/21/5752199.aspx gives two solutions; we used the first one:

  (1) Stop the cluster services.

  (2) On the problematic datanode, delete the data directory, i.e. the dfs.data.dir directory configured in hdfs-site.xml; on this machine that is /var/lib/hadoop-0.20/cache/hdfs/dfs/data/. (Note: we ran this step on all datanode and namenode machines. In case the deletion does not work out, keep a backup copy of the data directory first.)

  (3) Format the namenode.

  (4) Restart the cluster.

  That resolved the problem.
       The side effect of this method is that all data on hdfs is lost. If hdfs holds important data, this method is not recommended; try the second method from the page linked above instead.
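
The same steps as a shell sketch (paths as above; note again that formatting the namenode destroys all HDFS data):

Shell code

stop-all.sh
# keep a backup in case the deletion turns out to be a mistake
cp -a /var/lib/hadoop-0.20/cache/hdfs/dfs/data /var/lib/hadoop-0.20/cache/hdfs/dfs/data.bak
rm -rf /var/lib/hadoop-0.20/cache/hdfs/dfs/data
hadoop namenode -format
start-all.sh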

5. Directory permissions
start-dfs.sh runs without errors and reports that the datanode was started, but after it finishes there is no datanode process. The log on the datanode machine shows the cause: incorrect permissions on the dfs.data.dir directory:

Text code

expected: drwxr-xr-x, current: drwxrwxr-x

Fix:
    Find the directory configured as dfs.data.dir and correct its permissions.
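
For example, assuming dfs.data.dir points at /data/hadoop/hadoop-1.1.2/data (the path used earlier in this post):

Shell code

chmod 755 /data/hadoop/hadoop-1.1.2/data    # drwxr-xr-x, as the datanode expects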

hive errors
1. NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.HbaseObjectWritable
Add protobuf-***.jar to the aux jars path:

Xml code

<!-- $HIVE_HOME/conf/hive-site.xml -->

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///data/hadoop/hive-0.10.0/lib/hive-hbase-handler-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/hbase-0.94.8.jar,file:///data/hadoop/hive-0.10.0/lib/zookeeper-3.4.5.jar,file:///data/hadoop/hive-0.10.0/lib/guava-r09.jar,file:///data/hadoop/hive-0.10.0/lib/hive-contrib-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/protobuf-java-2.4.0a.jar</value>
</property>

2. Hive dynamic partition exception
[Fatal Error] Operator FS_2 (id=2): Number of dynamic partitions exceeded hive.exec.max.dynamic.partitions.pernode

Shell code

hive> set hive.exec.max.dynamic.partitions.pernode=10000;
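
Related settings that usually need raising at the same time (the values here are illustrative, not recommendations):

Shell code

hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.exec.max.dynamic.partitions=100000;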

3. MapReduce process exceeds the memory limit — hadoop Java heap space
Edit mapred-site.xml and add:

Xml code

<!-- mapred-site.xml -->

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>

Shell code

#$HADOOP_HOME/conf/hadoop-env.sh

export HADOOP_HEAPSIZE=5000
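
Note that the two settings are not interchangeable: mapred.child.java.opts sets the heap of each map/reduce child JVM, while HADOOP_HEAPSIZE (in MB) sets the heap of the Hadoop daemons themselves.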

4. Hive limit on the number of created files
[Fatal Error] total number of created files now is 100086, which exceeds 100000

Shell code

hive> set hive.exec.max.created.files=655350;

5. Metastore connection timeout

Text code

FAILED: SemanticException org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out

Fix:

Shell code

hive> set hive.metastore.client.socket.timeout=500;
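
In Hive 0.x this value is interpreted as seconds, so the above raises the client read timeout to 500 s.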

6. java.io.IOException: error=7, Argument list too long

Text code


Task with the most failures(5): 

-----

Task ID:

  task_201306241630_0189_r_000009


URL:

  http://namenode.godlovesdog.com:50030/taskdetails.jsp?jobid=job_201306241630_0189&tipid=task_201306241630_0189_r_000009

-----

Diagnostic Messages for this Task:

java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"djh,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"xxx,S1"},"alias":0}

at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)

at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:520)

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)

at org.apache.hadoop.mapred.Child$4.run(Child.java:255)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)

at org.apache.hadoop.mapred.Child.main(Child.java:249)

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"xxx,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"djh,S1"},"alias":0}

at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)

... 7 more

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20000]: Unable to initialize custom script.

at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:354)

at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)

at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)

at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)

at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)

at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)

at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)

at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)

at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)

... 7 more

Caused by: java.io.IOException: Cannot run program "/usr/bin/python2.7": error=7, Argument list too long

at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)

at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:313)

... 15 more

Caused by: java.io.IOException: error=7, Argument list too long

at java.lang.UNIXProcess.forkAndExec(Native Method)

at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)

at java.lang.ProcessImpl.start(ProcessImpl.java:130)

at java.lang.ProcessBuilder.start(ProcessBuilder.java:1023)

... 16 more



FAILED: Execution Error, return code 20000 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script.

Fix:
Upgrade the kernel or reduce the number of partitions; see https://issues.apache.org/jira/browse/HIVE-2372. The underlying issue is that Hive launches the custom script with job configuration in the child process environment; with many dynamic partitions that environment grows past the kernel's exec() limits, so the fork fails with E2BIG (error=7).
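
A quick way to see the kernel's limit on the combined size of arguments and environment for exec() (what error=7/E2BIG refers to):

Shell code

getconf ARG_MAX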
7. runtime error

Shell code

hive> show tables;

FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Troubleshooting:

Shell code

hive -hiveconf hive.root.logger=DEBUG,console

Text code

13/07/15 16:29:24 INFO hive.metastore: Trying to connect to metastore with URI thrift://xxx.xxx.xxx.xxx:9083

13/07/15 16:29:24 WARN hive.metastore: Failed to connect to the MetaStore Server...

org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused

...

MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused

The client was trying to connect to port 9083, and netstat confirmed that nothing was listening on that port. My first thought was that hiveserver had not started properly, yet the hiveserver process was there — just listening on port 10000.
Checking the hive-site.xml configuration: the hive client connects to port 9083, while hiveserver listens on 10000 by default. Root cause found.
Fix:

Shell code

hive --service hiveserver -p 9083

# or edit the hive.metastore.uris entry in $HIVE_HOME/conf/hive-site.xml
# and change its port to 10000
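
Either way, verify the fix by confirming that the port the client expects is now being listened on:

Shell code

netstat -tlnp | grep 9083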

