Error Analysis: Unable to close file because the last block does not have enough number of replicas

1. Problem

Running a Spark or Hive script fails with the following error:

[INFO] 2020-03-31 11:06:03  -> java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
		at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2266)
		at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2233)
		at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
		at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
		at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
		at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
		at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
		at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
		at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
		at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356)
		at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478)
		at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11$$anonfun$apply$6.apply(Client.scala:600)
		at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11$$anonfun$apply$6.apply(Client.scala:599)
		at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
		at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11.apply(Client.scala:599)
		at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$11.apply(Client.scala:598)
		at scala.collection.immutable.List.foreach(List.scala:381)
		at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:598)
		at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:869)
		at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
		at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
		at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
		at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
		at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
		at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
		at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
		at scala.Option.getOrElse(Option.scala:121)
		at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
		at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:48)
		at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:317)
		at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:166)
		at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		at java.lang.reflect.Method.invoke(Method.java:498)
		at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
		at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
		at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
		at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
		at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
		at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
	20/03/31 11:06:03 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!

2. Analysis

 java.io.IOException: Unable to close file because the last block does not have enough number of replicas.

On the surface, this error says the file cannot be closed because its last block does not have enough replicas. It is easy to jump to the conclusion that the NameNode failed to allocate block storage locations for the client's request, i.e. that the NameNode itself is at fault. Most answers found online likewise blame the network or the CPU, but none of them gives the root cause.

The HDFS write path looks like this:
[Figure: HDFS write pipeline and DataNode acknowledgement flow]
A detailed walkthrough of the HDFS file write flow will not be repeated here; see: https://www.cnblogs.com/Java-Script/p/11090379.html
Once every DataNode in the write pipeline has finished (which ultimately depends on the last replica being written), each DataNode passes an acknowledgement back to its upstream node, and the ack finally reaches the client. Does that mean the write has succeeded?? The question marks are deliberate: step 9 of the flow above is not the whole story.

After the last block has been written, the client still has to close the stream, and closing the stream issues an RPC to the NameNode announcing that the file write is finished; only when that call returns true is the write truly complete. The diagram above is missing this step, where the client confirms file completion with the NameNode. To verify this, tracing the class and method names printed in the stack trace leads straight to the source of org.apache.hadoop.hdfs.DFSOutputStream.completeFile:

  private void completeFile(ExtendedBlock last) throws IOException {
    long localstart = Time.monotonicNow();
    // Retry interval: starts at 400 ms and is not configurable
    long localTimeout = 400;
    boolean fileComplete = false;
    // Number of RPC retries, 5 by default
    int retries = dfsClient.getConf().nBlockWriteLocateFollowingRetry;
    // The body of the loop is only re-entered while the request keeps failing
    while (!fileComplete) {
      // RPC to the NameNode telling it the data has been fully written;
      // once the NameNode replies true, the file write is truly complete
      fileComplete =
          dfsClient.namenode.complete(src, dfsClient.clientName, last, fileId);
      if (!fileComplete) {
        final int hdfsTimeout = dfsClient.getHdfsTimeout();
        if (!dfsClient.clientRunning
            || (hdfsTimeout > 0
                && localstart + hdfsTimeout < Time.monotonicNow())) {
            String msg = "Unable to close file because dfsclient " +
                          " was unable to contact the HDFS servers." +
                          " clientRunning " + dfsClient.clientRunning +
                          " hdfsTimeout " + hdfsTimeout;
            DFSClient.LOG.info(msg);
            throw new IOException(msg);
        }
        try {
          // Once all retries are used up, the error we saw above is thrown
          if (retries == 0) {
            throw new IOException("Unable to close file because the last block"
                + " does not have enough number of replicas.");
          }
          retries--;
          Thread.sleep(localTimeout);
          localTimeout *= 2;
          if (Time.monotonicNow() - localstart > 5000) {
            DFSClient.LOG.info("Could not complete " + src + " retrying...");
          }
        } catch (InterruptedException ie) {
          DFSClient.LOG.warn("Caught exception ", ie);
        }
      }
    }
  }
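
For context, here is a minimal client-side write, a sketch assuming a reachable HDFS cluster and a throwaway path of this note's choosing; the close() at the end of the try block is what walks down through DFSOutputStream.close() into completeFile() above, and is therefore where this exception surfaces to the caller:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
      // Picks up fs.defaultFS and the rest from core-site.xml/hdfs-site.xml on the classpath
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      // /tmp/write-sketch.txt is only a placeholder path
      try (FSDataOutputStream out = fs.create(new Path("/tmp/write-sketch.txt"))) {
        out.writeBytes("hello hdfs");
      } // close() -> DFSOutputStream.close() -> completeFile() -> namenode.complete() RPC
    }
  }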

So the root cause is that the RPC from the client to the NameNode fails while the stream is being closed after the file has been written! With the default 5 retries and a back-off that starts at 400 ms and doubles each time, the client only gives the NameNode about 0.4 + 0.8 + 1.6 + 3.2 + 6.4 ≈ 12.4 seconds of retrying before giving up, so a NameNode that is briefly overloaded is enough to trigger the exception.

3. Checking the Monitoring

[Figure: NameNode RPC queue processing time]
The figure above shows the NameNode RPC queue processing time. There is a spike at 11:06 with RPC requests piling up, and the script failed at exactly 11:06, so the timing matches and confirms the analysis above. During that spike a large number of write-heavy jobs were probably running; the NameNode was under heavy load and could not keep up, so requests queued up and were delayed. In principle, any cluster with a single active NameNode will run into this once it reaches a certain scale.

4. Solutions

Short term: increase the retry count. The default is 5; raise dfs.client.block.write.locateFollowingBlock.retries to 10 or more (a config sketch follows after this list).
Long term: the existing Hadoop cluster runs HA, but only one NameNode is ever active, so all the load lands on a single NameNode. Setting up HDFS Federation allows several active NameNodes at the same time, which spreads the load off the single node.
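
A minimal sketch of where the short-term setting could live; the property name is the real HDFS client setting, while the value of 10 and placing it on the gateway node that submits the jobs are assumptions for illustration:

  <!-- hdfs-site.xml on the client/gateway node that submits the Spark/Hive jobs -->
  <property>
    <name>dfs.client.block.write.locateFollowingBlock.retries</name>
    <value>10</value>
  </property>

For Spark jobs the same setting can also be passed per job, for example --conf spark.hadoop.dfs.client.block.write.locateFollowingBlock.retries=10 on spark-submit, since spark.hadoop.* properties are copied into the job's Hadoop Configuration.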
