The maximum path component name limit

Today a colleague's test job kept exiting abnormally.

The relevant job logs showed:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException): The maximum path component name limit of job_1542872443206_7299723-1551753291148-lf_cp_serv-%2D%2D%E5%93%81%E7%89%8C%E6%96%B0%E5%A2%9E%2D%E5%93%81%E7%89%8C%E8%BF%91%E5%85%AD%E6%9C%88%E7%B4%AF%E8%AE%A1%E5%AD%90%E6%96%B0%E5%A2%9E%E7%94%A8%E6%88%B7%E5%88%86%E5%B8%83%2D%E5%9C%B0%E5%B8%82%0Aselect+city_no%2C...t%28Stage-1551753352217-95-0-SUCCEEDED-root.ia_serv-1551753296447.jhist_tmp in directory /user/history/done_intermediate/lf_cp_serv is exceeded: limit=255 length=341
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxComponentLength(FSDirectory.java:2224)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2335)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addLastINode(FSDirectory.java:2304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addINode(FSDirectory.java:2087)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addFile(FSDirectory.java:390)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2949)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2826)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2711)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:602)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)

The cause: the JobHistory .jhist file name embeds the MapReduce job name, and here the job name is the Hive query text itself (visible URL-encoded in the log), so the resulting path component exceeds HDFS's 255-character limit (length=341). Two solutions:

1: Change the parameter dfs.namenode.fs-limits.max-component-length to 0, which removes the limit. Very long file names are not recommended, though, since they hurt Hadoop's performance. A config sketch follows below.
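A minimal hdfs-site.xml sketch of this change. One assumption worth flagging: the check runs on the NameNode, so the new value only takes effect once the NameNode picks up the config, typically after a restart.

    <!-- hdfs-site.xml: maximum length of each path component; 0 disables the check (default 255) -->
    <property>
      <name>dfs.namenode.fs-limits.max-component-length</name>
      <value>0</value>
    </property>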

2: Set the job name explicitly: set mapreduce.job.name=XXX
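In a Hive session the statement goes right before the query; brand_6m_new_users below is a hypothetical short label standing in for the long auto-generated name seen in the log:

    -- hypothetical short name; anything well under the 255-character limit works
    set mapreduce.job.name=brand_6m_new_users;
    select city_no, ...;  -- the original query, unchanged

This keeps the .jhist path component short regardless of how long the query text is.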
