While refactoring a project, we found that for activities with relatively large data volumes, tasks would occasionally time out after 1200 s, like this:
AttemptID:attempt_1410771599055_11709_m_000033_0 Timed out after 1200 secs
Hadoop keeps launching backup tasks to retry; a retry may succeed, but the probability of failure remains fairly high:
Analysis showed that every Hadoop task has a timeout, controlled by the parameter below: if a task makes no progress within 1200 s, it is considered timed out and its state is set to FAILED.
-Dmapreduce.task.timeout=1200000
But what was actually causing the timeouts? To keep digging into the problem, we set this parameter to a very large value.
Adjusting the timeout parameter
After raising the timeout to 24 hours, the task no longer FAILED, but it ran for roughly 40 hours and still had not finished.
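For the diagnosis run, the timeout can be raised the same way it was originally set, with a -D flag at submission time. A minimal sketch (the jar name, main class, and paths below are placeholders, not the project's real ones):

```shell
# Hypothetical job submission; jar, class, and paths are placeholders.
# 24 h = 24 * 60 * 60 * 1000 ms = 86400000 ms
hadoop jar yo-phase1.jar com.xxx.yo.phase1.Phase1Job \
    -Dmapreduce.task.timeout=86400000 \
    /input/path /output/path
```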
Fortunately we had added plenty of logging during task execution, which helped the analysis. The logs revealed the following pattern:
2014-09-22 00:17:29,005 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 77477) is collected!
2014-09-22 00:17:29,005 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 77477) is collected!
2014-09-22 00:17:29,005 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 77477) is collected!
2014-09-22 01:17:29,054 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 120096) is collected!
2014-09-22 01:17:29,064 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 120096) is collected!
2014-09-22 01:17:29,064 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 120096) is collected!
2014-09-22 01:17:29,064 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 120096) is collected!
...
2014-09-22 01:17:36,590 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 164747) is collected!
2014-09-22 01:17:36,590 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 164747) is collected!
2014-09-22 01:17:36,590 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 164747) is collected!
2014-09-22 01:17:36,590 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 164747) is collected!
2014-09-22 02:17:36,674 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 158198) is collected!
2014-09-22 02:17:36,683 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 158198) is collected!
2014-09-22 02:17:36,683 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 158198) is collected!
2014-09-22 02:17:36,683 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 158198) is collected!
...
2014-09-22 02:17:40,888 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 203233) is collected!
2014-09-22 02:17:40,888 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 203233) is collected!
2014-09-22 02:17:40,888 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 203233) is collected!
2014-09-22 03:17:40,925 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 79188) is collected!
2014-09-22 03:17:40,934 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 79188) is collected!
2014-09-22 03:17:40,934 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 79188) is collected!
2014-09-22 03:17:40,934 INFO [main] com.xxx.yo.phase1.Phase1Mapper: history(caid: 2000037, superid: 79188) is collected!
The conclusion from the log analysis: at some point the program pauses for 3600 seconds (roughly one hour), runs for a short while, and then pauses for another 3600 seconds.
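The hour-long gaps can be read straight off the timestamps. As a sketch (log lines abbreviated from the excerpt above; the file path is hypothetical), a few lines of plain awk turn consecutive log timestamps into gaps in seconds:

```shell
# Hypothetical gap analysis: timestamps are taken from the task log above.
cat <<'EOF' > /tmp/phase1_times.log
2014-09-22 00:17:29,005 INFO [main] Phase1Mapper: history collected
2014-09-22 01:17:29,054 INFO [main] Phase1Mapper: history collected
2014-09-22 02:17:36,674 INFO [main] Phase1Mapper: history collected
2014-09-22 03:17:40,925 INFO [main] Phase1Mapper: history collected
EOF
awk '{
  split($2, t, "[:,]")                 # "00:17:29,005" -> h, m, s, ms
  s = t[1] * 3600 + t[2] * 60 + t[3]   # seconds since midnight
  if (NR > 1) print s - prev           # gap to the previous entry
  prev = s
}' /tmp/phase1_times.log
# prints 3600, 3607, 3604 -- each gap is ~3600 s plus a few seconds of work
```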
A preliminary conclusion from the Hadoop configuration
We monitored the Java process during one of these hour-long pauses, and its thread dumps (taken with jstack) showed it waiting at the same point the whole time:
"main" prio=10 tid=0x000000000293f000 nid=0x1e06 runnable [0x0000000041b20000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:81)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        - locked <0x00000006e243c3f0> (a sun.nio.ch.Util$2)
        - locked <0x00000006e243c3e0> (a java.util.Collections$UnmodifiableSet)
        - locked <0x00000006e243c1a0> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:170)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:135)
        - locked <0x00000006e12dcc78> (a org.apache.hadoop.hdfs.RemoteBlockReader2)
        at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:642)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:698)
        - eliminated <0x00000006e12dcc18> (a org.apache.hadoop.hdfs.DFSInputStream)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:752)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
        - locked <0x00000006e12dcc18> (a org.apache.hadoop.hdfs.DFSInputStream)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at com.xxx.app.MzSequenceFile$PartInputStream.read(MzSequenceFile.java:451)
        at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at org.apache.hadoop.io.Text.readFields(Text.java:292)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
        at com.xxx.app.MzSequenceFile$Reader.deserializeValue(MzSequenceFile.java:672)
        at com.xxx.app.MzSequenceFile$Reader.next(MzSequenceFile.java:684)
        at com.xxx.app.MzSequenceFile$Reader.next(MzSequenceFile.java:692)
        at com.xxx.yo.io.CombineFileRawLogReader.streamNext(CombineFileRawLogReader.java:284)
        at com.xxx.yo.io.CombineFileRawLogReader.next(CombineFileRawLogReader.java:342)
        at com.xxx.yo.io.CampaignRawLogReader.next(CampaignRawLogReader.java:73)
        at com.xxx.yo.io.CampaignRawLogReader.next(CampaignRawLogReader.java:23)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
        - locked <0x00000006e01dd3e0> (a org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
        - locked <0x00000006e01dd3e0> (a org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
Our first suspicion was a JDK version problem: Java NIO does have the notorious epoll empty-poll bug that spins the CPU to 100%, but that bug was fixed in later JDK 6 releases, and we were running 1.7 anyway. Moreover, `top -p <pid>` showed the process at 0% CPU, which ruled that bug out.
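The monitoring itself can be scripted so the silent hour is captured automatically. A minimal sketch under stated assumptions (the pid is a placeholder for the YarnChild process, which could be found with `jps -l` on the node):

```shell
# Hypothetical sampling loop: once a minute, record thread stacks and CPU usage
# of the stuck task JVM. PID is a placeholder for the actual YarnChild pid.
PID=12345
for i in $(seq 1 60); do
    date >> /tmp/stuck-task.log
    jstack "$PID" >> /tmp/stuck-task.log                    # where is "main" blocked?
    top -b -n 1 -p "$PID" | tail -2 >> /tmp/stuck-task.log  # 0% CPU rules out busy-spin
    sleep 60
done
```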
Following the 3600 s clue from the logs, we searched the job configuration and found the parameter dfs.client.socket-timeout (in milliseconds):
-Ddfs.client.socket-timeout=3600000
As an experiment we shrank this parameter to 60 ms. As expected, timeouts then occurred with very high probability, but the client kept retrying and carried on:
2014-09-26 12:53:03,184 WARN [main] org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.7.22:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.7.17:22051 remote=/192.168.7.22:50010]
java.net.SocketTimeoutException: 60 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.7.17:22051 remote=/192.168.7.22:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1490)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
        at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:131)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1108)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:533)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:601)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at com.xxx.app.MzSequenceFile$Reader.init(MzSequenceFile.java:521)
        at com.xxx.app.MzSequenceFile$Reader.<init>(MzSequenceFile.java:515)
        at com.xxx.app.MzSequenceFile$Reader.<init>(MzSequenceFile.java:505)
        at com.xxx.yo.io.CombineFileRawLogReader.<init>(CombineFileRawLogReader.java:146)
        at com.xxx.yo.io.CampaignRawLogReader.next(CampaignRawLogReader.java:64)
        at com.xxx.yo.io.CampaignRawLogReader.next(CampaignRawLogReader.java:22)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
In the end we set this parameter to 60 s, which is also the cluster's default timeout. An earlier "optimization" made without fully understanding the parameter had caused all of these downstream problems; when tuning a parameter, make sure you understand its effects first.
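The fix above can also be applied per job at submission time rather than cluster-wide. A hypothetical invocation (jar, class, and paths are placeholders) restoring the 60 s default:

```shell
# Hypothetical job submission; jar, class, and paths are placeholders.
# 60 s = 60000 ms, the default HDFS client read timeout this cluster reverted to.
hadoop jar yo-phase1.jar com.xxx.yo.phase1.Phase1Job \
    -Ddfs.client.socket-timeout=60000 \
    /input/path /output/path
```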
Brief conclusions from the analysis
When the mapper reads files from HDFS, network issues can cause data reads to time out (our splits are large, so full data locality is impossible). That read timeout had, astonishingly, been set to one hour, while the task timeout was only 20 minutes; so any single read timeout was guaranteed to trigger a task timeout.
This investigation taught us a lot about tracking down problems: infer patterns from the symptoms to get clues, then follow the clues to the root cause; test quickly; never ignore even a small exception; and when all else fails, read the Hadoop source to understand its runtime behavior.
As a side note: if the tasktracker receives no progress report from a task for a period of time (10 minutes by default, configurable in milliseconds via the mapred.task.timeout property), it marks the task as failed.