ERROR Failed to clean up log for __consumer_offsets-30 in dir D:\kafka_2.13-2.5.0\kafka-logs

ERROR Failed to clean up log for __consumer_offsets-30 in dir D:\kafka_2.13-2.5.0\kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.cleaned -> D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.

        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:834)
        at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:207)
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
        at kafka.log.Log.$anonfun$replaceSegments$4(Log.scala:2269)
        at kafka.log.Log.$anonfun$replaceSegments$4$adapted(Log.scala:2269)
        at scala.collection.immutable.List.foreach(List.scala:305)
        at kafka.log.Log.replaceSegments(Log.scala:2269)
        at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:594)
        at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:519)
        at kafka.log.Cleaner.doClean(LogCleaner.scala:518)
        at kafka.log.Cleaner.clean(LogCleaner.scala:492)
        at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:361)
        at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:334)
        at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:314)
        at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:303)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
        Suppressed: java.nio.file.FileSystemException: D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.cleaned -> D:\kafka_2.13-2.5.0\kafka-logs\__consumer_offsets-30\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.

                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:831)
                ... 15 more
[2020-07-03 19:30:24,414] WARN [ReplicaManager broker=0] Stopping serving replicas in dir D:\kafka_2.13-2.5.0\kafka-logs (kafka.server.ReplicaManager)

This error shows up not long after Kafka starts:

Reference: https://community.microstrategy.com/s/article/Kafka-could-not-be-started-due-to-Failed-to-clean-up-log-for-consumer-offsets-in-MicroStrategy-10-x?language=en_US

Why is this happening? 

This is caused by a defect in Apache Kafka where the service crashes upon trying to clean up data files that have exceeded the retention policy. This crash can occur after the service has been running for some time, or on startup. The data files contain all data that has been received by the Telemetry Server (i.e. Platform Analytics Statistics and DSSErrors log contents) and by default are automatically cleaned up after 7 days. See the Apache website for more details of the Kafka issue.
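The method that throws in the trace above is `Utils.atomicMoveWithFallback`: it first attempts an atomic `Files.move`, and if that fails it retries with a plain replace, attaching the first failure as a suppressed exception (which is exactly the `Suppressed:` entry in the log). A minimal sketch of that pattern, with illustrative file names standing in for Kafka's real `.cleaned`/`.swap` cleaner files:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveDemo {

    // Sketch of the pattern in org.apache.kafka.common.utils.Utils.atomicMoveWithFallback:
    // try an atomic rename first; if the filesystem rejects it, fall back to a
    // non-atomic replace. On Windows, either step can throw FileSystemException
    // when another handle (for example a memory-mapped index file) still holds
    // the source file open.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException inner) {
                // Matches the "Suppressed:" entry in the stack trace above.
                inner.addSuppressed(outer);
                throw inner;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative stand-ins for 00000000000000000000.timeindex.cleaned/.swap
        Path dir = Files.createTempDirectory("kafka-demo");
        Path cleaned = dir.resolve("00000000000000000000.timeindex.cleaned");
        Path swap = dir.resolve("00000000000000000000.timeindex.swap");
        Files.write(cleaned, new byte[]{1, 2, 3});

        // On Linux this rename succeeds even if the file is open elsewhere;
        // Windows sharing semantics are what make it fail in the log above.
        atomicMoveWithFallback(cleaned, swap);
        System.out.println(Files.exists(swap) && !Files.exists(cleaned));
    }
}
```

On POSIX filesystems the rename succeeds even while the file is open, which is why this crash is reported almost exclusively on Windows brokers.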

So according to that article, the service crashes while Kafka is trying to clean up data files that have exceeded the retention policy.

https://issues.apache.org/jira/browse/KAFKA-7278 collects many reports of this same problem. From a quick read, it appears to be an issue in Kafka itself. We still need to decide how to deal with it, though, and that starts with understanding what the problem essentially is.

The log above records when the failure happened, but there is no telling when it will happen next.

 
