Kafka connector commit failure

1. The stack trace

[2016-07-01 15:58:55,889] ERROR Commit of Thread[WorkerSinkTask-beaver_http_response-connector-0,5,main] offsets threw an unexpected exception:  (org.apache.kafka.connect.runtime.WorkerSinkTask:101)
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed due to group rebalance
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:288)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:180)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:861)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:828)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:171)
    at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
    at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
    at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
[2016-07-01 15:58:55,889] ERROR Commit of Thread[WorkerSinkTask-beaver_http_response-connector-8,5,main] offsets threw an unexpected exception:  (org.apache.kafka.connect.runtime.WorkerSinkTask:101)

2. The cause

There is an important concept here: Kafka manages how partition consumption is assigned across consumers.
(1) When a consumer has fetched data but does not commit within a certain window, Kafka assumes the consumer is dead and triggers a rebalance. This window is related to the "heartbeat.interval.ms" setting (strictly speaking, "session.timeout.ms" is the deadline for declaring a consumer dead; "heartbeat.interval.ms" controls how often heartbeats are sent).
(2) Each time the consumer polls data from Kafka, the amount returned per poll is bounded, governed by the "max.partition.fetch.bytes" setting.

Setting Kafka Connect aside for a moment: these two settings have to be coordinated. If max.partition.fetch.bytes is too large and the heartbeat/session window too short, then congratulations, you are guaranteed to see the stack trace above.
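As a sketch of that coordination (the broker address, group id, and numbers below are hypothetical, not from this incident), a 0.9-era consumer configuration that keeps the per-poll payload small relative to the session window might look like:

```properties
# Hypothetical consumer settings: keep the per-poll payload small enough
# that processing finishes well inside the session timeout.
bootstrap.servers=localhost:9092
group.id=example-sink-group

# Bound on bytes fetched per partition per poll (0.9 default: 1048576).
max.partition.fetch.bytes=262144

# Deadline after which a silent consumer is declared dead (default: 30000).
session.timeout.ms=30000

# How often heartbeats are sent; should be well below session.timeout.ms.
heartbeat.interval.ms=3000
```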

Now, bring Kafka Connect back into the picture. It has an easy-to-miss setting: "offset.flush.interval.ms". It matters a great deal: it is the interval at which offsets are committed, and it defaults to 60 seconds. I checked my configuration:

"heartbeat.interval.ms":"30000"

In other words, if the Kafka connector goes 30 seconds without committing offsets to Kafka, a rebalance kicks in. After I changed the configuration, the whole world went quiet.
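For reference, bringing the commit interval inside that 30-second window is a one-line change in the Kafka Connect worker properties (the value here is illustrative):

```properties
# Commit offsets every 10 s instead of the 60 s default, so commits
# always land inside the 30 s rebalance window described above.
offset.flush.interval.ms=10000
```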

3. Open question

This stack trace only shows up when the data volume is very large. Why does it not appear when the volume is small?

\kafka-2.0.0\clients\src\main\java\org\apache\kafka\clients\consumer\KafkaConsumer.java

The pollOnce method of the class above may hold the answer.
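A back-of-the-envelope sketch suggests one answer (the partition count and sink throughput below are made-up numbers, since the post does not give them): under heavy load a single poll() can return so much data that the sink takes longer than the session timeout to flush it, so by the time the next commit is attempted the group has already rebalanced. With light traffic the batch is small, the flush is quick, and the commit always makes the deadline.

```java
// Back-of-the-envelope sketch (hypothetical numbers): why heavy traffic can
// push the gap between poll()/commit calls past the session timeout.
public class RebalanceBudget {
    public static void main(String[] args) {
        long maxPartitionFetchBytes = 1_048_576L;    // Kafka 0.9 default, per partition
        int partitionsAssigned = 50;                 // hypothetical assignment size
        long sinkThroughputBytesPerSec = 1_000_000L; // hypothetical sink write speed
        double sessionTimeoutSec = 30.0;             // the 30 s deadline from the config above

        // Worst case: every assigned partition returns a full fetch in one poll().
        long bytesPerPoll = maxPartitionFetchBytes * partitionsAssigned;

        // Time spent flushing that batch before the next poll()/commit can happen.
        double secondsBetweenCommits = (double) bytesPerPoll / sinkThroughputBytesPerSec;

        System.out.printf("bytes per poll:          %d%n", bytesPerPoll);
        System.out.printf("seconds between commits: %.1f%n", secondsBetweenCommits);
        System.out.println(secondsBetweenCommits > sessionTimeoutSec
                ? "rebalance risk: YES" : "rebalance risk: no");
    }
}
```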

For readers who prefer English, here is a related discussion:
http://stackoverflow.com/questions/35658171/kafka-commitfailedexception-consumer-exception
The configuration reference is here:
https://kafka.apache.org/090/configuration.html
