Consuming very large messages from a Kafka topic

A business system pushes very large JSON strings to Kafka: a single JSON document is about 6 MB, roughly 360,000 lines, with four levels of nesting. We need to receive this data from Kafka and parse it.
During testing, we had to produce this JSON string to the test topic ourselves, and a string this large cannot be pasted into the console producer. So we wrote a Java producer that reads the file and sends it to the topic. It threw no error, yet the message could not be consumed.
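A minimal sketch of such a producer is shown below. The class name, topic name `test-topic`, and broker address are illustrative; the actual `KafkaProducer` send (which requires the kafka-clients dependency and a running broker) is shown in comments. The key point is that `max.request.size` must be raised above the payload size, otherwise the producer silently fails to deliver the oversized record.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class LargeJsonProducerSketch {

    // Read the entire JSON file into a single String payload.
    static String readPayload(Path jsonFile) throws IOException {
        return new String(Files.readAllBytes(jsonFile), StandardCharsets.UTF_8);
    }

    // Build producer properties sized for the payload. max.request.size must
    // exceed the serialized record size (the default is about 1 MB).
    static Properties producerProps(String bootstrapServers, int payloadBytes) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Leave 1 MB of headroom above the raw payload size.
        props.put("max.request.size", Integer.toString(payloadBytes + 1024 * 1024));
        return props;
    }

    public static void main(String[] args) throws IOException {
        Path jsonFile = Paths.get(args.length > 0 ? args[0] : "big.json");
        if (!Files.exists(jsonFile)) {
            System.out.println("usage: LargeJsonProducerSketch <path-to-json>");
            return;
        }
        String payload = readPayload(jsonFile);
        Properties props = producerProps("localhost:9092",
                payload.getBytes(StandardCharsets.UTF_8).length);
        // With kafka-clients on the classpath and a broker running:
        // try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        //     producer.send(new ProducerRecord<>("test-topic", payload));
        //     producer.flush();
        // }
        System.out.println("max.request.size=" + props.getProperty("max.request.size"));
    }
}
```

Note that raising `max.request.size` alone is not enough; the broker and consumer limits discussed below must be raised in step.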

In this situation, several size-related Kafka parameters need to be configured. The relevant ones are listed below.

Consumer side : fetch.message.max.bytes

  • this will determine the largest size of a message that can be fetched by the consumer.

Broker side : replica.fetch.max.bytes

  • this will allow for the replicas in the brokers to send messages within the cluster and make sure the messages are replicated correctly. If this is too small, then the message will never be replicated, and therefore, the consumer will never see the message because the message will never be committed (fully replicated).

Broker side : message.max.bytes

  • this is the largest size of the message that can be received by the broker from a producer.

Broker side (per topic) : max.message.bytes

  • this is the largest size of the message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to broker’s message.max.bytes.)

Producer side : max.request.size

  • increase this to allow the producer to send the larger message.

Consumer side : max.partition.fetch.bytes

  • increase this to allow the consumer to receive larger messages.
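Gathering the settings above in one place, a sketch of the client-side configuration might look like the following. The 10 MB value is illustrative, chosen to sit comfortably above the ~6 MB JSON payload; the broker-side entries cannot be set from client code and are shown as `server.properties` comments.

```java
import java.util.Properties;

public class LargeMessageConfigSketch {

    // 10 MB: comfortably above the ~6 MB JSON payload (illustrative value).
    static final int MAX_BYTES = 10 * 1024 * 1024;

    // Producer side: the whole request must fit within max.request.size.
    static Properties producerProps() {
        Properties p = new Properties();
        p.put("max.request.size", Integer.toString(MAX_BYTES));
        return p;
    }

    // Consumer side: a single partition fetch must be allowed to return one
    // full message, and the total fetch cap must be at least as large.
    static Properties consumerProps() {
        Properties p = new Properties();
        p.put("max.partition.fetch.bytes", Integer.toString(MAX_BYTES));
        p.put("fetch.max.bytes", Integer.toString(MAX_BYTES));
        return p;
    }

    // Broker side (server.properties), restart or dynamic reconfig required:
    //   message.max.bytes=10485760
    //   replica.fetch.max.bytes=10485760
    //
    // Per-topic override (applies max.message.bytes to one topic only):
    //   kafka-configs.sh --alter --entity-type topics --entity-name test-topic \
    //       --add-config max.message.bytes=10485760
}
```

All three sides (producer, broker, consumer) must agree: if any one limit stays at its default, the large message is dropped or never becomes visible to the consumer.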
