Java producer and consumer examples after setting up Kafka

Kafka can be set up by following the official quickstart: http://kafka.apache.org/quickstart

Note: the version used here is kafka_2.12-2.2.0.

The official quickstart also includes shell versions of the producer and consumer examples, which normally run without any problems:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Next, let's simulate the producer and the consumer with Java examples; the code follows below.
To be clear up front: the Java code itself is fine, but running it throws an error, and tracking that error down turned out to be quite painful.

package com.alioo;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Date;
import java.util.Properties;

public class KafkaProducerNew {

    private final KafkaProducer<String, String> producer;

    public final static String TOPIC = "test";

    private KafkaProducerNew() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "mytest1:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        producer = new KafkaProducer<String, String>(props);
    }

    public void produce() {
        int messageNo = 1;
        final int COUNT = 10;

        while(messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = String.format("hello KafkaProducer message(%s) %s",new Date(), key);

            try {
                // send() is asynchronous: it only enqueues the record, so failures raised on the
                // producer I/O thread will not show up here (hence the need for proper logging).
//                ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, 1, null, data); // send to partition 1
                ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, null, null, data);
                producer.send(record);
            } catch (Exception e) {
                e.printStackTrace();
            } catch (Throwable e) {
                System.out.println("Throwable...");
                e.printStackTrace();
            }

            messageNo++;
        }

        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducerNew().produce();
    }

}
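
Since producer.send() only enqueues the record asynchronously, errors raised on the I/O thread never reach the try/catch above. A minimal variant of the send call (a sketch using the standard Callback API, not part of the original example) surfaces broker-side failures directly in application code:

// Drop-in replacement for producer.send(record) above; additionally requires
// import org.apache.kafka.clients.producer.Callback;
// import org.apache.kafka.clients.producer.RecordMetadata;
producer.send(record, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // broker/network failures surface here even without DEBUG logging
            exception.printStackTrace();
        } else {
            System.out.printf("sent to %s-%d@%d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
});

The consumer example:
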
package com.alioo;

import org.apache.kafka.clients.consumer.*;

import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerNew {

    private Consumer<String, String> consumer;

    private static String group = "group-1";

    private static String TOPIC = "test";

    private KafkaConsumerNew() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mytest1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // earliest or latest
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"); // auto-commit offsets
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000"); // auto-commit interval (ms)
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<String, String>(props);
    }

    private void consume() {
        consumer.subscribe(Arrays.asList(TOPIC)); // several topics can be consumed by passing a larger list

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(10); // poll(long) is deprecated in 2.2.0; see the Duration variant below
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s \n", record.offset(), record.key(), record.value());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerNew().consume();
    }
}
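
In kafka-clients 2.2.0, poll(long) is deprecated in favour of poll(Duration), and auto-commit can be replaced with explicit commits for tighter control over offsets. A sketch of the poll loop with those two changes (same topic and properties as above, but with ENABLE_AUTO_COMMIT_CONFIG set to "false"):

// Additionally requires: import java.time.Duration;
// and props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false") in the constructor.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n",
                record.offset(), record.key(), record.value());
    }
    // Commit only after the whole batch has been processed.
    consumer.commitSync();
}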

Starting the producer KafkaProducerNew fails with the following error:

[ERROR 2019-05-17 16:42:13.269] [kafka-producer-network-thread | producer-1] [org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:237)] [Producer clientId=producer-1] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: No entry found for connection 0
        at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:143) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:926) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:337) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:310) ~[kafka-clients-2.2.0.jar:?]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.2.0.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
[DEBUG 2019-05-17 16:42:13.269] [kafka-producer-network-thread | producer-1] [org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:241)] [Producer clientId=producer-1] Beginning shutdown of Kafka producer I/O thread, sending remaining records.

The fix is simple: change one line of configuration in config/server.properties.

Before:
listeners=PLAINTEXT://:9092
After:
listeners=PLAINTEXT://10.3.114.70:9092

Restart Kafka for the change to take effect (once the listener is correct, you can verify it with netstat). The likely cause: when no host is given in the listener, the broker binds to all interfaces but advertises its canonical hostname, which the client machine may not be able to resolve; binding the listener to the broker's IP sidesteps that.


# netstat -tulnp|grep 9092
tcp6       0      0 :::9092                 :::*                    LISTEN      210349/java
# netstat -tulnp|grep 9092
tcp6       0      0 10.3.114.70:9092        :::*                    LISTEN      210927/java

After this change, the Java producer example KafkaProducerNew runs normally.

  • The Java consumer example KafkaConsumerNew now consumes messages normally.
  • The shell consumer example bin/kafka-console-consumer.sh also consumes messages normally.
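
To confirm from the Java side that the client can actually reach the broker after the listener change, a small probe with AdminClient can help. This is a sketch, not part of the original example: the class name ClusterCheck is hypothetical, and it assumes the same mytest1:9092 bootstrap address.

package com.alioo;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Properties;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "mytest1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // The host:port printed here is the address the broker advertises;
            // it must be resolvable and reachable from the client machine.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}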

Note: the key to getting to the bottom of problems like this in a Java program is configuring logging properly, so that the error log shown above actually gets printed.
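
For reference, one possible logging setup (an assumption, not necessarily the exact configuration used here): kafka-clients logs through SLF4J, so adding a binding such as slf4j-log4j12 plus a log4j.properties along these lines is enough to get the ERROR/DEBUG output above onto the console:

# log4j.properties (assumed setup; any SLF4J binding with a similar pattern works)
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%p %d{yyyy-MM-dd HH:mm:ss.SSS}] [%t] [%C.%M(%F:%L)] %m%n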
