Using the Kafka Message Middleware Java API

Reference: https://www.orchome.com/451

Installing the Kafka cluster was covered in the previous article. This article shows how to send and receive Kafka messages through the Java API.

 

1. Kafka client dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.1</version>
</dependency>

2. Kafka producer API

package kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {

    private static final String MY_TOPIC = "my-topic";

    public static void main(String[] args) {
        Properties properties = new Properties();

        // Kafka bootstrap servers (broker addresses)
        properties.put("bootstrap.servers", "127.0.0.1:9092,127.0.0.1:9093");

        // Acknowledgment mode: "all" waits for the full set of in-sync replicas to acknowledge the record
        properties.put("acks", "all");

        // If a request fails, the producer can retry automatically; here we allow 0 retries.
        // Enabling retries introduces the possibility of duplicate messages.
        properties.put("retries", 0);
        properties.put("batch.size", 16384);

        // By default the producer sends as soon as possible, even if the batch is not full.
        // Setting linger.ms greater than 0 delays sends slightly to reduce the number of requests.
        properties.put("linger.ms", 1);

        // Total memory available to the producer for buffering. If records are produced
        // faster than they can be delivered to the server, this buffer will be exhausted.
        properties.put("buffer.memory", 33554432);

        // Key and value serializers
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Create the producer and send messages
        try (Producer<String, String> producer = new KafkaProducer<>(properties)) {
            for (int i = 0; i < 100; i++) {
                String msg = "Message-index-" + i;
                producer.send(new ProducerRecord<>(MY_TOPIC, msg));
                System.out.println("Sent: " + msg);
            }
        }
    }
}
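
send() is asynchronous and returns a Future<RecordMetadata>, so the loop above does not show whether each record was actually delivered. If you want to see the delivery result (partition and offset) per record, you can pass a callback to send(). Below is a minimal sketch under the same assumptions as the producer above (same broker addresses, same topic); the class name ProducerCallbackDemo is just an illustrative choice.

package kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerCallbackDemo {

    private static final String MY_TOPIC = "my-topic";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "127.0.0.1:9092,127.0.0.1:9093");
        properties.put("acks", "all");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(properties)) {
            // The callback is invoked once the broker has acknowledged (or rejected) the record
            producer.send(new ProducerRecord<>(MY_TOPIC, "Message-with-callback"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Delivery failed after any configured retries
                            exception.printStackTrace();
                        } else {
                            System.out.println("Delivered to partition " + metadata.partition()
                                    + " at offset " + metadata.offset());
                        }
                    });
        }
    }
}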

Result of sending the messages:

 

3. Kafka consumer API

package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class ConsumerDemo {

    private static final String MY_TOPIC = "my-topic";

    public static void main(String[] args) {
        Properties properties = new Properties();

        // Kafka bootstrap server (broker address)
        properties.put("bootstrap.servers", "127.0.0.1:9092");

        // Consumer group this consumer belongs to
        properties.put("group.id", "group-1");

        // Commit offsets automatically after consuming; this can also be switched to
        // manual commits (a manual-commit variant is sketched after the code below)
        properties.put("enable.auto.commit", true);

        // Auto-commit interval
        properties.put("auto.commit.interval.ms", "1000");
        properties.put("auto.offset.reset", "earliest");

        // If no heartbeat is received for longer than session.timeout.ms, the consumer is
        // considered dead and its partitions are reassigned to other members of the group
        properties.put("session.timeout.ms", "30000");

        // Key and value deserializers
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Subscribe to the my-topic topic
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList(MY_TOPIC));

        // Poll for messages and consume them in an endless loop
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("records count: " + records.count());
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
                System.out.println();
            }
        }
    }
}
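
The configuration above commits offsets automatically. If you would rather commit offsets manually, for example only after the records returned by a poll have all been processed, the sketch below shows one way to do it: disable enable.auto.commit and call commitSync() after processing. It reuses the same broker address, group and topic as the consumer above; the class name ManualCommitConsumerDemo is only illustrative.

package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class ManualCommitConsumerDemo {

    private static final String MY_TOPIC = "my-topic";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "127.0.0.1:9092");
        properties.put("group.id", "group-1");

        // Disable auto-commit so offsets are only committed by the explicit call below
        properties.put("enable.auto.commit", false);
        properties.put("auto.offset.reset", "earliest");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList(MY_TOPIC));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
            // Commit the offsets of this batch only after all records have been processed;
            // commitSync blocks until the commit succeeds or fails with an exception
            consumer.commitSync();
        }
    }
}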

The result of consuming the messages looks like the following:
