Understanding the Kafka Message Queue in One Article

The Concept of a Message Queue

A message queue is a middleware component used for communication between systems. It buffers messages (with persistence) between producers and consumers, helping with concurrent processing and database buffering, and smoothing out traffic peaks ("peak shaving and valley filling") in high-concurrency business scenarios.

Scenarios for Using a Message Queue

Asynchronous messaging:

With Kafka's MQ capability, modules can communicate asynchronously: time-consuming work is handed off to a separate service or device, which improves system throughput and releases connections sooner. For example, in a user registration module, after a user registers successfully the business system needs to send an SMS notification and an email asking the user to activate the newly registered account. Because sending SMS and email is relatively slow, there is no need to run those steps inside the registration flow itself. Using a Message Queue separates the notification work from the core registration logic, which shortens the time the user's browser stays connected to the server while still guaranteeing that the SMS and email get sent. A minimal producer sketch of this hand-off follows.
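
The sketch below is illustrative, not from the original article: the topic name "user-register" and the JSON payload format are assumptions. The registration flow publishes one event and returns immediately; a separate notification service consumes the topic and actually delivers the SMS and email.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class RegistrationNotifier {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The registration flow returns to the user right after this send;
            // a separate notification service consumes "user-register" and
            // actually delivers the SMS and activation email.
            producer.send(new ProducerRecord<>("user-register", "user-1001",
                    "{\"email\":\"user1001@example.com\",\"phone\":\"13800000000\"}"));
        }
    }
}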

Decoupling Systems

① In some high-throughput scenarios, the system may face heavy write pressure during a short window, with a large volume of data that must be persisted to the database. Persistence depends on the database, and traditional databases (and even some NoSQL products) do not handle highly concurrent writes well: besides serving client connections, the database also has to flush the incoming data to disk, which takes time. In high-concurrency write scenarios, a Message Queue can therefore be placed in front of the database as a buffer. On the other side of the queue, a consumer program writes the data into the database at a steady pace, so no matter how heavy the external write pressure becomes, the Message Queue absorbs it and relieves the database.

② Besides buffering writes, a Message Queue can also act as middleware that decouples business services from one another. For example, an order module and an inventory module can use a Message Queue as a buffer between them; even if the inventory system goes down while the services are running, the order system keeps operating normally.

Kafka Architecture

A Kafka cluster manages Records by Topic; every Record belongs to exactly one Topic. Under the hood, the cluster persists Records as partitioned logs. For each partition of a Topic, one Broker acts as the partition's Leader and other Brokers act as its Followers (how many depends on the partition's replication factor). If the Leader of a partition goes down, the cluster assigns a new Broker as that partition's Leader; leader election is built on ZooKeeper features and is not covered here. The Leader handles all reads and writes for its partition, while the Followers replicate its data. You can inspect the leader and replica assignment of each partition with the describe command shown below.
(Figure: Kafka architecture diagram)
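
To see which Broker leads each partition, describe a topic with the command below (the sample output is illustrative; broker ids and assignments depend on your cluster):

./bin/kafka-topics.sh --describe --zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181 --topic topic01
# Topic:topic01  PartitionCount:3  ReplicationFactor:3  Configs:
#   Topic: topic01  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1,2
#   Topic: topic01  Partition: 1  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0
#   Topic: topic01  Partition: 2  Leader: 2  Replicas: 2,0,1  Isr: 2,0,1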

Installing a Kafka Cluster

Prerequisites

Prepare three Linux hosts named CentOSA, CentOSB, and CentOSC.
On each host, disable the firewall, map all three host names in /etc/hosts, synchronize the system clocks, and install and configure JDK 8.

Install a ZooKeeper cluster (Kafka depends on it to run)

tar -zxf zookeeper-3.4.6.tar.gz -C /usr/
mkdir /root/zkdata

# Run the matching command on each machine (CentOSA → 1, CentOSB → 2, CentOSC → 3)
echo 1 > /root/zkdata/myid   # on CentOSA only
echo 2 > /root/zkdata/myid   # on CentOSB only
echo 3 > /root/zkdata/myid   # on CentOSC only

touch /usr/zookeeper-3.4.6/conf/zoo.cfg
vim /usr/zookeeper-3.4.6/conf/zoo.cfg

zoo.cfg

tickTime=2000
dataDir=/root/zkdata
clientPort=2181
initLimit=5
syncLimit=2

server.1=CentOSA:2887:3887
server.2=CentOSB:2887:3887
server.3=CentOSC:2887:3887

Start ZooKeeper and check its status (on every node)

/usr/zookeeper-3.4.6/bin/zkServer.sh start zoo.cfg
/usr/zookeeper-3.4.6/bin/zkServer.sh status zoo.cfg

Kafka Installation Steps

  • Download the Kafka distribution: http://archive.apache.org/dist/kafka/2.2.0/kafka_2.11-2.2.0.tgz
tar -zxf kafka_2.11-2.2.0.tgz -C /usr
vim /usr/kafka_2.11-2.2.0/config/server.properties
############################# Server Basics #############################
broker.id=[0|1|2]  # 0, 1, and 2 on the three machines respectively
############################# Socket Server Settings #############################
listeners=PLAINTEXT://CentOS[A|B|C]:9092 # CentOSA, CentOSB, CentOSC respectively
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/usr/kafka-logs
############################# Zookeeper #############################
zookeeper.connect=CentOSA:2181,CentOSB:2181,CentOSC:2181

Note: with this configuration, clients can only connect by host name. If clients need to connect by IP address, change the listeners=PLAINTEXT://CentOS[A|B|C]:9092 line above to:
advertised.listeners=PLAINTEXT://x.x.x.x:9092   # each broker's own IP

Starting the Service

cd /usr/kafka_2.11-2.2.0/
./bin/kafka-server-start.sh -daemon config/server.properties

Testing

  • Create a topic
./bin/kafka-topics.sh --zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181 --create --topic topic01 --partitions 3 --replication-factor 3
  • Consumer
./bin/kafka-console-consumer.sh  --bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01
  • Producer
./bin/kafka-console-producer.sh --broker-list CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01

Topics and Logs

A Kafka cluster stores a Topic's Records as logs: each Record is routed by the partitioning strategy to a partition and stored in that partition's file. Each partition is an ordered, immutable sequence of records that is continually appended to, forming a structured commit log. Within a partition, records are laid out in the order in which they entered the partition, and each Record is assigned an id marking its position in that order. This id is called the record's offset within the partition; offsets start at 0 and increase monotonically.

  • The Kafka cluster durably retains all published records, whether or not they have been consumed, for a configurable retention period. For example, with a retention policy of 2 days, a record remains available for consumption for 2 days after it is published and is then discarded to free space. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is not a problem.

  • In fact, the only metadata retained per consumer is that consumer's offset, i.e. its position in the log. The offset is controlled by the consumer: normally it advances linearly as records are read, but because the consumer owns its position, it can consume records in any order it likes. For example, it can reset to an older offset to reprocess past data, or jump to the most recent record and consume from "now". A minimal sketch of rewinding a consumer follows this list.
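
As a minimal sketch (assumptions: topic01 has a partition 0, and the cluster addresses are the ones used throughout this article), the consumer below rewinds its own position to the beginning of a partition:

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class OffsetRewindDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        List<TopicPartition> parts = Arrays.asList(new TopicPartition("topic01", 0));
        consumer.assign(parts);           // assign() so this consumer fully owns its position
        consumer.seekToBeginning(parts);  // rewind: reprocess the partition from offset 0
        System.out.println("next offset: " + consumer.position(parts.get(0)));
        consumer.close();
    }
}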

Producers

Producers are responsible for sending Records to Topics in the Kafka cluster. When publishing a message, the producer first computes the Record's target partition. There are three partitioning schemes:

① If the user did not specify a partition but did provide a key, the producer computes the Record's partition as hash(key) % number of partitions.
② If the message has neither a key nor an explicit partition, the producer falls back to a round-robin strategy to pick a partition.
③ If a partition is specified explicitly, that partition is used as-is. Once the partition is determined, the producer locates that partition's Leader node and appends the Record to the Topic's log for that partition. The sketch below shows all three cases.
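
The following sketch is illustrative (the topic and key names are assumptions); it constructs one ProducerRecord per routing scheme:

import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitioningDemo {
    public static void main(String[] args) {
        // ① no partition, key given: partition = hash(key) % numPartitions
        ProducerRecord<String, String> byKey =
                new ProducerRecord<>("topic01", "order-42", "payload");

        // ② no partition, no key: the producer picks partitions round-robin
        ProducerRecord<String, String> roundRobin =
                new ProducerRecord<>("topic01", "payload");

        // ③ explicit partition: partition 2 is used as-is; the key is only stored
        ProducerRecord<String, String> explicit =
                new ProducerRecord<>("topic01", 2, "order-42", "payload");

        System.out.println(byKey.partition());    // null: decided later by the partitioner
        System.out.println(explicit.partition()); // 2
    }
}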

Consumers

Consumers are the consuming side of the pipeline. Consumers consume a Topic's messages as members of a Group, and the Kafka server automatically coordinates which partitions each consumer reads, both within a group and across groups.

  • Within a group, the partitions are divided among the members, which guarantees that consumers in the same group never consume the same partition's data twice. As a rule of thumb, the number of consumer instances in a group should be less than or equal to the number of partitions.
  • Across groups, consumption is broadcast: every group receives every Record, and each group consumes the data independently of the others. The snippet after this list illustrates both behaviors.
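
As an illustrative check of both rules (the group names g1 and g2 are assumptions), start three console consumers: the two in group g1 split topic01's partitions between them, while the one in group g2 independently receives every record.

./bin/kafka-console-consumer.sh --bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01 --group g1   # instance 1 of g1
./bin/kafka-console-consumer.sh --bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01 --group g1   # instance 2 of g1
./bin/kafka-console-consumer.sh --bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 --topic topic01 --group g2   # sees all records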

Topic Management (DDL)

Creating a Topic

./bin/kafka-topics.sh
--zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181 --create --topic topic01 --partitions 3 --replication-factor 3

Describing a Topic

./bin/kafka-topics.sh  --describe  --zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181  --topic topic01

Deleting a Topic

./bin/kafka-topics.sh 
--zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181  --delete  --topic topic01

Topic deletion only takes effect when the broker setting delete.topic.enable is true (this is the default in recent Kafka versions; on older versions it must be enabled explicitly, as shown below).
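
To enable deletion explicitly, set the switch in config/server.properties on every broker and restart it:

delete.topic.enable=true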

Listing Topics

./bin/kafka-topics.sh  --zookeeper CentOSA:2181,CentOSB:2181,CentOSC:2181  --list

Kafka API in Practice (JDK 1.8+)

Quick Start

Maven dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

Add log4j.properties

### set log levels ###
log4j.rootLogger = info,stdout 
### console appender ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern =%p %d %c %m %n

Configure host-name-to-IP mappings on Windows (in C:\Windows\System32\drivers\etc\hosts)

192.168.111.128 CentOSA
192.168.111.129 CentOSB
192.168.111.130 CentOSC

Producer

package com.msk.demo01;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.text.DecimalFormat;
import java.util.Properties;

public class KafkaProducerDemo {
    public static void main(String[] args) {
        //1. Configure the producer connection properties
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");

        //2. Create the Kafka producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        //3. Build ProducerRecords
        for (int i=0;i<10;i++){
            DecimalFormat decimalFormat = new DecimalFormat("000");
            ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic04", decimalFormat.format(i), "value" + i);
            //4. Send the message
            producer.send(record);
        }
        //5. Flush the buffer
        producer.flush();
        //6. Close the producer
        producer.close();
    }
}

Consumer

package com.msk.demo01;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        //1. Configure the consumer connection properties
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.GROUP_ID_CONFIG,"group1");


        //2. Create the Kafka consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        //3. Subscribe to topics
        consumer.subscribe(Arrays.asList("topic01"));
        //4. Poll for messages in an infinite loop
        while(true){
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if(records!=null && !records.isEmpty()){
                for (ConsumerRecord<String, String> record : records) {
                    int partition = record.partition();
                    long offset = record.offset();
                    long timestamp = record.timestamp();
                    String key = record.key();
                    String value = record.value();
                    System.out.println(partition+"\t"+offset+"\t"+timestamp+"\t"+key+"\t"+value);
                }
            }
        }
    }
}

Controlling the Read Offset

By default, when a consumer subscribes to a topic with subscribe(), the initial offset strategy is latest: the first time a group subscribes to a topic, it cannot consume messages that were produced before the subscription. The consumer-side parameter auto.offset.reset controls this behavior.

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.GROUP_ID_CONFIG,"group1");

props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"earliest");// default is latest

While the consumer polls data with consumer.poll, the client periodically commits the consumed offsets back to the Kafka server. By default, offsets are committed automatically; if you do not want automatic commits, configure the parameter shown below.

Note: when subscribing with subscribe(), the consumer must specify a group.id so that Kafka can balance load across consumers, split partitions within a group, and broadcast across groups. (This is the recommended approach.)

Default configuration

enable.auto.commit = true
auto.commit.interval.ms = 5000

To disable auto-commit in code:

props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,"false");

Committing Offsets Manually

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        //1. Configure the consumer connection properties
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.GROUP_ID_CONFIG,"group1");

        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);// disable auto-commit; we commit manually below
        //2. Create the Kafka consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        //3. Subscribe to topics
        consumer.subscribe(Arrays.asList("topic01"));
        //4. Poll for messages in an infinite loop
        while(true){
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if(records!=null && !records.isEmpty()){
                Map<TopicPartition, OffsetAndMetadata> offsetMeta=new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    int partition = record.partition();
                    long offset = record.offset();
                    long timestamp = record.timestamp();
                    String key = record.key();
                    String value = record.value();
                    System.out.println(partition+"\t"+offset+"\t"+timestamp+"\t"+key+"\t"+value);

                    TopicPartition part = new TopicPartition("topic03", partition);
                    OffsetAndMetadata oam=new OffsetAndMetadata(offset+1);//設置下一次讀取起始位置
                    offsetMeta.put(part,oam);
                }
                consumer.commitSync(offsetMeta);
            }
        }
    }
}

Assigning Specific Partitions

With assign(), Kafka's consumer-group management no longer applies; in other words, no group ID needs to be configured.

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        //1. Configure the consumer connection properties
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        
        //2. Create the Kafka consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        //3. Assign partitions explicitly
        consumer.assign(Arrays.asList(new TopicPartition("topic01",1)));
        consumer.seek(new TopicPartition("topic01",1),1);// start reading partition 1 from offset 1
        //4. Poll for messages in an infinite loop
        while(true){
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if(records!=null && !records.isEmpty()){
                for (ConsumerRecord<String, String> record : records) {
                    int partition = record.partition();
                    long offset = record.offset();
                    long timestamp = record.timestamp();
                    String key = record.key();
                    String value = record.value();
                    System.out.println(partition+"\t"+offset+"\t"+timestamp+"\t"+key+"\t"+value);
                }
            }
        }
    }
}

Sending/Receiving Objects with Kafka

Producing Objects

public interface Serializer<T> extends Closeable {
   
    void configure(Map<String, ?> configs, boolean isKey);
    //the key method to implement: serialize
    byte[] serialize(String topic, T data);
    default byte[] serialize(String topic, Headers headers, T data) {
        return serialize(topic, data);
    }
    @Override
    void close();
}

Consuming Objects

public interface Deserializer<T> extends Closeable {

    void configure(Map<String, ?> configs, boolean isKey);
    //the key method to implement: deserialize
    T deserialize(String topic, byte[] data);
    default T deserialize(String topic, Headers headers, byte[] data) {
        return deserialize(topic, data);
    }
    @Override
    void close();
}

Implementing Serialization and Deserialization

// SerializationUtils comes from org.apache.commons:commons-lang3
import org.apache.commons.lang3.SerializationUtils;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

import java.io.Serializable;
import java.util.Map;

public class ObjectCodec implements Deserializer<Object>, Serializer<Object> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        return SerializationUtils.serialize((Serializable) data);
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        return SerializationUtils.deserialize(data);
    }

    @Override
    public void close() {

    }
}
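
Below is a minimal, illustrative wiring of ObjectCodec into a producer; the topic name "topic-objects" is an assumption, and the article's User(int, String, boolean) class is assumed to implement java.io.Serializable:

import com.msk.demo05.User;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ObjectProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // plug the codec above in as the value serializer
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ObjectCodec.class.getName());

        KafkaProducer<String, Object> producer = new KafkaProducer<>(props);
        // assumes User implements java.io.Serializable
        producer.send(new ProducerRecord<>("topic-objects", "user-1", new User(1, "name1", true)));
        producer.close();
        // Consumer side: set ObjectCodec as the value deserializer and cast
        // record.value() back to User.
    }
}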

Producer Idempotence

Idempotence means that performing an operation many times has the same final effect as performing it once. Read operations are naturally idempotent, while write operations generally are not. The producer and broker use an acks acknowledgment mechanism by default: if the producer does not receive an acknowledgment within the configured time after sending data, it may resend it, which can duplicate writes. The following parameters improve producer reliability, and enable.idempotence makes those retries safe.

acks = all // 0: no acknowledgment; n: wait for n acknowledgments; -1/all: wait for all in-sync replicas
retries = 3 // number of retries
request.timeout.ms = 3000 // acknowledgment timeout
enable.idempotence = true // enable idempotence
public class KafkaProducerDemo {
    public static void main(String[] args) {
        //1. Configure the producer connection properties
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");

        props.put(ProducerConfig.ACKS_CONFIG,"all");// wait for all replicas to acknowledge
        props.put(ProducerConfig.RETRIES_CONFIG,3);// retry 3 times
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,3000);// wait 3s for an acknowledgment
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);// enable idempotence

        //2. Create the Kafka producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        //3. Build ProducerRecords
        for (int i=15;i<20;i++){
            DecimalFormat decimalFormat = new DecimalFormat("000");
            ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic01", decimalFormat.format(i), "user"+i);
            //4. Send the message
            producer.send(record);
        }
        //5. Flush the buffer
        producer.flush();
        //6. Close the producer
        producer.close();
    }
}

Producer Batching

The producer buffers records so that it can send them in batches. The following settings control when a batch is sent; remember that with batching enabled, you must flush() before closing the producer.

batch.size = 16384 // buffer up to 16KB of records locally before sending
linger.ms = 2000 // how long to linger waiting for more records before sending a batch
public static void main(String[] args) {
    //1. Configure the producer connection properties
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");

    props.put(ProducerConfig.ACKS_CONFIG,"all");
    props.put(ProducerConfig.RETRIES_CONFIG,3);
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,3000);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);

    props.put(ProducerConfig.BATCH_SIZE_CONFIG,1024);// 1KB batch buffer
    props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// set the linger time


    //2. Create the Kafka producer
    KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

    //3. Build ProducerRecords
    for (int i=15;i<20;i++){
        DecimalFormat decimalFormat = new DecimalFormat("000");
        ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic01", decimalFormat.format(i), "user"+i);
        //4. Send the message
        producer.send(record);
    }
    //5. Flush the buffer
    producer.flush();
    //6. Close the producer
    producer.close();
}

Producer Transactions

Kafka producer transactions guarantee atomicity when sending multiple Records: if any single send fails, the whole batch is rolled back. Note that when using Kafka transactions, the consumer's transaction isolation level must be set to read_committed, because Kafka's default isolation level is read_uncommitted.

Enabling Transactions

transactional.id=transaction-1 // must be unique per producer
enable.idempotence=true // transactions require idempotence

Producer-Only Transactions

public class KafkaProducerDemo {
    public static void main(String[] args) {

        //1. Create the Kafka producer
        KafkaProducer<String, String> producer = buildKafkaProducer();

        //2. Initialize and begin the transaction
        producer.initTransactions();
        producer.beginTransaction();
        try {
            for (int i=5;i<10;i++){
                DecimalFormat decimalFormat = new DecimalFormat("000");
                ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic07", decimalFormat.format(i), "user"+i);
                producer.send(record);
            }
            producer.flush();
            //3. Commit the transaction
            producer.commitTransaction();
        } catch (Exception e) {
            System.err.println(e.getMessage());
            // abort the transaction on failure
            producer.abortTransaction();
        }
        //4. Close the producer
        producer.close();
    }

    private static KafkaProducer<String, String> buildKafkaProducer() {
        //0. Configure the producer connection properties
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");

        props.put(ProducerConfig.ACKS_CONFIG,"all");
        props.put(ProducerConfig.RETRIES_CONFIG,3);
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,3000);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);

        props.put(ProducerConfig.BATCH_SIZE_CONFIG,1024);// 1KB batch buffer
        props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// set the linger time

        // enable transactions; the transactional.id must be unique
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG,"transaction-"+UUID.randomUUID().toString());
        return new KafkaProducer<String, String>(props);
    }
}

The consumer side must set its transaction isolation level to read_committed:

public class KafkaConsumerDemo {
    public static void main(String[] args) {

        //1. Create the Kafka consumer
        KafkaConsumer<String, String> consumer = buildKafkaConsumer();

        //2. Subscribe to topics
        consumer.subscribe(Arrays.asList("topic07"));
        //3. Poll for messages in an infinite loop
        while(true){
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if(records!=null && !records.isEmpty()){
                for (ConsumerRecord<String, String> record : records) {
                    int partition = record.partition();
                    long offset = record.offset();
                    long timestamp = record.timestamp();
                    String key = record.key();
                    String value = record.value();
                    System.out.println(partition+"\t"+offset+"\t"+timestamp+"\t"+key+"\t"+value);
                }
            }
        }
    }

    private static KafkaConsumer<String, String> buildKafkaConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.GROUP_ID_CONFIG,"group1");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"earliest");
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG,"read_committed");
        return new KafkaConsumer<String, String>(props);
    }
}

生產者&消費者

package com.msk.demo08;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

import java.util.Properties;
import java.util.UUID;

public class KafkaUtils {
    public static KafkaConsumer<String, String> buildKafkaConsumer(String servers, Class<? extends Deserializer> keyDeserializer,
                                                                   Class<? extends Deserializer> valueDeserializer,String group) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,servers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,keyDeserializer);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,valueDeserializer);
        props.put(ConsumerConfig.GROUP_ID_CONFIG,group);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"earliest");
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG,"read_committed");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);// commit offsets manually (via the producer's transaction)
        return new KafkaConsumer<String, String>(props);
    }
    public static KafkaProducer<String, String> buildKafkaProducer(String servers, Class<? extends Serializer> keySerializer,
                                                                   Class<? extends Serializer> valueSerializer) {
        //1. Configure the producer connection properties
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,servers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,keySerializer);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,valueSerializer);

        props.put(ProducerConfig.ACKS_CONFIG,"all");
        props.put(ProducerConfig.RETRIES_CONFIG,3);
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,3000);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);

        props.put(ProducerConfig.BATCH_SIZE_CONFIG,1024);// 1KB batch buffer
        props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// set the linger time

        // enable transactions
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG,"transaction-"+ UUID.randomUUID().toString());
        return new KafkaProducer<String, String>(props);
    }
}

KafkaProducerAndConsumer

package com.msk.demo08;

import com.msk.demo05.User;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.text.DecimalFormat;
import java.time.Duration;
import java.util.*;

public class KafkaProducerAndConsumer {
    public static void main(String[] args) {

        String servers = "CentOSA:9092,CentOSB:9092,CentOSC:9092";
        String group="g1";
        //1. Create the Kafka producer and consumer
        KafkaProducer<String, String> producer = KafkaUtils.buildKafkaProducer(servers,
                StringSerializer.class, StringSerializer.class);
        KafkaConsumer<String, String> consumer = KafkaUtils.buildKafkaConsumer(servers,
                StringDeserializer.class, StringDeserializer.class,group);

        consumer.subscribe(Arrays.asList("topic08"));
        // initialize transactions
        producer.initTransactions();

        while (true) {
            producer.beginTransaction();
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            try {
                Map<TopicPartition, OffsetAndMetadata> commits = new HashMap<TopicPartition, OffsetAndMetadata>();
                for (ConsumerRecord<String, String> record : records) {
                    TopicPartition partition = new TopicPartition(record.topic(), record.partition());
                    OffsetAndMetadata offsetAndMetadata = new OffsetAndMetadata(record.offset() + 1);
                    commits.put(partition, offsetAndMetadata);

                    System.out.println(record);

                    ProducerRecord<String, String> srecord = new ProducerRecord<String, String>("topic09", record.key(), record.value());
                    producer.send(srecord);
                }
                producer.flush();

                // do not commit through the consumer; the producer commits the consumer's offsets inside the transaction
                producer.sendOffsetsToTransaction(commits,group);
                // commit the transaction (records and offsets, atomically)
                producer.commitTransaction();
            } catch (Exception e) {
                //System.err.println(e.getMessage());
                producer.abortTransaction();
            }
        }
    }
}

Integrating Kafka with Spring Boot

  • pom.xml
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>
    <kafka.version>2.2.0</kafka.version>
</properties>

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.5.RELEASE</version>
</parent>

<dependencies>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.2.5.RELEASE</version>
    </dependency>
    <!-- kafka client處理 -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>${kafka.version}</version>
    </dependency>
</dependencies>
  • application.properties
server.port=8888

# producer
spring.kafka.producer.bootstrap-servers=CentOSA:9092,CentOSB:9092,CentOSC:9092
spring.kafka.producer.acks=all
spring.kafka.producer.retries=1
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

# consumer
spring.kafka.consumer.bootstrap-servers=CentOSA:9092,CentOSB:9092,CentOSC:9092
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
  • Code
@SpringBootApplication
@EnableScheduling
public class KafkaApplicationDemo {
    @Autowired
    private KafkaTemplate kafkaTemplate;

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplicationDemo.class,args);
    }
    @Scheduled(cron = "0/1 * * * * ?")
    public void send(){
        String[] messages = new String[]{"this is a demo","hello world","hello boy"};
        String message = messages[new Random().nextInt(messages.length)];
        ListenableFuture future = kafkaTemplate.send("topic07", message);
        future.addCallback(o -> System.out.println("send succeeded: " + message), throwable -> System.out.println("send failed: " + message));
    }

    @KafkaListener(topics = "topic07",id="g1")
    public void processMessage(ConsumerRecord<?, ?> record) {
        System.out.println("record:"+record);
    }
}