Spring Boot Basics (5): Integrating Kafka and Using the Native API

Table of Contents

Using the native Kafka API

Creating a producer

Sending messages with the producer: fire-and-forget

Sending messages with the producer: synchronous send

Sending messages with the producer: asynchronous send

Sending messages with the producer: asynchronous send with a custom partitioner

Creating a consumer: configuration

Consuming messages: automatic offset commits

Consuming messages: manual synchronous offset commits

Consuming messages: manual asynchronous offset commits

Consuming messages: asynchronous offset commits with a callback

Consuming messages: mixing synchronous and asynchronous commits

Integrating Kafka with Spring Boot

Configuring Spring Kafka

Application class

Sending messages with the producer


Kafka is a high-performance, highly stable message queue; I won't repeat the general introduction here, since it is easy to look up.

Installing Kafka, starting ZooKeeper and Kafka, and creating a topic are covered in the article below (note: create the topic with multiple partitions, which the custom-partitioner example later relies on):
Installing Kafka on a Tencent Cloud Linux server

Using the native Kafka API

Add the Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.3.0</version>
</dependency>

Note that the version of the jar you pull in should match the Kafka version installed on the server; check the official site if you are unsure.
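If you only need the plain producer/consumer API used in the examples below, a slimmer option (my addition, not from the original post) is to depend on kafka-clients directly, again matching the broker version:

<!-- assumption: kafka-clients alone covers the producer/consumer API used below -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.0</version>
</dependency>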

Creating a producer

private static KafkaProducer<String, String> producer;

static {
    Properties properties = new Properties();
    properties.put("bootstrap.servers", "VM_0_16_centos:9092"); // hostname or IP both work
    properties.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    properties.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
/*  // custom partitioner: decides which partition each record goes to;
    // introduced later in this post, so it stays commented out here
    properties.put("partitioner.class",
            "kafkastudy.CustomPartitioner"); */

    producer = new KafkaProducer<>(properties);
}

Sending messages with the producer: fire-and-forget

private static void sendMessageForgetResult() {
    ProducerRecord<String, String> record = new ProducerRecord<>(
            "csdn_test", "name", "ForgetResult"); // the key ("name") can also describe the value
    producer.send(record);
    producer.close();
}

Fire-and-forget: just call send() and never check the result. Because Kafka is highly available, the message is written in most cases, but in abnormal situations it can be lost.

Sending messages with the producer: synchronous send

private static void sendMessageSync() throws Exception {
    ProducerRecord<String, String> record = new ProducerRecord<>(
            "csdn_test", "name", "sync");
    RecordMetadata result = producer.send(record).get();
    System.out.println(result.topic());     // topic the record was written to
    System.out.println(result.partition()); // partition it landed on
    System.out.println(result.offset());    // offset of the record within that partition

    producer.close();
}

Synchronous send: send() returns a Future, and calling its get() blocks until the broker responds, letting us check whether the message was sent successfully.
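As a small sketch (not from the original code; it reuses the static producer above), the Future can also be bounded with a timeout so a slow broker cannot block the caller forever:

private static void sendMessageSyncWithTimeout() {
    ProducerRecord<String, String> record = new ProducerRecord<>(
            "csdn_test", "name", "sync");
    try {
        // wait at most 10 seconds for the broker's acknowledgement
        RecordMetadata result = producer.send(record)
                .get(10, java.util.concurrent.TimeUnit.SECONDS);
        System.out.println("written to partition " + result.partition()
                + " at offset " + result.offset());
    } catch (java.util.concurrent.ExecutionException e) {
        // wraps the broker-side failure, e.g. a network or serialization error
        e.printStackTrace();
    } catch (InterruptedException | java.util.concurrent.TimeoutException e) {
        e.printStackTrace();
    }
}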

Sending messages with the producer: asynchronous send

private static void sendMessageCallbacktwo() {
    ProducerRecord<String, String> record = new ProducerRecord<>(
            "csdn_test", "name", "callback");
    producer.send(record, (recordMetadata, e) -> {
        if (e != null) {
            e.printStackTrace();
            return;
        }
        System.out.println(recordMetadata.topic());
        System.out.println(recordMetadata.partition());
        System.out.println(recordMetadata.offset());
    }); // see the Callback interface for details
    producer.close();
}

Asynchronous send: pass a callback to send(); it is invoked once the broker's result comes back.

Sending messages with the producer: asynchronous send with a custom partitioner

1. Create the Kafka topic with multiple partitions.

2. Implement the Partitioner interface (besides partition(), the interface also requires configure() and close()):

public class CustomPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitionInfos = cluster.partitionsForTopic(topic);
        int numPartitions = partitionInfos.size();
        // we partition by key, so every record must carry a String key
        if (keyBytes == null || !(key instanceof String))
            throw new InvalidRecordException("kafka message must have key");
        if (numPartitions == 1) return 0;
        // records keyed "name" always go to the last partition
        if (key.equals("name")) return numPartitions - 1;
        // everything else is hashed across the remaining partitions
        return Math.abs(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

3. Register the custom partitioner when creating the KafkaProducer:

// custom partitioner
properties.put("partitioner.class",
        "kafkastudy.CustomPartitioner");

4. Send asynchronously:

private static void sendMessageCallback() {
    ProducerRecord<String, String> record = new ProducerRecord<>(
            "csdn_test", "name", "callback");
    producer.send(record, new MyProducerCallback());
    record = new ProducerRecord<>("csdn_test", "name-x", "callback");
    producer.send(record, new MyProducerCallback());
    record = new ProducerRecord<>("csdn_test", "name-y", "callback");
    producer.send(record, new MyProducerCallback());
    producer.close();
}

// callback invoked once the broker acknowledges (or rejects) each record
private static class MyProducerCallback implements Callback {
    @Override
    public void onCompletion(RecordMetadata recordMetadata, Exception e) {
        if (e != null) {
            e.printStackTrace();
            return;
        }
        System.out.println(recordMetadata.topic());
        System.out.println(recordMetadata.partition());
        System.out.println(recordMetadata.offset());
    }
}

Creating a consumer: configuration

private static KafkaConsumer<String, String> consumer;
private static Properties properties;

static {
    properties = new Properties();
    properties.put("bootstrap.servers", "VM_0_16_centos:9092");
    properties.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
    properties.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
    properties.put("group.id", "KafkaStudy"); // consumer group id; Kafka delivers each partition to one consumer in the group
}
Each example below configures offset commits differently, so those properties are set, and the KafkaConsumer is created, inside each method rather than in the static block. (Mostly for convenience.)

Consuming messages: automatic offset commits

private static void generalConsumerMessageAutoCommit() {
    properties.put("enable.auto.commit", true);
    consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Collections.singleton("csdn_test"));
    while (true) {
        AtomicBoolean flag = new AtomicBoolean(true);
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5));
        records.forEach(x -> {
            System.out.println(String.format("topic = %s, partition = %s, key = %s, value = %s",
                    x.topic(), x.partition(), x.key(), x.value()));
            // stop consuming once a record with value "done" arrives
            if ("done".equals(x.value())) {
                flag.set(false);
            }
        });
        if (!flag.get()) {
            break;
        }
    }
}
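With enable.auto.commit set to true, the consumer commits the offsets returned by poll() in the background at a fixed interval (auto.commit.interval.ms, 5 seconds by default). This is the simplest mode, but if the consumer crashes between two commits, the records processed since the last commit are delivered again after the rebalance.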

Consuming messages: manual synchronous offset commits

private static void generalConsumeMessageSyncCommit() {

    properties.put("enable.auto.commit", false);
    consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Collections.singletonList("csdn_test"));

    while (true) {
        boolean flag = true;

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format(
                    "topic = %s, partition = %s, key = %s, value = %s",
                    record.topic(), record.partition(),
                    record.key(), record.value()
            ));
            if ("done".equals(record.value())) {
                flag = false;
            }
        }

        try {
            // blocks until the commit succeeds or fails unrecoverably
            consumer.commitSync();
        } catch (CommitFailedException ex) {
            System.out.println("commit failed error: "
                    + ex.getMessage());
        }

        if (!flag) {
            break;
        }
    }
}
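Synchronous commit: commitSync() commits the latest offsets returned by the last poll() and blocks until the broker responds, retrying on recoverable errors; CommitFailedException is only thrown when retrying cannot help, for example after a rebalance.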

Consuming messages: manual asynchronous offset commits

private static void generalConsumeMessageAsyncCommit() {

    properties.put("enable.auto.commit", false);
    consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Collections.singletonList("csdn_test"));

    while (true) {
        boolean flag = true;

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format(
                    "topic = %s, partition = %s, key = %s, value = %s",
                    record.topic(), record.partition(),
                    record.key(), record.value()
            ));
            if ("done".equals(record.value())) {
                flag = false;
            }
        }

        // fire-and-forget commit: returns immediately, no retries
        consumer.commitAsync();

        if (!flag) {
            break;
        }
    }
}
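Asynchronous commit: commitAsync() sends the commit request and returns immediately. Unlike commitSync() it does not retry, because by the time a retry went through, a later commit may already have recorded a larger offset, and re-committing the older one would effectively roll it back.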

Consuming messages: asynchronous offset commits with a callback

private static void generalConsumeMessageAsyncCommitWithCallback() {

    properties.put("enable.auto.commit", false);
    consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Collections.singletonList("csdn_test"));

    while (true) {
        boolean flag = true;

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format(
                    "topic = %s, partition = %s, key = %s, value = %s",
                    record.topic(), record.partition(),
                    record.key(), record.value()
            ));
            if ("done".equals(record.value())) {
                flag = false;
            }
        }

        // commit once per poll, not once per record; the callback
        // reports the outcome when the broker responds
        consumer.commitAsync((map, e) -> {
            if (e != null) {
                System.out.println("commit failed for offsets: " +
                        e.getMessage());
            }
        });

        if (!flag) {
            break;
        }
    }
}
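The callback receives the committed offsets and an exception, if any; here it only logs failures. If you do want to retry inside the callback, a common trick is to tag each commit with a monotonically increasing sequence number, so an older commit is never retried after a newer one has succeeded.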

Consuming messages: mixing synchronous and asynchronous commits

private static void mixSyncAndAsyncCommit() {

    properties.put("enable.auto.commit", false);
    consumer = new KafkaConsumer<>(properties);
    consumer.subscribe(Collections.singletonList("csdn_test"));
    try {
        while (true) {
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(100));

            for (ConsumerRecord<String, String> record : records) {
                System.out.println(String.format(
                        "topic = %s, partition = %s, key = %s, value = %s",
                        record.topic(), record.partition(),
                        record.key(), record.value()
                ));
            }
            // fast, non-blocking commit on the happy path
            consumer.commitAsync();
        }
    } catch (Exception ex) {
        System.out.println("commit async error: " + ex.getMessage());
    } finally {
        try {
            // final blocking commit before shutting down
            consumer.commitSync();
        } finally {
            consumer.close();
        }
    }
}
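This pattern takes the best of both: commitAsync() keeps throughput high in the normal path (an occasional failed commit is covered by the next one), while the commitSync() in the finally block makes one last, retried attempt to save the final offsets before the consumer closes.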

Integrating Kafka with Spring Boot

Add the Maven dependencies:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.2.RELEASE</version>
</parent>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
</dependency>

Mind the versions of the jars you pull in!

Configuring Spring Kafka

spring:
  kafka:
    bootstrap-servers: VM_0_16_centos:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: test
      enable-auto-commit: true
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Application class

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Sending messages with the producer

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class}, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class SpringKafkaTest {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    public void producerTest() {
        // send(topic, key, value)
        kafkaTemplate.send("csdn_test", "name", "Springkafka");
    }
}

Only one example is shown for the Spring Boot integration. Unlike the native API there is very little code to write: a few entries in the Spring Boot configuration file are all it takes. See the Spring Kafka documentation for more.
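The example above only covers the producer side; for completeness, a minimal consumer sketch (my addition, assuming the YAML configuration above) is a Spring bean with a @KafkaListener method, which spring-kafka wires into a listener container automatically:

@Component
public class SpringKafkaConsumer {

    // invoked for every record on csdn_test; the group id falls back to
    // spring.kafka.consumer.group-id from the configuration above
    @KafkaListener(topics = "csdn_test")
    public void listen(ConsumerRecord<String, String> record) {
        System.out.println(String.format("topic = %s, key = %s, value = %s",
                record.topic(), record.key(), record.value()));
    }
}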

 
