Using Debezium for CDC (Part 2)


Introduction

Debezium itself was introduced in Using Debezium for CDC (Part 1); see that post for the background. This post covers syncing data with a Kafka connector. Kafka Connect actually also provides other sinks (Kafka Connect JDBC) for syncing data, but they lack delete events, so the Debezium MySQL CDC Connector approach was chosen here. The Kafka messages in this post are serialized with Avro.

Process

Step 1: Preparation

Using Kafka as the messaging middleware requires the supporting services, in particular Schema Registry to manage schemas. Since my machine's memory is limited I did not start them with Docker; if you have enough memory, Docker works too. So everything here runs as a local installation. I won't repeat the download, installation, deployment, and environment-variable configuration; see the official documentation.
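
If you go the local route, the environment setup boils down to something like this (the install path is a placeholder):

# Point CONFLUENT_HOME at the unpacked Confluent Platform directory.
export CONFLUENT_HOME=/opt/confluent-5.3.0
export PATH=$PATH:$CONFLUENT_HOME/bin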

Step 2: Start the Kafka stack

Enter the Confluent installation directory and run bin/confluent start.
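
For reference, a minimal sketch (the exact service list depends on your Confluent Platform version; newer CLIs use confluent local services start instead):

# Starts ZooKeeper, Kafka, Schema Registry, Kafka Connect, Control Center, etc.
bin/confluent start

# Check that every service reports [UP].
bin/confluent status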


Step 3: Create the Kafka topic

You can create the topic with the Kafka CLI or through Confluent Control Center at http://localhost:9021. We are syncing the same table as in the previous post, so create the topic dbserver1.inventory.demo.
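
A minimal CLI sketch, assuming a single-broker local setup (partition and replication settings are placeholders):

# Create the topic that will carry change events for the demo table.
bin/kafka-topics --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic dbserver1.inventory.demo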


Step 4: Create the Kafka connector

You can create it through the Kafka Connect REST API or through Confluent Control Center.

See the Kafka Connect REST API reference for the available commands.

The quickest way is curl; the configuration file is below, and the curl command to register it follows the config.

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "localhost",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "decimal.handling.mode": "double",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "dbhistory.inventory"
  }
}
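
Assuming the JSON above is saved as inventory-connector.json (the filename is my choice), registering the connector is a single POST to the Connect REST API:

curl -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  http://localhost:8083/connectors/ \
  -d @inventory-connector.json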

Once the connector is created, you can verify it from the command line or in Control Center.

Command: curl http://localhost:8083/connectors/inventory-connector


Step 5: Start the sync application

Configuration

spring:
  application:
    name: data-center
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/inventory_back?useUnicode=true&characterEncoding=utf-8&useSSL=true&serverTimezone=UTC
    username: debe
    password: 123456
  jpa:
    show-sql: true
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
#    time-zone: UTC
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: debezium-kafka-connector
      key-deserializer: "io.confluent.kafka.serializers.KafkaAvroDeserializer"
      value-deserializer: "io.confluent.kafka.serializers.KafkaAvroDeserializer"
      properties:
        schema.registry.url: http://localhost:8081
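
The KafkaAvroDeserializer referenced above is a Confluent artifact, so the consumer application needs Confluent's Maven repository and serializer dependency; a sketch, with the version number being an assumption for the 5.x platform used here:

<!-- Confluent artifacts are hosted in Confluent's own repository. -->
<repositories>
  <repository>
    <id>confluent</id>
    <url>https://packages.confluent.io/maven/</url>
  </repository>
</repositories>

<!-- Provides io.confluent.kafka.serializers.KafkaAvroDeserializer. -->
<dependency>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-avro-serializer</artifactId>
  <version>5.3.0</version> <!-- assumed version -->
</dependency>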

Kafka consumer

The processing flow is the same as in the previous post, except that DDL and DML are handled by two separate listeners.

package com.example.kakfa.avro;

import com.example.kakfa.avro.sql.SqlProvider;
import com.example.kakfa.avro.sql.SqlProviderFactory;
import io.debezium.data.Envelope;
import lombok.extern.slf4j.Slf4j;
import org.apache.avro.generic.GenericData;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Objects;
import java.util.Optional;


@Slf4j
@Component
public class KafkaAvroConsumerRunner {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Autowired
    private NamedParameterJdbcTemplate namedTemplate;

    @KafkaListener(id = "dbserver1-ddl-consumer", topics = "dbserver1")
    public void listenerUser(ConsumerRecord<GenericData.Record, GenericData.Record> record) throws Exception {
        GenericData.Record key = record.key();
        GenericData.Record value = record.value();
        log.info("Received record: {}", record);
        log.info("Received record: key {}", key);
        log.info("Received record: value {}", value);

        String databaseName = Optional.ofNullable(value.get("databaseName")).map(Object::toString).orElse(null);
        String ddl = Optional.ofNullable(value.get("ddl")).map(Object::toString).orElse(null);

        if (StringUtils.isBlank(ddl)) {
            return;
        }
        handleDDL(ddl, databaseName);
    }

    /**
     * Execute a DDL statement against the target database.
     *
     * @param ddl the DDL statement captured by Debezium
     * @param db  the source database name, stripped from the statement before execution
     */
    private void handleDDL(String ddl, String db) {
        log.info("DDL statement: {}", ddl);
        try {
            if (StringUtils.isNotBlank(db)) {
                ddl = ddl.replace(db + ".", "");
                ddl = ddl.replace("`" + db + "`.", "");
            }

            jdbcTemplate.execute(ddl);
        } catch (Exception e) {
            log.error("數據庫操作DDL語句失敗,", e);
        }
    }

    @KafkaListener(id = "dbserver1-dml-consumer", topicPattern = "dbserver1.inventory.*")
    public void listenerAvro(ConsumerRecord<GenericData.Record, GenericData.Record> record) throws Exception {
        GenericData.Record key = record.key();
        GenericData.Record value = record.value();
        log.info("Received record: {}", record);
        log.info("Received record: key {}", key);
        log.info("Received record: value {}", value);

        // Tombstone records (null value) follow delete events; nothing to apply.
        if (Objects.isNull(value)) {
            return;
        }

        GenericData.Record source = (GenericData.Record) value.get("source");
        String table = source.get("table").toString();
        Envelope.Operation operation = Envelope.Operation.forCode(value.get("op").toString());

        handleDML(key, value, table, operation);
    }

    private void handleDML(GenericData.Record key, GenericData.Record value,
                           String table, Envelope.Operation operation) {
        SqlProvider provider = SqlProviderFactory.getProvider(operation);
        if (Objects.isNull(provider)) {
            log.error("沒有找到sql處理器提供者.");
            return;
        }

        String sql = provider.getSql(key, value, table);
        if (StringUtils.isBlank(sql)) {
            log.error("找不到sql.");
            return;
        }

        try {
            log.info("DML statement: {}", sql);
            namedTemplate.update(sql, provider.getSqlParameterMap());
        } catch (Exception e) {
            log.error("DML operation failed", e);
        }
    }

}
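
SqlProvider and SqlProviderFactory are referenced by the consumer but not shown in this post. A minimal sketch of the contract they would have to satisfy, with the DELETE case spelled out; the "id" primary-key column and the class shapes are my assumptions, not the original implementation:

package com.example.kakfa.avro.sql;

import io.debezium.data.Envelope;
import org.apache.avro.generic.GenericData;

import java.util.HashMap;
import java.util.Map;

// Contract the consumer relies on: turn a Debezium change record into a
// named-parameter SQL statement plus the parameter map that goes with it.
interface SqlProvider {
    String getSql(GenericData.Record key, GenericData.Record value, String table);
    Map<String, Object> getSqlParameterMap();
}

// Picks a provider by Debezium operation; DELETE is sketched inline,
// CREATE/READ and UPDATE would follow the same pattern.
class SqlProviderFactory {

    static SqlProvider getProvider(Envelope.Operation operation) {
        if (operation == Envelope.Operation.DELETE) {
            return new SqlProvider() {
                private final Map<String, Object> params = new HashMap<>();

                @Override
                public String getSql(GenericData.Record key, GenericData.Record value, String table) {
                    // The record key carries the row's primary-key fields;
                    // a single "id" column is an assumption.
                    params.put("id", key.get("id"));
                    return "DELETE FROM " + table + " WHERE id = :id";
                }

                @Override
                public Map<String, Object> getSqlParameterMap() {
                    return params;
                }
            };
        }
        // INSERT (op "c"/"r") and UPDATE (op "u") providers would build their
        // SQL from the "after" struct in the record value.
        return null;
    }
}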

Data flow

All that remains is to insert, update, and delete rows in the demo table of the inventory database; the corresponding rows in the demo table of inventory_back change accordingly.
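
A quick smoke test, assuming the demo table has an id primary key and a name column (the actual schema comes from Part 1):

-- Run against the source inventory database; each statement should be
-- mirrored into inventory_back.demo by the consumer.
INSERT INTO inventory.demo (id, name) VALUES (1, 'debezium');
UPDATE inventory.demo SET name = 'debezium-cdc' WHERE id = 1;
DELETE FROM inventory.demo WHERE id = 1;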
