Seata Learning Notes II: A First Try at AT Mode with Spring Boot + Dubbo

Seata officially provides demo sources for all kinds of scenarios; if you are interested, pull them down and try them yourself. Since my daily working environment is Spring Boot + Dubbo + MySQL, for practicality and hands-on practice I did not run the official demos. Instead, I chose to build a demo on top of my everyday project stack to get familiar with Seata's various usage scenarios, hoping to uncover and solve whatever problems come up along the way.

First, the simplest piece: integrating AT mode.

Tools and Environment

IDEA 2019, Spring Boot 2.3.0.RELEASE, JDK 8, Dubbo 2.7.1, Maven 3.6.2, MySQL 8.0.20.

Official Example

The business logic is a user purchasing a commodity. The whole flow is backed by three microservices:

  • Storage service: deduct the stored quantity for a given commodity.
  • Order service: create an order according to the purchase request.
  • Account service: debit the balance from the user's account.

Architecture Diagram


Storage Service

public interface StorageService {

    /**
     * Deduct the given count from storage
     */
    void deduct(String commodityCode, int count);
}

Order Service

public interface OrderService {

    /**
     * Create an order
     */
    Order create(String userId, String commodityCode, int orderCount);
}

Account Service

public interface AccountService {

    /**
     * Debit the amount from the user's account
     */
    void debit(String userId, int money);
}

Main Business Logic

public class BusinessServiceImpl implements BusinessService {

    private StorageService storageService;

    private OrderService orderService;

    /**
     * Purchase
     */
    public void purchase(String userId, String commodityCode, int orderCount) {

        storageService.deduct(commodityCode, orderCount);

        orderService.create(userId, commodityCode, orderCount);
    }
}

public class OrderServiceImpl implements OrderService {

    private OrderDAO orderDAO;

    private AccountService accountService;

    public Order create(String userId, String commodityCode, int orderCount) {

        int orderMoney = calculate(commodityCode, orderCount);

        accountService.debit(userId, orderMoney);

        Order order = new Order();
        order.userId = userId;
        order.commodityCode = commodityCode;
        order.count = orderCount;
        order.money = orderMoney;

        // INSERT INTO orders ...
        return orderDAO.insert(order);
    }
}

Seata's Distributed Transaction Solution

We only need to put a @GlobalTransactional annotation on the business method:


@GlobalTransactional
public void purchase(String userId, String commodityCode, int orderCount) {
    ......
}

Database Preparation

Create the databases order, account, and storage, and in each create the corresponding business tables plus the undo_log table. (See the attachment for the SQL.)
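For reference, the undo_log table is where Seata AT stores the rollback images for each branch, and it must exist in every business database. The script below matches the MySQL script commonly shipped with Seata 1.x; treat the attachment (or the official scripts for your release) as authoritative:

CREATE TABLE `undo_log` (
  `id`            BIGINT(20)   NOT NULL AUTO_INCREMENT,
  `branch_id`     BIGINT(20)   NOT NULL,
  `xid`           VARCHAR(100) NOT NULL,
  `context`       VARCHAR(128) NOT NULL,
  `rollback_info` LONGBLOB     NOT NULL,
  `log_status`    INT(11)      NOT NULL,
  `log_created`   DATETIME     NOT NULL,
  `log_modified`  DATETIME     NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;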

Service Setup

1. Create a Spring Boot POM project zhengcs-seata

2. Create module zhengcs-seata-account (a Spring Boot project)

Turn zhengcs-seata-account into a child module of zhengcs-seata.
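Concretely, zhengcs-seata/pom.xml uses pom packaging and lists its child modules, roughly as sketched below (the full module list reflects the modules created later in this post):

<packaging>pom</packaging>
<modules>
    <module>zhengcs-seata-interface</module>
    <module>zhengcs-seata-account</module>
    <module>zhengcs-seata-order</module>
    <module>zhengcs-seata-storage</module>
    <module>zhengcs-seata-busi</module>
</modules>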

Set up a complete Maven application based on Spring Boot + Dubbo + MySQL.

1) Configure the pom file

<!--dubbo-->
<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-spring-boot-starter</artifactId>
    <version>2.7.1</version>
</dependency>

<!--zk-->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.13.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.13.0</version>
</dependency>

<!--DB-->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.10</version>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>1.3.2</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>

2) Configure application.yml

server:
  port: 8083
spring:
  application:
    name: zhengcs-seata-account
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/account
    username: test
    password: 123456
    filters: stat,slf4j
    maxActive: 5
    maxWait: 60000
    minIdle: 1
    initialSize: 1
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: select 1
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxOpenPreparedStatements: 20

dubbo:
  application:
    name: zhengcs-seata-account
  protocol:
    name: dubbo
    port: 20883
  registry:
    address: N/A
    check: false
  consumer:
    check: false
    timeout: 10000

To keep costs down, no ZooKeeper environment is set up here; Dubbo RPC calls use direct point-to-point connections instead, with registry.address set to N/A.

3) Generate the mapper, XML, service, entity, etc. for the account table

Do this however you are used to; for simplicity, copy them straight from the official demo source. A sketch of the debit statement follows below.
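For illustration only, the account mapper's debit statement might look like this sketch (annotation style is used here for brevity even though this demo wires XML mappers; the account_tbl table and user_id/money columns follow the official seata-samples and are assumptions here). Note that the business SQL stays a plain UPDATE with no compensation logic; the Seata data source proxy configured later records the undo log automatically:

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;

@Mapper
public interface AccountMapper {

    // Plain business SQL; Seata's data source proxy takes care of
    // capturing before/after images for rollback.
    @Update("UPDATE account_tbl SET money = money - #{money} WHERE user_id = #{userId}")
    int debit(@Param("userId") String userId, @Param("money") int money);
}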

4) Configure DBConfig

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class DBConfig {

    // Bind the spring.datasource.* properties to a Druid connection pool
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    // Build the MyBatis session factory from the mapper XML files on the classpath
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:/mapper/*.xml"));
        factoryBean.setConfigLocation(new ClassPathResource("mybatis-configuration.xml"));
        return factoryBean.getObject();
    }

    @Bean
    public DataSourceTransactionManager dataSourceTransactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

5) Register the Dubbo interface

To handle Dubbo interfaces uniformly, a separate Maven module, zhengcs-seata-interface, manages the definitions of all Dubbo interfaces in the project.

The account project implements the DubboAccountService interface.
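A minimal sketch of what the account-side implementation might look like with Dubbo 2.7 annotations (the request/result types, method name, and Result.success helper are assumptions; the real definitions live in zhengcs-seata-interface):

import org.apache.dubbo.config.annotation.Service;
import org.springframework.beans.factory.annotation.Autowired;

@Service // Dubbo's @Service: exports this bean as a Dubbo provider
public class DubboAccountServiceImpl implements DubboAccountService {

    @Autowired
    private AccountService accountService; // local business service backed by MyBatis

    @Override
    public Result<Boolean> decreaseAccount(AccountRequest request) {
        accountService.debit(request.getUserId(), request.getMoney());
        return Result.success(true);
    }
}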

6) Configure the startup class
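The boot class is the standard Spring Boot entry point; a sketch (the class name and mapper package are assumptions):

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("com.zhengcs.seata.account.mapper") // package name is an assumption
public class AccountApplication {
    public static void main(String[] args) {
        SpringApplication.run(AccountApplication.class, args);
    }
}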

At this point a complete Spring Boot + Dubbo project is in place; run a quick test to check that the project starts correctly.

Integrating Seata AT

Integrating Seata AT is today's main event. Seata offers different integration approaches for different environments. The annoying part is that the official demo source covers a great many scenarios without much detailed documentation, so you have to read the demos and draw your own conclusions. Seata also exposes a large number of parameters, and the registry and configuration in particular support many third-party frameworks. For a first hands-on demo, keep everything as simple as possible: get the skeleton standing and the services running first, then layer the more advanced pieces on top.

1) Start the TC server

Usage: sh seata-server.sh(for linux and mac) or cmd seata-server.bat(for windows) [options]
  Options:
    --host, -h
      The host to bind.
      Default: 0.0.0.0
    --port, -p
      The port to listen.
      Default: 8091
    --storeMode, -m
      log store mode : file、db
      Default: file
    --help

e.g.

sh seata-server.sh -p 8091 -h 127.0.0.1 -m file


There is no need to worry about the server-side parameters at this point; start it with the default configuration and focus on the client side first.

2) Introduce Seata into the services

For Spring Boot there are two main ways to bring Seata in: seata-all and seata-spring-boot-starter. Each is introduced in turn.

seata-all:

<!--seata-->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-all</artifactId>
    <version>1.2.0</version>
</dependency>

<!--jackson-->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.11.0</version>
</dependency>

seata-all is Seata's traditional integration path and must be paired with .conf configuration files. registry.conf is the entry point of Seata's configuration, with the following contents:

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "file"

  nacos {
    application = "seata-server"
    serverAddr = "localhost"
    namespace = ""
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
    password = ""
    timeout = "0"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

The most important settings here are registry.type and config.type, which specify the registry type and the configuration-center type respectively. The client and server configurations must match; both use the default, file, here. For the meaning of each parameter, see the official documentation.

When the type is file, note the file.name parameter: it points to file.conf, so a file.conf file is also required. file.conf configures three areas:

  • transport: this section maps to the NettyServerConfig class and defines the Netty-related parameters; TM and RM communicate with seata-server over Netty.
transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  #thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    boss-thread-size = 1
    #auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}
  • service
service {
  #vgroup->rgroup
  vgroup_mapping.my_test_tx_group = "default"
  #only support single node
  #the address the client uses to connect to the TC
  default.grouplist = "127.0.0.1:8091"
  #degrade current not support
  enableDegrade = false
  #disable
  disable = false
  #unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}
  • client
client {
  # upper limit of the RM's buffer for commit notifications received from the TC
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

Configure the data source proxy

Seata AT works by proxying the JDBC data source, parsing the business SQL, and generating the corresponding undo_log records, so a proxy data source must be configured.

    // io.seata.rm.datasource.DataSourceProxy
    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        // hand MyBatis the proxied data source so every statement goes through Seata
        factoryBean.setDataSource(dataSourceProxy);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:/mapper/*.xml"));
        factoryBean.setConfigLocation(new ClassPathResource("mybatis-configuration.xml"));
        return factoryBean.getObject();
    }

In the DBConfig configuration class, add a DataSourceProxy wrapping dataSource and inject it into the SqlSessionFactory instance.

Configure the global transaction scanner GlobalTransactionScanner

    // io.seata.spring.annotation.GlobalTransactionScanner
    @Bean
    public GlobalTransactionScanner globalTransactionScanner() {
        return new GlobalTransactionScanner("zhengcs-seata-account", "my_test_tx_group");
    }

GlobalTransactionScanner is Seata's configuration entry point and client bootstrap class; both TM and RM are initialized there, and the source is worth a read. Its two constructor arguments are the application ID and the transaction service group, respectively. The transaction group must match the child key under service.vgroup_mapping in file.conf; if not configured, it defaults to the value of spring.application.name plus "-fescar-service-group". Given the group name "my_test_tx_group", Seata looks up "service.vgroupMapping.my_test_tx_group" to get the TC cluster name, then builds "service." + clusterName + ".grouplist" to resolve the actual TC server address.

This chain of steps is somewhat convoluted, and file-based configuration clashes with the usual habit of putting everything in application.yml. So can Seata be configured directly in application.yml? It can, and that is exactly what seata-spring-boot-starter is for.

seata-spring-boot-starter configuration

seata-spring-boot-starter, added after Seata 1.0, fully automates the integration of Seata with Spring Boot, including automatic data source proxying and GlobalTransactionScanner initialization. (With the automatic proxy enabled, the manual DataSourceProxy bean from the seata-all approach is no longer needed.)

<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.2.0</version>
</dependency>

Configure application.yml

seata:
  enabled: true
  application-id: account-service
  tx-service-group: my_test_tx_group
  #enable-auto-data-source-proxy: true
  #use-jdk-proxy: false
  client:
    rm:
      async-commit-buffer-limit: 1000
      report-retry-count: 5
      table-meta-check-enable: false
      report-success-enable: false
      lock:
        retry-interval: 10
        retry-times: 30
        retry-policy-branch-rollback-on-conflict: true
    tm:
      commit-retry-count: 5
      rollback-retry-count: 5
    undo:
      data-validation: true
      log-serialization: jackson
      log-table: undo_log
    log:
      exceptionRate: 100
  service:
    vgroup-mapping:
      my_test_tx_group: default
    default:
      grouplist: 127.0.0.1:8091
    #enable-degrade: false
    #disable-global-transaction: false
  transport:
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      server-executor-thread-prefix: NettyServerBizHandler
      share-boss-worker: false
      client-selector-thread-prefix: NettyClientSelector
      client-selector-thread-size: 1
      client-worker-thread-prefix: NettyClientWorkerThread
      worker-thread-size: default
      boss-thread-size: 1
    type: TCP
    server: NIO
    heartbeat: true
    serialization: seata
    compressor: none
    enable-client-batch-send-request: true
  config:
    type: file
  registry:
    type: file

With just these two steps, Seata is configured.

3) Enable the global transaction

Enable a global transaction with the @GlobalTransactional annotation.

Local startup:

A service can play both the TM role and the RM role; which one it plays in a given global transaction depends on where the @GlobalTransactional annotation sits.

3. Following the steps above, create module zhengcs-seata-order (a Spring Boot project)

4. Following the steps above, create module zhengcs-seata-storage (a Spring Boot project)

5. Create module zhengcs-seata-busi (a Spring Boot project)

zhengcs-seata-busi exposes the service to the outside world and simulates the ordering process.

import com.alibaba.fastjson.JSON;
import io.seata.core.context.RootContext;
import io.seata.spring.annotation.GlobalTransactional;
import lombok.extern.slf4j.Slf4j;
import org.apache.dubbo.config.annotation.Reference;
import org.springframework.stereotype.Service;

@Service
@Slf4j
public class BusiService {

    // Direct Dubbo connections (no registry), matching the provider ports configured above
    @Reference(url = "dubbo://localhost:20882", check = false)
    private DubboStorageService dubboStorageService;
    @Reference(url = "dubbo://localhost:20881", check = false)
    private DubboOrderService dubboOrderService;

    /**
     * Deduct stock, then create the order.
     *
     * @param userId        the buyer's user id
     * @param commodityCode the commodity code
     * @param orderCount    the purchase quantity
     */
    @GlobalTransactional(name = "purchase")
    public void purchase(String userId, String commodityCode, int orderCount) {
        log.info("purchase begin ... xid: " + RootContext.getXID());

        StorageRequest storageRequest = StorageRequest.builder()
                .commodityCode(commodityCode)
                .count(orderCount)
                .build();
        Result<Boolean> storageResult = dubboStorageService.decreaseStorage(storageRequest);
        log.info("Stock deduction result: {}", JSON.toJSONString(storageResult));
        if (!storageResult.isSuccess()) {
            throw new RuntimeException("Stock deduction failed");
        }

        OrderRequest orderRequest = OrderRequest.builder()
                .userId(userId)
                .commodityCode(commodityCode)
                .count(orderCount)
                .build();
        Result orderResult = dubboOrderService.createOrder(orderRequest);
        log.info("Order creation result: {}", JSON.toJSONString(orderResult));
        if (!orderResult.isSuccess()) {
            throw new RuntimeException("Order creation failed");
        }

        log.info("Global transaction [{}]: order placed successfully", RootContext.getXID());
    }
}

6. Start the services and simulate placing an order

Start the zhengcs-seata-account, zhengcs-seata-storage, zhengcs-seata-order, and zhengcs-seata-busi services in that order.

Browser request http://localhost:8080/purchase?userId=001&code=123&count=1  ---> order placed successfully
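For reference, the /purchase endpoint requested above would come from a simple controller in zhengcs-seata-busi along these lines (a sketch; the class name and parameter bindings are assumptions inferred from the query string):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BusiController {

    @Autowired
    private BusiService busiService;

    // Maps the manual test request: /purchase?userId=001&code=123&count=1
    @GetMapping("/purchase")
    public String purchase(@RequestParam("userId") String userId,
                           @RequestParam("code") String commodityCode,
                           @RequestParam("count") int count) {
        busiService.purchase(userId, commodityCode, count);
        return "order placed successfully";
    }
}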

Tracing the flow:

zhengcs-seata-busi:

zhengcs-seata-storage:

zhengcs-seata-order:

zhengcs-seata-account:

Browser request http://localhost:8080/purchase?userId=001&code=123&count=100  ---> order placement fails

Flow trace:

zhengcs-seata-busi:

Stock was deducted successfully, but placing the order failed because the account balance deduction failed. Since purchase is marked @GlobalTransactional, the TM reports the failure to the TC, which drives the already-committed storage branch to roll back from its undo_log record, restoring the deducted stock.

zhengcs-seata-storage:

zhengcs-seata-order:

zhengcs-seata-account:

With that, the basic project setup is done and basic AT mode runs correctly.

Closing Thoughts

Many problems and questions came up while building and running this project. Some were solved along the way; others are not fully sorted out yet, or I have not had time to dig into them. A few of the more memorable ones are recorded here for further study:

1. What role do transaction groups play in the overall design, and how do they relate to clusters?

2. How do you hook the registry and configuration center up to third-party frameworks?

3. How does the TC achieve high availability, and how is cluster deployment done?

4. What happens to the overall system if the TC goes down, and how does Seata cope with that?

...


References:

http://seata.io/zh-cn/docs/

https://github.com/seata/seata-samples

https://blog.csdn.net/qq_34936541/article/details/103274666?fps=1&locationNum=2

Source code: https://github.com/zcs20082015/zhengcs-seata
