springcloud-nacos-seata: implementing distributed transactions

A demo of the distributed transaction component Seata in AT mode, integrated with Nacos, Spring Boot, Spring Cloud and MyBatis-Plus; the database is MySQL.

ps: GitHub code: transaction_example

1. Server-side configuration

1.1 Nacos-server

Startup command (standalone means single-node mode, as opposed to cluster mode):

cd bin

sh startup.sh -m standalone

ps: the shutdown command is sh shutdown.sh

The console prints:

nacos is starting with cluster
nacos is starting,you can check the /usr/local/nacos/logs/start.out

You can then check the Nacos startup log:

cat /usr/local/nacos/logs/start.out

If there are no errors, the startup succeeded.

Visit http://192.168.87.133:8848/nacos/index.html to open the Nacos console. The default username and password are both nacos.

At this point the Configuration Management -> Configurations list is empty; nothing has been pushed yet.

1.2 Seata-server

1.2.1 Edit conf/registry.conf

ps: each application also needs a registry.conf under its resources directory; in this demo it is identical to the one used by seata-server (see section 2).

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }

}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}

1.2.2 Edit conf/nacos-config.txt

(ps: for the application.properties entries shown later, note that spring.cloud.alibaba.seata.tx-service-group is the transaction service group name and must match a service.vgroup_mapping.${your-service-group} entry in nacos-config.txt; for example, spring.cloud.alibaba.seata.tx-service-group=order-service-group pairs with service.vgroup_mapping.order-service-group=default.)

The demo contains two services, storage-service and order-service. The complete configuration is below. Since store.mode=db is used, the Seata server also needs the seata database referenced by store.db.url, containing the global_table, branch_table and lock_table tables; and AT mode additionally requires an undo_log table (transaction.undo.log.table) in each business database:

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.thread-factory.boss-thread-prefix=NettyBoss
transport.thread-factory.worker-thread-prefix=NettyServerNIOWorker
transport.thread-factory.server-executor-thread-prefix=NettyServerBizHandler
transport.thread-factory.share-boss-worker=false
transport.thread-factory.client-selector-thread-prefix=NettyClientSelector
transport.thread-factory.client-selector-thread-size=1
transport.thread-factory.client-worker-thread-prefix=NettyClientWorkerThread
transport.thread-factory.boss-thread-size=1
transport.thread-factory.worker-thread-size=8
transport.shutdown.wait=3
service.vgroup_mapping.storage-service-group=default
service.vgroup_mapping.order-service-group=default
service.enableDegrade=false
service.disable=false
service.max.commit.retry.timeout=-1
service.max.rollback.retry.timeout=-1
client.async.commit.buffer.limit=10000
client.lock.retry.internal=10
client.lock.retry.times=30
client.lock.retry.policy.branch-rollback-on-conflict=true
client.table.meta.check.enable=true
client.report.retry.count=5
client.tm.commit.retry.count=1
client.tm.rollback.retry.count=1
store.mode=db
store.file.dir=file_store/data
store.file.max-branch-session-size=16384
store.file.max-global-session-size=512
store.file.file-write-buffer-cache-size=16384
store.file.flush-disk-mode=async
store.file.session.reload.read_size=100
store.db.datasource=dbcp
store.db.db-type=mysql
store.db.driver-class-name=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456
store.db.min-conn=1
store.db.max-conn=3
store.db.global.table=global_table
store.db.branch.table=branch_table
store.db.query-limit=100
store.db.lock-table=lock_table
recovery.committing-retry-period=1000
recovery.asyn-committing-retry-period=1000
recovery.rollbacking-retry-period=1000
recovery.timeout-retry-period=1000
transaction.undo.data.validation=true
transaction.undo.log.serialization=jackson
transaction.undo.log.save.days=7
transaction.undo.log.delete.period=86400000
transaction.undo.log.table=undo_log
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registry-type=compact
metrics.exporter-list=prometheus
metrics.exporter-prometheus-port=9898
support.spring.datasource.autoproxy=false

1.3 Initialize the Seata configuration in Nacos:

cd conf
sh nacos-config.sh 192.168.87.133

The last line of output indicates that the configuration was pushed to Nacos successfully:

init nacos config finished, please start seata-server.

ps: do not add comments to conf/nacos-config.txt, otherwise you may get init nacos config fail.

For example, I once added a comment and the last line of the script reported that the initialization had failed; it took a while to realize the comment was the cause rather than some other misconfiguration.

Then open http://192.168.87.133:8848/nacos/index.html again; the configuration list is now populated.

 

1.4 Start seata-server

cd bin
sh seata-server.sh -p 8091 -m db
#or 
sh seata-server.sh
 

2. Application configuration

ps: the code is taken from the official demo, so it is not reproduced here; a rough sketch of the core flow is given below for orientation.
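The sketch below is pieced together from the class and method names that appear in the logs in section 3 (StorageFeignClient#deduct(String,Integer), the /storage/deduct path). The mapper, entity and parameter names and the HTTP method of the Feign call are assumptions, not the demo's exact code, and the two types would live in separate files in the real project:

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Feign client for storage-service; the method signature comes from the error log in
// section 3.3, while the HTTP method and parameter names are assumptions.
@FeignClient(name = "storage-service")
interface StorageFeignClient {
    @GetMapping("/storage/deduct")
    void deduct(@RequestParam("commodityCode") String commodityCode,
                @RequestParam("count") Integer count);
}

@Service
class OrderService {

    private final StorageFeignClient storageFeignClient;
    private final OrderMapper orderMapper; // hypothetical MyBatis-Plus mapper

    OrderService(StorageFeignClient storageFeignClient, OrderMapper orderMapper) {
        this.storageFeignClient = storageFeignClient;
        this.orderMapper = orderMapper;
    }

    // @GlobalTransactional opens a Seata global transaction (AT mode). If the remote
    // deduct() call throws, the local order insert is rolled back as well.
    @GlobalTransactional
    public void placeOrder(String userId, String commodityCode, Integer count) {
        Order order = new Order(userId, commodityCode, count); // hypothetical entity
        orderMapper.insert(order);                             // branch 1: seata_order database
        storageFeignClient.deduct(commodityCode, count);       // branch 2: seata_storage database
    }
}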

2.1 order-service

application.properties:

spring.application.name=order-service
server.port=9091

# Nacos registry address
spring.cloud.nacos.discovery.server-addr = 192.168.87.133:8848

# Seata transaction service group; must match the suffix of a service.vgroup_mapping entry in the server-side nacos-config.txt
spring.cloud.alibaba.seata.tx-service-group=order-service-group

logging.level.io.seata = debug

# Data source configuration
spring.datasource.druid.url=jdbc:mysql://192.168.87.133:3306/seata_order?allowMultiQueries=true
spring.datasource.druid.driverClassName=com.mysql.jdbc.Driver
spring.datasource.druid.username=root
spring.datasource.druid.password=123456
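Because support.spring.datasource.autoproxy=false is set in nacos-config.txt above, the application has to wrap its Druid data source in Seata's DataSourceProxy itself; otherwise AT mode cannot intercept SQL and write undo logs. A minimal sketch of such a configuration class (the official demo ships an equivalent one; the class and method names here are illustrative):

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceProxyConfig {

    // Bind the spring.datasource.druid.* properties above to a plain Druid data source.
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.druid")
    public DruidDataSource druidDataSource() {
        return new DruidDataSource();
    }

    // Expose Seata's DataSourceProxy as the primary DataSource so that MyBatis-Plus runs
    // through it and undo-log records are written to the undo_log table on every branch.
    @Bean
    @Primary
    public DataSourceProxy dataSourceProxy(DruidDataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }
}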

registry.conf: 

(ps: the registry.conf under resources in order-service and storage-service is identical to the one under conf in seata-server)

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}
config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"
  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}

2.2 storage-service

application.properties:

spring.application.name=storage-service
server.port=9092

# Nacos registry address
spring.cloud.nacos.discovery.server-addr = 192.168.87.133:8848

# Seata transaction service group; must match the suffix of a service.vgroup_mapping entry in the server-side nacos-config.txt
spring.cloud.alibaba.seata.tx-service-group=storage-service-group
logging.level.io.seata = debug

# Data source configuration
spring.datasource.druid.url=jdbc:mysql://192.168.87.133:3306/seata_storage?allowMultiQueries=true
spring.datasource.druid.driverClassName=com.mysql.jdbc.Driver
spring.datasource.druid.username=root
spring.datasource.druid.password=123456

registry.conf:

(ps: the registry.conf under resources in order-service and storage-service is identical to the one under conf in seata-server)

3. Testing

3.1 Successful startup:

nacos registry, storage-service 192.168.xxxx.xxx:9092 register finished

nacos registry, order-service 192.168.xxxx.xxx:9091 register finished

3.2 Distributed transaction success: simulate a normal order with stock deduction

ps: I ran this test three times; the initial count of product-1 in storage_tbl was 9999999, so the deductions went through without issue.

3.3 Distributed transaction failure: simulate a successful order but a failed stock deduction, so both are rolled back
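The failure is produced on the storage side. Judging from the stack trace below (com.lucifer.storage.service.StorageService.deduct throwing a RuntimeException), the deduct method looks roughly like this sketch; the mapper call and the condition guarding the simulated exception are assumptions, since the post does not show the code:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StorageService {

    private final StorageMapper storageMapper; // hypothetical MyBatis-Plus mapper

    private boolean mockBusinessException; // toggled to simulate the failed deduction (assumption)

    public StorageService(StorageMapper storageMapper) {
        this.storageMapper = storageMapper;
    }

    @Transactional(rollbackFor = Exception.class)
    public void deduct(String commodityCode, Integer count) {
        storageMapper.deduct(commodityCode, count); // hypothetical mapper method updating storage_tbl
        if (mockBusinessException) {
            // The exception propagates back through Feign as an HTTP 500, and the TM in
            // order-service then rolls back the whole global transaction (see logs below).
            throw new RuntimeException("異常:模擬業務異常:Storage branch exception");
        }
    }
}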

order-service:

2020-01-03 15:10:56.195  INFO 5020 --- [nio-9091-exec-3] i.seata.tm.api.DefaultGlobalTransaction  : [192.168.87.133:8091:2031140107] rollback status: Rollbacked
2020-01-03 15:10:56.196 ERROR 5020 --- [nio-9091-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is feign.FeignException: status 500 reading StorageFeignClient#deduct(String,Integer); content:
{"timestamp":"2020-01-03T07:10:56.166+0000","status":500,"error":"Internal Server Error","message":"異常:模擬業務異常:Storage branch exception","path":"/storage/deduct"}] with root cause

feign.FeignException: status 500 reading StorageFeignClient#deduct(String,Integer); content:
{"timestamp":"2020-01-03T07:10:56.166+0000","status":500,"error":"Internal Server Error","message":"異常:模擬業務異常:Storage branch exception","path":"/storage/deduct"}

 

storage-service: 

java.lang.RuntimeException: 異常:模擬業務異常:Storage branch exception
	at com.lucifer.storage.service.StorageService.deduct(StorageService.java:37) ~[classes/:na]
	at com.lucifer.storage.service.StorageService$$FastClassBySpringCGLIB$$89a96fbd.invoke(<generated>) ~[classes/:na]
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-5.0.10.RELEASE.jar:5.0.10.RELEASE]

ps: neither database table shows any data change, so the rollback test passed.

================================================================================================

ps: some problems I ran into while integrating springcloud-nacos-seata:

i.s.c.r.netty.NettyClientChannelManager : no available server to connect.

A few things to check:

1. In application.properties, spring.cloud.alibaba.seata.tx-service-group is the transaction service group name and must correspond to a service.vgroup_mapping.${your-service-group} entry in nacos-config.txt;

2. Each application needs a registry.conf under its resources directory, identical in this demo to the one used by seata-server (in other words, both the registry.conf under seata-server's conf directory and the registry.conf under each module's resources must be configured, and they must match).

ps: my mistake was that I had configured the per-application registry.conf from point 2, but not the registry.conf under seata-server's conf directory; on the server side I had only edited conf/nacos-config.txt.

3. Initialize the Seata configuration in Nacos (section 1.3).

Reference: the official springcloud-nacos-seata demo

 