With the server-side groundwork covered earlier, the client-side code is easier to follow; in places the code is identical.
We start with the @EnableDistributedTransaction annotation, the single annotation that enables the transaction client.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Documented
@Import(value = {TCAutoConfiguration.class, DependenciesImportSelector.class})
public @interface EnableDistributedTransaction {
    boolean enableTxc() default true;
}
@Configuration
@ComponentScan(
excludeFilters = @ComponentScan.Filter(
type = FilterType.ASPECTJ, pattern = "com.codingapi.txlcn.tc.core.transaction.txc..*"
)
)
//@Import brings in two classes: one is the logger configuration, the other is an empty implementation
@Import({TxLoggerConfiguration.class, TracingAutoConfiguration.class})
public class TCAutoConfiguration {
/**
* All initialization about TX-LCN
*
* @param applicationContext Spring ApplicationContext
* @return TX-LCN custom runner
*/
@Bean
public ApplicationRunner txLcnApplicationRunner(ApplicationContext applicationContext) {
return new TxLcnApplicationRunner(applicationContext);
}
@Bean
@ConditionalOnMissingBean
public ModIdProvider modIdProvider(ConfigurableEnvironment environment,
@Autowired(required = false) ServerProperties serverProperties) {
return () -> ApplicationInformation.modId(environment, serverProperties);
}
}
There is less code than on the server side; judging by the Javadoc, all the work happens in the ApplicationRunner that gets built.
public void run(ApplicationArguments args) throws Exception {
Map<String, TxLcnInitializer> runnerMap = applicationContext.getBeansOfType(TxLcnInitializer.class);
initializers = runnerMap.values().stream().sorted(Comparator.comparing(TxLcnInitializer::order))
.collect(Collectors.toList());
for (TxLcnInitializer txLcnInitializer : initializers) {
txLcnInitializer.init();
}
}
The code is the same as on the server side: find all TxLcnInitializer beans, then call their init methods.
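The pattern in run() above — collect beans, sort by order(), invoke init() on each — can be sketched in plain Java. The Initializer type below is a hypothetical stand-in for TxLcnInitializer, not the real interface:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical stand-in for TxLcnInitializer: each initializer declares
// an order, and init() is invoked in ascending-order sequence.
class Initializer {
    final int order;
    Initializer(int order) { this.order = order; }
    int order() { return order; }
    void init(List<Integer> invocationLog) { invocationLog.add(order); }
}

class InitializerRunner {
    // Same shape as TxLcnApplicationRunner#run: sort by order(),
    // then call init() on each in a plain for-loop.
    static List<Integer> runAll(Collection<Initializer> unordered) {
        List<Initializer> sorted = unordered.stream()
                .sorted(Comparator.comparing(Initializer::order))
                .collect(Collectors.toList());
        List<Integer> invoked = new ArrayList<>();
        for (Initializer i : sorted) {
            i.init(invoked);
        }
        return invoked;
    }
}
```

Because ordering is explicit, initializers that depend on one another (for example, the RPC client coming up before anything that sends messages) just declare a smaller order value.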
The three logging modules are not covered in detail, and RpcNettyInitializer was already explained in the server-side walkthrough.
1. DTXCheckingInitialization, the distributed transaction checking initializer
public class DTXCheckingInitialization implements TxLcnInitializer {
private final DTXChecking dtxChecking;
private final TransactionCleanTemplate transactionCleanTemplate;
@Autowired
public DTXCheckingInitialization(DTXChecking dtxChecking, TransactionCleanTemplate transactionCleanTemplate) {
this.dtxChecking = dtxChecking;
this.transactionCleanTemplate = transactionCleanTemplate;
}
@Override
public void init() throws Exception {
if (dtxChecking instanceof SimpleDTXChecking) {
((SimpleDTXChecking) dtxChecking).setTransactionCleanTemplate(transactionCleanTemplate);
}
}
}
The code is simple: the class holds two objects, the distributed transaction checker and the transaction clean template, and init() sets the clean template on the checker when the DTXChecking is a SimpleDTXChecking.
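This instanceof-guarded setter is a common way to wire an optional collaborator into only the one implementation that needs it. A minimal sketch with hypothetical types (Checking and SimpleChecking stand in for DTXChecking and SimpleDTXChecking):

```java
// Hypothetical interface standing in for DTXChecking.
interface Checking {}

// Hypothetical concrete implementation standing in for SimpleDTXChecking:
// the only variant that needs a clean template injected after construction.
class SimpleChecking implements Checking {
    String cleanTemplate;
    void setCleanTemplate(String template) { this.cleanTemplate = template; }
}

class CheckingInitialization {
    // Mirrors DTXCheckingInitialization#init: downcast and inject only
    // when the runtime type actually needs the collaborator.
    static boolean wire(Checking checking, String template) {
        if (checking instanceof SimpleChecking) {
            ((SimpleChecking) checking).setCleanTemplate(template);
            return true;
        }
        return false; // other implementations are left untouched
    }
}
```

The design choice here avoids widening the DTXChecking interface with a setter that most implementations would ignore.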
2. TCRpcServer, the client-side RPC server
public void init() throws Exception {
// rpc timeout (ms)
if (rpcConfig.getWaitTime() <= 5) {
rpcConfig.setWaitTime(1000);
}
// rpc client init.
rpcClientInitializer.init(TxManagerHost.parserList(txClientConfig.getManagerAddress()), false);
}
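TxManagerHost.parserList turns the configured manager addresses into endpoints the Netty client can connect to. The real parsing code is not shown here; a plausible sketch for "host:port" entries looks like this (ManagerAddressParser is a hypothetical name, and TX-LCN's actual parsing may differ):

```java
import java.net.InetSocketAddress;
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch of TxManagerHost.parserList: map each configured
// "host:port" string to a socket address for the Netty client.
class ManagerAddressParser {
    static List<InetSocketAddress> parse(List<String> addresses) {
        return addresses.stream()
                .map(String::trim)
                .map(addr -> {
                    int idx = addr.lastIndexOf(':');
                    if (idx < 0) {
                        throw new IllegalArgumentException("expected host:port, got " + addr);
                    }
                    String host = addr.substring(0, idx);
                    int port = Integer.parseInt(addr.substring(idx + 1));
                    return new InetSocketAddress(host, port);
                })
                .collect(Collectors.toList());
    }
}
```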
NettyRpcClientInitializer#init
public void init(List<TxManagerHost> hosts, boolean sync) {
NettyContext.type = NettyType.client;
NettyContext.params = hosts;
workerGroup = new NioEventLoopGroup();
for (TxManagerHost host : hosts) {
Optional<Future> future = connect(new InetSocketAddress(host.getHost(), host.getPort()));
if (sync && future.isPresent()) {
try {
future.get().get(10, TimeUnit.SECONDS);
} catch (InterruptedException | ExecutionException | TimeoutException e) {
log.error(e.getMessage(), e);
}
}
}
}
@Override
public synchronized Optional<Future> connect(SocketAddress socketAddress) {
for (int i = 0; i < rpcConfig.getReconnectCount(); i++) {
if (SocketManager.getInstance().noConnect(socketAddress)) {
try {
log.info("Try connect socket({}) - count {}", socketAddress, i + 1);
Bootstrap b = new Bootstrap();
b.group(workerGroup);
b.channel(NioSocketChannel.class);
b.option(ChannelOption.SO_KEEPALIVE, true);
b.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000);
b.handler(nettyRpcClientChannelInitializer);
return Optional.of(b.connect(socketAddress).syncUninterruptibly());
} catch (Exception e) {
log.warn("Connect socket({}) fail. {}ms latter try again.", socketAddress, rpcConfig.getReconnectDelay());
try {
Thread.sleep(rpcConfig.getReconnectDelay());
} catch (InterruptedException e1) {
e1.printStackTrace();
}
continue;
}
}
// skip addresses that are already connected
return Optional.empty();
}
log.warn("Finally, netty connection fail , socket is {}", socketAddress);
clientInitCallBack.connectFail(socketAddress.toString());
return Optional.empty();
}
This starts a Netty client that connects to the server according to the manager-address configuration.
The connect method above also implements a reconnect mechanism, retrying up to the configured ReconnectCount (default 8) with ReconnectDelay between attempts (default 6 seconds).
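Stripped of Netty, the retry policy in connect() reduces to a bounded loop with a fixed sleep between failed attempts, returning an empty Optional when every attempt fails. A self-contained sketch (RetryConnector is a hypothetical name; the 8 attempts and 6-second delay are the defaults described above):

```java
import java.util.*;
import java.util.function.Supplier;

class RetryConnector {
    // Try the attempt up to maxAttempts times; on failure, sleep
    // delayMillis and retry, mirroring the loop in connect().
    static <T> Optional<T> withRetry(Supplier<T> attempt, int maxAttempts, long delayMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return Optional.of(attempt.get());
            } catch (Exception e) {
                try {
                    Thread.sleep(delayMillis); // back off before the next try
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return Optional.empty();
                }
            }
        }
        return Optional.empty(); // all attempts failed
    }
}
```

A fixed delay keeps the behavior predictable; an exponential backoff would be a common alternative when many clients might reconnect at once.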
NettyRpcClientChannelInitializer extends ChannelInitializer, whose initChannel method is invoked when the client starts:
protected void initChannel(Channel ch) throws Exception {
//the next two handlers are the same as on the server side
ch.pipeline().addLast(new LengthFieldPrepender(4, false));
ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE,
0, 4, 0, 4));
//the next two are the same as on the server side
ch.pipeline().addLast(new ObjectSerializerEncoder());
ch.pipeline().addLast(new ObjectSerializerDecoder());
//the next two are the same as on the server side
ch.pipeline().addLast(rpcCmdDecoder);
ch.pipeline().addLast(new RpcCmdEncoder());
//handler for reconnecting after a disconnect
ch.pipeline().addLast(nettyClientRetryHandler);
//same as the server side, but with one feature missing
ch.pipeline().addLast(socketManagerInitHandler);
//same as the server side
ch.pipeline().addLast(rpcAnswerHandler);
}
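The first two handlers implement length-prefixed framing: LengthFieldPrepender(4, false) writes a 4-byte big-endian length before each payload, and LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4) reads that prefix and strips it. A pure-Java sketch of the same framing (assuming complete buffers, whereas Netty handles partial reads incrementally):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.*;

class LengthFieldFraming {
    // Encode: [4-byte length][payload]; the length excludes the prefix
    // itself, matching LengthFieldPrepender(4, false).
    static byte[] encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length);
        buf.put(payload);
        return buf.array();
    }

    // Decode: split a buffer holding one or more frames back into payloads,
    // stripping the 4-byte prefix as the decoder's initialBytesToStrip does.
    static List<byte[]> decode(byte[] stream) {
        List<byte[]> frames = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();  // read the 4-byte length prefix
            byte[] frame = new byte[len];
            buf.get(frame);          // strip prefix, keep payload
            frames.add(frame);
        }
        return frames;
    }
}
```

Framing like this is what lets the object (de)serializer handlers further up the pipeline see one whole message at a time instead of an arbitrary TCP byte stream.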
Compared with the server side, the IdleStateHandler used for heartbeat detection is missing, so one feature of socketManagerInitHandler is unused: its userEventTriggered method is never called.
nettyClientRetryHandler is an addition and serves two purposes:
1. Reconnection, by default 8 attempts at 6-second intervals.
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
super.channelInactive(ctx);
log.error("keepSize:{},nowSize:{}", keepSize, SocketManager.getInstance().currentSize());
SocketAddress socketAddress = ctx.channel().remoteAddress();
log.error("socketAddress:{} ", socketAddress);
//reconnect after disconnect
NettyRpcClientInitializer.reConnect(socketAddress);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
log.error("NettyClientRetryHandler - exception . ", cause);
if (cause instanceof ConnectException) {
int size = SocketManager.getInstance().currentSize();
Thread.sleep(1000 * 15);
log.error("current size:{} ", size);
log.error("try connect tx-manager:{} ", ctx.channel().remoteAddress());
//reconnect after disconnect
NettyRpcClientInitializer.reConnect(ctx.channel().remoteAddress());
}
//send a packet to detect whether the connection has dropped
ctx.writeAndFlush(heartCmd);
}
public static void reConnect(SocketAddress socketAddress) {
Objects.requireNonNull(socketAddress, "non support!");
INSTANCE.connect(socketAddress);
}
When the connection drops or an exception is raised, the reconnect mechanism kicks in; as shown above, it simply calls the connect method again.
2. A callback that runs once the connection is established.
What the callback does:
2.1 Fetch parameters from the server such as the machine id, the distributed transaction timeout, and the maximum wait time (clients cannot configure these themselves; the server's values are authoritative).
2.2 If more servers are running than the client has configured, the callback lets the client connect to all of them.
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
keepSize = NettyContext.currentParam(List.class).size();
//callback invoked once the channel is active
clientInitCallBack.connected(ctx.channel().remoteAddress().toString());
}
public void connected(String remoteKey) {
//listeners invoked on successful connection; here the implementation is empty
rpcEnvStatusListeners.forEach(rpcEnvStatusListener -> rpcEnvStatusListener.onConnected(remoteKey));
new Thread(() -> {
try {
log.info("Send init message to TM[{}]", remoteKey);
MessageDto msg = rpcClient.request(
remoteKey, MessageCreator.initClient(applicationName, modIdProvider.modId()), 5000);
if (MessageUtils.statusOk(msg)) {
//the latest values are fetched each time a connection is established
InitClientParams resParams = msg.loadBean(InitClientParams.class);
// 1. Apply DTX time, TM RPC timeout and machine id
txClientConfig.applyDtxTime(resParams.getDtxTime());
txClientConfig.applyTmRpcTimeout(resParams.getTmRpcTimeout());
txClientConfig.applyMachineId(resParams.getMachineId());
// 2. Initialize IdGen
IdGenInit.applyDefaultIdGen(resParams.getSeqLen(), resParams.getMachineId());
// 3. Logging
log.info("Finally, determined dtx time is {}ms, tm rpc timeout is {} ms, machineId is {}",
resParams.getDtxTime(), resParams.getTmRpcTimeout(), resParams.getMachineId());
// 4. Run the remaining listeners
rpcEnvStatusListeners.forEach(rpcEnvStatusListener -> rpcEnvStatusListener.onInitialized(remoteKey));
return;
}
log.error("TM[{}] exception. connect fail!", remoteKey);
} catch (RpcException e) {
log.error("Send init message exception: {}. connect fail!", e.getMessage());
}
}).start();
}
That is essentially all there is to the client; overall it closely mirrors the server side.