Reference: https://docs.spring.io/spring-kafka/reference/html/#replying-template
Implementing Synchronous Request-Reply over Kafka with Spring
I. Use Case
Kafka-based request-reply keeps services decoupled through message passing while still giving the producer a synchronous result: the producer sends a request message and blocks until the consumer's reply arrives.
II. Usage in Practice
1. Version requirement
ReplyingKafkaTemplate requires spring-kafka 2.1.3 or later; there is no particular requirement on the Kafka broker version.
2. Caveat
This feature cannot currently be set up through property files (yml/properties); the configuration must be written as Java config.
3. Producer
3.1 Producer configuration
Key point:
Define a ReplyingKafkaTemplate whose repliesContainer listens on the reply topic "REPLY_ASYN_MESSAGE".
Example configuration:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

@Configuration
@EnableKafka
public class KafkaProducerConfig {

    /**
     * Synchronous messaging needs a ReplyingKafkaTemplate backed by a replies container.
     * @param producerFactory producer factory for the request side
     * @param repliesContainer listener container for the reply topic
     * @return the replying template
     */
    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
            ProducerFactory<String, String> producerFactory,
            ConcurrentMessageListenerContainer<String, String> repliesContainer) {
        ReplyingKafkaTemplate<String, String, String> template =
                new ReplyingKafkaTemplate<>(producerFactory, repliesContainer);
        // Timeout for the synchronous reply: 10s
        template.setReplyTimeout(10000);
        return template;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.128.100.100:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        // Max size of a single request; the default is 1MB, uncomment to raise it to 20MB
        //props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 20971520);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    /**
     * Container listening on the topic to which consumers send their replies.
     * @return the replies container
     */
    @Bean
    public ConcurrentMessageListenerContainer<String, String> repliesContainer(
            ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {
        ConcurrentMessageListenerContainer<String, String> repliesContainer =
                containerFactory.createContainer("REPLY_ASYN_MESSAGE");
        repliesContainer.getContainerProperties().setGroupId("replies_message_group");
        // The replying template starts this container itself
        repliesContainer.setAutoStartup(false);
        return repliesContainer;
    }
}
3.2 Sending from the producer
Key point:
Build a ProducerRecord carrying the request topic, and put the reply topic into its KafkaHeaders.REPLY_TOPIC header.
Sending code:
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class AsynchronousMessageProducer {

    @Autowired
    private ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate;

    /**
     * Send a message and wait synchronously for the reply.
     * @param paraMessageBO the message payload
     * @return the consumer's reply, or null on failure
     */
    public String sendMessage(MessageBO paraMessageBO) {
        String returnValue = null;
        String message = null;
        try {
            message = new ObjectMapper().writeValueAsString(paraMessageBO);
            log.info("Synchronous send start: " + message);
            // Request topic
            ProducerRecord<String, String> record = new ProducerRecord<>("ASYN_MESSAGE", message);
            // Reply topic
            record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "REPLY_ASYN_MESSAGE".getBytes()));
            RequestReplyFuture<String, String, String> replyFuture = replyingKafkaTemplate.sendAndReceive(record);
            // First future: completes when the broker acknowledges the send
            SendResult<String, String> sendResult = replyFuture.getSendFuture().get();
            log.info("Sent ok: " + sendResult.getRecordMetadata());
            // Second future: completes when the consumer's reply arrives
            ConsumerRecord<String, String> consumerRecord = replyFuture.get();
            returnValue = consumerRecord.value();
            log.info("Return value: " + returnValue);
            log.info("Synchronous send end.");
        } catch (Exception e) {
            log.error("Synchronous send failed, MESSAGE: " + message, e);
        }
        return returnValue;
    }
}
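The two-stage flow above (first a send acknowledgement, then the actual reply) can be illustrated with plain java.util.concurrent types. This is only a sketch of the blocking semantics of sendAndReceive, not Spring Kafka code; the broker round trip is simulated with an executor, and all names here are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RequestReplySketch {

    // Simulates sendAndReceive: the reply future is chained after the "ack" future,
    // mirroring how the real reply can only arrive after the request was sent.
    static CompletableFuture<String> sendAndReceive(String message, ExecutorService broker) {
        CompletableFuture<String> sendFuture =
                CompletableFuture.supplyAsync(() -> "acked:" + message, broker);
        return sendFuture.thenApplyAsync(ack -> "I GOT IT!", broker);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService broker = Executors.newSingleThreadExecutor();
        // Block for the reply with a timeout, like setReplyTimeout(10000)
        String reply = sendAndReceive("hello", broker).get(10, TimeUnit.SECONDS);
        System.out.println("Return value: " + reply);
        broker.shutdown();
    }
}
```

The point of the sketch is that the caller thread blocks twice: once for the send result, once for the reply, each with its own failure mode (send error vs. reply timeout).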
4. Consumer
4.1 Consumer configuration
Key point:
Set the kafkaTemplate on the containerFactory as the reply template.
Example configuration:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> containerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(3000);
        // The reply template is what makes @SendTo work
        factory.setReplyTemplate(kafkaTemplate);
        return factory;
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.128.100.100:9092");
        // Default group id
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "message-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }
}
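The configuration above autowires a KafkaTemplate, but the snippet does not show where that bean comes from (in a Spring Boot application it may be auto-configured). If it is missing, a minimal sketch of the bean might look like the following; the producer factory is an assumption and would need producer properties equivalent to those in section 3.1 defined in the consumer application.

```java
@Bean
public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
    // Used by the container factory as the reply template, so @SendTo can publish replies
    return new KafkaTemplate<>(producerFactory);
}
```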
4.2 Consuming on the consumer side
Key point:
Reference the containerFactory explicitly on the listener, otherwise the message will not be received through the reply-aware factory; and annotate the method with @SendTo, otherwise no reply is returned to the producer.
Consume the message and reply:
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class MessageConsumer {

    @KafkaListener(topics = "ASYN_MESSAGE", containerFactory = "containerFactory")
    @SendTo // the return value is published to the topic named in the REPLY_TOPIC header
    public String consumerAsyn(String receiveMessage) {
        return "I GOT IT!";
    }
}
III. Notes from Practice
1. Consumer test matrix

| | containerFactory referenced | containerFactory not referenced |
|---|---|---|
| With @SendTo | exactly one consumer in the group consumes the message, and the reply is returned synchronously | spring-kafka throws an error |
| Without @SendTo | exactly one consumer consumes, but no reply is sent, so the producer reports a timeout | the message is not consumed, and the producer reports a timeout |

So the listener must both reference the containerFactory and carry @SendTo, and the containerFactory must have the kafkaTemplate set as its reply template.
2. Problems encountered
- If kafkaListenerContainerFactory cannot be injected (see: https://stackoverflow.com/questions/54698353/spring-kafka-consumerfactory-bean-not-found), construct the factory explicitly, e.g. new DefaultKafkaProducerFactory<>(producerConfigs())
- If the error says the consumer is missing a group-id, specify the groupId on each listener, e.g.:
@KafkaListener(topics = "IMAGE_MESSAGE", groupId = "image-message-group")
3. Caveat
In a typical Kafka deployment, several consumers listen on the same topic (whether different applications consume it, or one application is deployed as a cluster and so runs multiple consumers). For synchronous request-reply, every such consumer must run the same reply-handling logic, so that the producer receives a consistent result no matter which instance answers.
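One way to keep replies consistent across consumer instances is to isolate the reply computation in a shared, pure helper that every listener delegates to. A minimal sketch (class and method names are illustrative, not from the original code):

```java
public final class ReplyPolicy {

    private ReplyPolicy() {}

    // Pure function: the same input message always yields the same reply,
    // regardless of which consumer instance in the group handles it.
    public static String replyFor(String receiveMessage) {
        if (receiveMessage == null || receiveMessage.isEmpty()) {
            return "EMPTY_MESSAGE";
        }
        return "I GOT IT!";
    }
}
```

Each @KafkaListener method would then simply `return ReplyPolicy.replyFor(receiveMessage);`, and packaging the helper as a shared library guarantees all deployments answer identically.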