Kafka Producer Partitioning Optimization

After the previous several articles on the Kafka producer, can we still find places to optimize it further? The answer is yes. Here we introduce KIP-480, which was merged into the current latest Kafka release, 2.4.0. Its core logic is: when sending a sequence of messages without keys, the producer keeps sending to the same partition ("sticky"); only when the batch for the current partition is full, or the linger.ms delay has expired and the batch is sent out, does it switch to a new partition (chosen at random from the available partitions rather than by strict Round-Robin). Let's first lay out the schematic diagrams of the two modes:
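Before diving into the source code, a minimal sketch of the producer-side setup this optimization targets may help: keyless sends whose batching is governed by batch.size and linger.ms. The broker address, the topic name demo-topic and the configuration values below are illustrative assumptions, not taken from the KIP.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class StickyProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // illustrative values; adjust to your own environment
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("batch.size", 16384); // bytes accumulated per partition before a batch is considered full
        props.put("linger.ms", 10);     // max wait before a partially filled batch is sent anyway

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                // no key: on 2.4.0+ these records stick to one partition
                // until the batch is full or linger.ms expires
                producer.send(new ProducerRecord<>("demo-topic", "message-" + i));
            }
        }
    }
}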

Now let's take a look at the source-level implementation of this mode.

The implementation starts with a change to the Partitioner interface. In earlier versions this interface had only two methods:

public interface Partitioner extends Configurable, Closeable {

    /**
     * Compute the partition for the given record.
     *
     * @param topic The topic name
     * @param key The key to partition on (or null if no key)
     * @param keyBytes The serialized key to partition on( or null if no key)
     * @param value The value to partition on or null
     * @param valueBytes The serialized value to partition on or null
     * @param cluster The current cluster metadata
     */
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster);

    /**
     * This is called when partitioner is closed.
     */
    public void close();

}

The latest Partitioner interface adds an onNewBatch method, which is triggered when a new batch is about to be created. Its source is as follows:

public interface Partitioner extends Configurable, Closeable {

    /**
     * Compute the partition for the given record.
     *
     * @param topic The topic name
     * @param key The key to partition on (or null if no key)
     * @param keyBytes The serialized key to partition on( or null if no key)
     * @param value The value to partition on or null
     * @param valueBytes The serialized value to partition on or null
     * @param cluster The current cluster metadata
     */
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster);

    /**
     * This is called when partitioner is closed.
     */
    public void close();


    /**
     * Notifies the partitioner a new batch is about to be created. When using the sticky partitioner,
     * this method can change the chosen sticky partition for the new batch. 
     * @param topic The topic name
     * @param cluster The current cluster metadata
     * @param prevPartition The partition previously selected for the record that triggered a new batch
     */
    default public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
    }
}
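To get a feel for the new hook before reading Kafka's own implementation, here is a hypothetical, stripped-down custom partitioner that uses onNewBatch to keep keyless records sticky. The class name and helper method are invented for illustration; the logic Kafka actually ships is examined next.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class SimpleStickyPartitioner implements Partitioner {
    // current sticky partition per topic
    private final Map<String, Integer> sticky = new ConcurrentHashMap<>();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        if (keyBytes != null) {
            // keyed records: hash the serialized key, as in the classic behaviour
            return Utils.toPositive(Utils.murmur2(keyBytes)) % cluster.partitionsForTopic(topic).size();
        }
        // keyless records: stay on the cached partition until onNewBatch moves it
        return sticky.computeIfAbsent(topic, t -> pickRandom(t, cluster));
    }

    @Override
    public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
        // the batch for prevPartition is being closed, so switch only if it is our current sticky partition
        sticky.computeIfPresent(topic, (t, current) ->
                current == prevPartition ? pickRandom(t, cluster) : current);
    }

    private int pickRandom(String topic, Cluster cluster) {
        // naive random choice; the real implementation also avoids re-picking the previous
        // partition and prefers currently available partitions
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        return partitions.get(ThreadLocalRandom.current().nextInt(partitions.size())).partition();
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}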

The old partitioning mode has already been covered in earlier articles, so here we focus on the implementation behind the new method, which lives in StickyPartitionCache:

public class StickyPartitionCache {
    private final ConcurrentMap<String, Integer> indexCache;
    public StickyPartitionCache() {
        // caches the current sticky partition for every topic
        this.indexCache = new ConcurrentHashMap<>();
    }

    public int partition(String topic, Cluster cluster) {
        // if the cache has an entry, a sticky partition has already been chosen for this topic
        Integer part = indexCache.get(topic);
        if (part == null) {
            // otherwise pick a new partition
            return nextPartition(topic, cluster, -1);
        }
        return part;
    }

    public int nextPartition(String topic, Cluster cluster, int prevPartition) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        Integer oldPart = indexCache.get(topic);
        Integer newPart = oldPart;
        // This method is triggered in two situations: the topic has no cached partition yet
        // (e.g. a topic seen for the first time), or a new batch was created and a new sticky
        // partition is needed; the guard below covers exactly these two cases.
        if (oldPart == null || oldPart == prevPartition) {
            // Pick a new partition at random; unlike the old keyless Round-Robin logic,
            // the chosen partition is kept until the next batch instead of changing per record.
            List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
            if (availablePartitions.size() < 1) {
                Integer random = Utils.toPositive(ThreadLocalRandom.current().nextInt());
                newPart = random % partitions.size();
            } else if (availablePartitions.size() == 1) {
                newPart = availablePartitions.get(0).partition();
            } else {
                while (newPart == null || newPart.equals(oldPart)) {
                    Integer random = Utils.toPositive(ThreadLocalRandom.current().nextInt());
                    newPart = availablePartitions.get(random % availablePartitions.size()).partition();
                }
            }
            // For a topic seen for the first time simply add the entry; otherwise replace
            // the old sticky partition with the new one.
            if (oldPart == null) {
                indexCache.putIfAbsent(topic, newPart);
            } else {
                indexCache.replace(topic, prevPartition, newPart);
            }
            return indexCache.get(topic);
        }
        return indexCache.get(topic);
    }
}
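For completeness, the built-in DefaultPartitioner in 2.4.0 wires this cache in along roughly the following lines. This is a paraphrased sketch rather than a verbatim copy of the Kafka source: keyless records are delegated to the cache, keyed records still hash on the serialized key, and onNewBatch simply advances the sticky partition.

import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.clients.producer.internals.StickyPartitionCache;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class DefaultPartitioner implements Partitioner {
    private final StickyPartitionCache stickyPartitionCache = new StickyPartitionCache();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        if (keyBytes == null) {
            // no key: use the sticky partition cached for this topic
            return stickyPartitionCache.partition(topic, cluster);
        }
        // key present: hash the serialized key, same as before KIP-480
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % partitions.size();
    }

    @Override
    public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
        // a batch for prevPartition was just closed, so move the sticky partition forward
        stickyPartitionCache.nextPartition(topic, cluster, prevPartition);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

Only keyless traffic benefits from the sticky behaviour; keyed records keep their partition-by-key guarantee.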

After understanding the new partitioning logic, one question remains: when exactly is the switch to a new partition triggered? The answer is the following piece of logic inside KafkaProducer's doSend method:

// try to append the record to the previously chosen partition
RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
        serializedValue, headers, interceptCallback, remainingWaitMs, true);
// the append did not succeed because a new batch would have to be created
if (result.abortForNewBatch) {
    int prevPartition = partition;
    // let the partitioner pick a new sticky partition
    partitioner.onNewBatch(record.topic(), cluster, prevPartition);
    // fetch the newly chosen partition
    partition = partition(record, serializedKey, serializedValue, cluster);
    // rebuild the TopicPartition
    tp = new TopicPartition(record.topic(), partition);
    if (log.isTraceEnabled()) {
        log.trace("Retrying append due to new batch creation for topic {} partition {}. The old partition was {}", record.topic(), partition, prevPartition);
    }
    // producer callback will make sure to call both 'callback' and interceptor callback
    interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);
    // append again, this time allowing a new batch to be created
    result = accumulator.append(tp, timestamp, serializedKey,
            serializedValue, headers, interceptCallback, remainingWaitMs, false);
}
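Using the producer from the earlier sketch, a simple way to observe this behaviour from the outside is to log the partition reported in the RecordMetadata of each acknowledged keyless send; with the sticky partitioner the reported partition only changes when a batch rolls over. The topic name is again an illustrative assumption.

// assumes a configured KafkaProducer<String, String> named "producer" (see the earlier sketch)
for (int i = 0; i < 1000; i++) {
    producer.send(new ProducerRecord<>("demo-topic", "message-" + i), (metadata, exception) -> {
        if (exception != null) {
            exception.printStackTrace();
        } else {
            // the partition stays constant for a whole batch, then jumps to a new one
            System.out.println("record " + metadata.offset() + " landed on partition " + metadata.partition());
        }
    });
}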

The biggest advantage of this mode is that it keeps each batch as full as possible while avoiding the premature allocation of many nearly empty batches. Under the default partitioning mode, a sequence of keyless messages is always scattered across all partitions, so each batch stays small and the client ends up issuing far more requests; sticky partitioning reduces the request rate and lowers the overall send latency. The two charts below show the latency comparison from the official benchmark:
