Kafka on OpenShift 4 (1): Deploying the Strimzi Operator to Run Kafka Applications

About Strimzi

Strimzi is currently a CNCF sandbox-level project that uses an Operator to run and maintain the Apache Kafka ecosystem on Kubernetes; it is mainly developed and maintained by Red Hat. We can install the Strimzi Operator from OperatorHub to quickly deploy a complete Kafka cluster environment, including a ZooKeeper cluster, a Kafka cluster, Operators that manage Users and Topics, the Kafka Bridge that exposes Kafka to HTTP clients, Kafka Connect for integrating external event sources, Kafka MirrorMaker for replicating messages across data centers, and other resources.

Scenario

This article deploys the Strimzi Operator on OpenShift, uses it to run a Kafka cluster, and then passes messages between a producer and a consumer through a Topic.
Environment: OpenShift 4.2 / OpenShift 4.3

Installing the Strimzi Operator

  1. Create the kafka project.
$ oc new-project kafka
  2. Install the Strimzi Operator into the kafka project with the default configuration. After the install succeeds, Strimzi appears under Installed Operators, and the following running Pod and API resources are visible in the project.
$ oc get pod -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          99s

$ oc api-resources --api-group='kafka.strimzi.io'
NAME                 SHORTNAMES   APIGROUP           NAMESPACED   KIND
kafkabridges         kb           kafka.strimzi.io   true         KafkaBridge
kafkaconnectors      kctr         kafka.strimzi.io   true         KafkaConnector
kafkaconnects        kc           kafka.strimzi.io   true         KafkaConnect
kafkaconnects2is     kcs2i        kafka.strimzi.io   true         KafkaConnectS2I
kafkamirrormaker2s   kmm2         kafka.strimzi.io   true         KafkaMirrorMaker2
kafkamirrormakers    kmm          kafka.strimzi.io   true         KafkaMirrorMaker
kafkas               k            kafka.strimzi.io   true         Kafka
kafkatopics          kt           kafka.strimzi.io   true         KafkaTopic
kafkausers           ku           kafka.strimzi.io   true         KafkaUser
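As an extra check from the CLI, you can confirm that the Operator install completed. This is a sketch: the exact ClusterServiceVersion and Deployment names carry the installed Strimzi version (e.g. v0.17.0), so adjust for your environment.

```shell
# The Strimzi ClusterServiceVersion should show PHASE "Succeeded"
# once the Operator Lifecycle Manager finishes the install.
oc get csv -n kafka

# The operator Deployment should report its replica as available.
oc get deployment -n kafka
```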

Creating a Kafka Cluster

  1. Create a kafka-broker-my-cluster.yaml file with the following content:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.4.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.4"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
  2. Run the command to create the Kafka cluster.
$ oc -n kafka apply -f kafka-broker-my-cluster.yaml
kafka.kafka.strimzi.io/my-cluster created
 
$ oc get Kafka
NAME         DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
my-cluster   1                        1
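Cluster startup can take a few minutes while the persistent volume claims are bound and the ZooKeeper and Kafka Pods start. One way to block until the cluster is up is to wait on the Ready condition that Strimzi sets on the Kafka resource (a sketch; the timeout value is an arbitrary choice):

```shell
# Wait until the Kafka custom resource reports status condition Ready=True
oc -n kafka wait kafka/my-cluster --for=condition=Ready --timeout=300s
```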
  3. (Optional) Steps 1-2 above can also be performed in the OpenShift Console by creating a Kafka Cluster with the default configuration from the Strimzi Operator page.
  4. Check the Pods of the Kafka cluster named my-cluster; each component of the cluster runs in its own Pods.
$ oc get pods -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
my-cluster-entity-operator-6f676b98cd-vldgw        3/3     Running   0          5m16s
my-cluster-kafka-0                                 2/2     Running   1          6m23s
my-cluster-zookeeper-0                             2/2     Running   0          7m28s
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          12m

Creating a Kafka Topic

  1. Create the following kafka-topic-my-topic.yaml file, which defines a KafkaTopic object named my-topic.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 1
  2. Run the command to create the Kafka Topic, then list the kafkatopics resources.
$ oc apply -f kafka-topic-my-topic.yaml
kafkatopic.kafka.strimzi.io/my-topic created

$ oc get kafkatopics
NAME       PARTITIONS   REPLICATION FACTOR
my-topic   10           1
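To cross-check that the Topic Operator actually created the topic inside the broker, you can run kafka-topics.sh from the broker Pod itself (a sketch; the relative bin/ path follows the Strimzi Kafka image layout):

```shell
# Describe the topic directly against the broker; it should show
# 10 partitions and replication factor 1, matching the KafkaTopic spec.
oc -n kafka exec my-cluster-kafka-0 -c kafka -- \
  bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic my-topic
```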

Testing and Verification

Below we use Kafka clients running in containers to verify the setup.

  1. Run the producer in the first terminal, then type in some strings.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-producer -ti \
 --image=strimzi/kafka:0.15.0-kafka-2.3.1 \
 --rm=true --restart=Never \
 -- bin/kafka-console-producer.sh \
 --broker-list $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 \
 --topic $KAFKA_TOPIC
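If you only want to fire a few test messages without an interactive prompt, the same image can be run with piped stdin (a sketch under the same defaults as above; the pod name kafka-producer-once is arbitrary, and -t is dropped because stdin is not a TTY):

```shell
# Pipe three messages into the console producer, then exit and clean up the pod
printf 'msg-1\nmsg-2\nmsg-3\n' | oc -n kafka run kafka-producer-once -i \
  --image=strimzi/kafka:0.15.0-kafka-2.3.1 \
  --rm=true --restart=Never \
  -- bin/kafka-console-producer.sh \
  --broker-list my-cluster-kafka-bootstrap:9092 \
  --topic my-topic
```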
  2. Run the consumer in the second terminal and confirm it receives the strings sent by the producer.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-consumer -ti \
    --image=strimzi/kafka:0.15.0-kafka-2.3.1 \
    --rm=true --restart=Never \
    -- bin/kafka-console-consumer.sh \
    --bootstrap-server $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 \
    --topic $KAFKA_TOPIC --from-beginning
  3. Check the running Pods in a third terminal.
$ oc get pod -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
kafka-consumer                                     1/1     Running   0          8m8s
kafka-producer                                     1/1     Running   0          47m
my-cluster-entity-operator-6f676b98cd-vldgw        3/3     Running   0          72m
my-cluster-kafka-0                                 2/2     Running   1          73m
my-cluster-zookeeper-0                             2/2     Running   0          74m
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          79m
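When the test is done, the environment can be torn down in reverse order (a sketch, assuming the resources were created exactly as above; deleting the Kafka resource removes the broker, ZooKeeper, and entity-operator Pods, while the PVCs survive because deleteClaim is false):

```shell
oc -n kafka delete kafkatopic my-topic   # remove the topic
oc -n kafka delete kafka my-cluster      # remove the whole Kafka cluster
oc delete project kafka                  # finally remove the project
```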

References

  • https://strimzi.io/docs/overview/latest
  • https://www.github.com/redhat-developer-demos/knative-tutorial