Dockerizing Kafka + Zookeeper + Storm

A project of mine needed Kafka, Zookeeper, Storm, and related service components packaged as Docker images. After studying several open-source Dockerfiles I got everything working, and this post records the result.

1. First, build a base Linux image, customized for this project.

FROM centos
MAINTAINER [email protected]
COPY jq /usr/bin/
RUN yum update -y && \
yum install wget -y && \
yum install openssh -y && \
yum install openssh-server -y && \
yum install vim -y && \
yum install zip unzip -y && \
yum install openssh-clients -y && \
yum groupinstall "Development Tools" -y && \
yum install python-setuptools -y && \
easy_install supervisor && \
echo 'root:root' | chpasswd && \
wget https://zcc2018.oss-cn-beijing.aliyuncs.com/jdk-8u171-linux-x64.tar.gz &&\
tar -zxvf jdk-8u171-linux-x64.tar.gz -C /opt/ &&\
JAVA_HOME=/opt/jdk1.8.0_171 && \
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib && \
PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin && \
echo "export JAVA_HOME=$JAVA_HOME">>/etc/profile && \
echo "export CLASSPATH=$CLASSPATH">>/etc/profile && \
echo "export PATH=$PATH">>/etc/profile && \
ssh-keygen -A && \
chmod +x /usr/bin/jq && \
mkdir /var/run/sshd  && \
mkdir /var/log/supervisor -p && \
sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
ENV JAVA_HOME /opt/jdk1.8.0_171
ENV PATH $PATH:$JAVA_HOME/bin
ADD supervisord.conf /etc/supervisor/supervisord.conf
EXPOSE 22

· The jq binary must be downloaded separately and placed in the same directory as the Dockerfile.
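The supervisord.conf added into the base image is not listed above; a minimal sketch of what it might contain (the sshd program entry is an assumption, chosen because the image installs openssh-server and exposes port 22):

```ini
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:sshd]
command=/usr/sbin/sshd -D
autorestart=true
```

Running supervisord with nodaemon=true keeps it in the foreground, which is what a container's main process needs.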

2. Install the Kafka environment [based on the GitHub project -> https://github.com/wurstmeister/kafka-docker]

FROM centos
ARG kafka_version=1.0.0
ARG scala_version=2.12
ARG glibc_version=2.27-r0
ENV KAFKA_VERSION=$kafka_version \
    SCALA_VERSION=$scala_version \
    KAFKA_HOME=/opt/kafka \
    GLIBC_VERSION=$glibc_version
ENV PATH=${PATH}:${KAFKA_HOME}/bin
COPY download-kafka.sh start-kafka.sh broker-list.sh create-topics.sh /tmp/
RUN  chmod a+x /tmp/*.sh \
 && mv /tmp/start-kafka.sh /tmp/broker-list.sh /tmp/create-topics.sh /usr/bin \
 && sync && /tmp/download-kafka.sh \
 && tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt \
 && rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
 && ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} /opt/kafka \
 && rm /tmp/* -rf
VOLUME ["/kafka"]
CMD ["start-kafka.sh"]
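The download-kafka.sh helper invoked above is not listed; a minimal sketch of what it does, assuming the Apache archive URL layout (the upstream wurstmeister script instead picks a live mirror via Apache's closer.cgi):

```shell
# Hypothetical sketch of download-kafka.sh: derive the tarball URL from
# the SCALA_VERSION/KAFKA_VERSION build args and fetch it into /tmp.
kafka_url() {
  # $1 = scala version, $2 = kafka version
  echo "https://archive.apache.org/dist/kafka/$2/kafka_$1-$2.tgz"
}

download_kafka() {
  url="$(kafka_url "${SCALA_VERSION}" "${KAFKA_VERSION}")"
  wget -q "${url}" -O "/tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"
}
```

The Dockerfile then untars the result into /opt and symlinks it to /opt/kafka, so KAFKA_HOME stays stable across version bumps.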

3. Zookeeper -> [based on the project -> https://github.com/wurstmeister/zookeeper-docker]

FROM centos
ENV ZOOKEEPER_VERSION 3.4.12
RUN wget -q http://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz && \
wget -q https://www.apache.org/dist/zookeeper/KEYS && \
wget -q https://www.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc && \
wget -q https://www.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.md5
RUN md5sum -c zookeeper-${ZOOKEEPER_VERSION}.tar.gz.md5 && \
gpg --import KEYS && \
gpg --verify zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc
RUN tar -xzf zookeeper-${ZOOKEEPER_VERSION}.tar.gz -C /opt
RUN cp /opt/zookeeper-${ZOOKEEPER_VERSION}/conf/zoo_sample.cfg /opt/zookeeper-${ZOOKEEPER_VERSION}/conf/zoo.cfg
ENV JAVA_HOME $JAVA_HOME
ENV ZK_HOME /opt/zookeeper-${ZOOKEEPER_VERSION}
RUN sed  -i "s|/tmp/zookeeper|$ZK_HOME/data|g" $ZK_HOME/conf/zoo.cfg; mkdir $ZK_HOME/data
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper-${ZOOKEEPER_VERSION}
VOLUME ["/opt/zookeeper-${ZOOKEEPER_VERSION}/conf", "/opt/zookeeper-${ZOOKEEPER_VERSION}/data"]
CMD /usr/sbin/sshd && sed -i -r 's|#(log4j.appender.ROLLINGFILE.MaxBackupIndex.*)|\1|g' $ZK_HOME/conf/log4j.properties && \
sed -i -r 's|#autopurge|autopurge|g' $ZK_HOME/conf/zoo.cfg && \
/opt/zookeeper-${ZOOKEEPER_VERSION}/bin/zkServer.sh start-foreground 
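The two sed edits in this Dockerfile rewrite zoo.cfg so that dataDir lives under ZK_HOME and the autopurge settings are enabled. A standalone sketch of the same edits, run against a sample config (the /tmp/zk-demo path is just for demonstration):

```shell
# Reproduce the zoo.cfg edits from the Dockerfile on a sample file.
ZK_HOME=${ZK_HOME:-/tmp/zk-demo}
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data"

# A minimal stand-in for the stock zoo_sample.cfg.
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
dataDir=/tmp/zookeeper
clientPort=2181
#autopurge.snapRetainCount=3
#autopurge.purgeInterval=1
EOF

# Point dataDir at $ZK_HOME/data instead of /tmp/zookeeper.
sed -i "s|/tmp/zookeeper|$ZK_HOME/data|g" "$ZK_HOME/conf/zoo.cfg"
# Uncomment the autopurge lines so old snapshots get cleaned up.
sed -i -r 's|#autopurge|autopurge|g' "$ZK_HOME/conf/zoo.cfg"
```

Without autopurge enabled, Zookeeper's transaction logs and snapshots grow without bound inside the container's data volume.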

4. Storm -> [based on the project -> https://github.com/wurstmeister/storm-docker]

FROM centos
RUN wget -q -O - https://zcc2018.oss-cn-beijing.aliyuncs.com/apache-storm-1.2.1.tar.gz | tar -xzf - -C /opt
ENV STORM_HOME /opt/apache-storm-1.2.1
RUN groupadd storm; useradd --gid storm --home-dir /home/storm --create-home --shell /bin/bash storm; chown -R storm:storm $STORM_HOME; mkdir /var/log/storm ; chown -R storm:storm /var/log/storm
RUN ln -s $STORM_HOME/bin/storm /usr/bin/storm
ADD storm.yaml $STORM_HOME/conf/storm.yaml
ADD cluster.xml $STORM_HOME/logback/cluster.xml
ADD config-supervisord.sh /usr/bin/config-supervisord.sh
ADD start-supervisor.sh /usr/bin/start-supervisor.sh 
RUN chmod a+x /usr/bin/config-supervisord.sh && \
chmod a+x /usr/bin/start-supervisor.sh 
RUN mkdir /etc/supervisor/conf.d/ -p
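The config-supervisord.sh script added above generates per-daemon entries under /etc/supervisor/conf.d/ so that supervisord keeps the Storm daemon running. A hypothetical sketch (the exact upstream script differs; the program options here are assumptions):

```shell
# Hypothetical sketch of config-supervisord.sh: write a supervisord
# program entry for a given Storm daemon (nimbus, supervisor, ui, ...).
config_supervisord() {
  daemon="$1"
  out="${CONF_DIR:-/etc/supervisor/conf.d}"
  cat > "${out}/${daemon}.conf" <<EOF
[program:${daemon}]
command=storm ${daemon}
user=storm
autorestart=true
stdout_logfile=/var/log/storm/${daemon}.log
EOF
}
```

With one image per role, start-supervisor.sh can call this once (e.g. `config_supervisord nimbus`) and then exec supervisord in the foreground.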

5. docker-compose.yml [for reference only]

version: '3'
services:
    #zookeeper
    zookeeper:
        image: zookeeper
        ports:
            - "2181:2181"
            - "10122:22"
        networks:
            - analyser_cluster
    #kafka
    kafka:
        image: kafka
        ports:
            - "9092:9092"
        environment:
            KAFKA_ADVERTISED_HOST_NAME: "kafka"
            KAFKA_ADVERTISED_PORT: "9092"
            KAFKA_ZOOKEEPER_CONNECT: "zk:2181"
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        links:
            - zookeeper:zk
        depends_on:
            - zookeeper
        networks:
            - analyser_cluster
    #storm -> nimbus
    nimbus:
        image: storm_nimbus
        ports:
            - "13773:3773"
            - "13772:3772"
            - "16627:6627"
            - "10322:22"
        links:
            - zookeeper:zk
        depends_on:
            - zookeeper
        networks:
            - analyser_cluster
    #storm -> supervisor
    supervisor:
        image: storm_supervisor
        ports:
            - "8000:8000"
            - "10422:22"
        links:
            - nimbus:nimbus
            - zookeeper:zk
        depends_on:
            - zookeeper
        networks:
            - analyser_cluster
    #storm -> ui
    ui:
        image: storm_ui
        ports:
            - "18080:8080"
            - "10522:22"
        links:
            - nimbus:nimbus
            - zookeeper:zk
        depends_on:
            - zookeeper
        networks:
            - analyser_cluster
networks:
    analyser_cluster:

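The kafka service can also pre-create topics at startup: the wurstmeister image reads a KAFKA_CREATE_TOPICS environment variable in the form "name:partitions:replicas,...", which the create-topics.sh helper from step 2 parses. A sketch of that parsing, printing the parsed specs instead of calling kafka-topics.sh:

```shell
# Sketch of how create-topics.sh splits a KAFKA_CREATE_TOPICS spec
# like "logs:2:1,events:4:3" into per-topic create commands.
create_topics() {
  echo "$1" | tr ',' '\n' | while IFS=':' read -r topic partitions replicas; do
    # The real script would run:
    #   kafka-topics.sh --create --topic "$topic" \
    #     --partitions "$partitions" --replication-factor "$replicas" ...
    echo "create topic=$topic partitions=$partitions replicas=$replicas"
  done
}
```

To use it, add e.g. `KAFKA_CREATE_TOPICS: "logs:2:1"` under the kafka service's environment section.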
The work mainly consisted of customizing the open-source projects for this project's needs, swapping some package sources for mirrors in China, and fixing a few spots that would not run inside Docker.
The full project also wires an ELK cluster, Redis, and MySQL into the Kafka/Storm pipeline described above, but time is limited, so that part will have to wait for a follow-up post.
