A Complete Kubernetes Logging Solution: fluentd + Elasticsearch + Kibana. Online guides are full of pitfalls; after three full days of digging, I hope this one article solves all the problems.

1. Environment Preparation

Kubernetes environment:
master: 10.XX.XX.XX; nodes: 10.XX.XX.XX, 10.XX.XX.XX, 10.XX.XX.XX
Externally published address: 10.41.10.60
Ceph environment:
manage: 10.41.10.81; nodes: 10.41.10.XX, XX, XX
Docker image registry:
10.41.10.81
All of the above environments should already be in place before following this article.

2. Downloading the Docker Images

This step is where most online guides fall apart. They mostly point at gcr.io/* images at very old versions, such as gcr.io/google_containers/elasticsearch:v2.4.1 or gcr.io/google_containers/fluentd-elasticsearch:1.22, and since gcr.io is blocked, some articles suggest docker.io/bigwhite/ mirrors instead. In my testing none of those images worked. Perhaps I simply never found the right way to use them, but let's leave that behind.
Here is the approach that worked for me:

# Download the images on the image registry server
docker pull openfirmware/fluentd-elasticsearch
docker tag openfirmware/fluentd-elasticsearch 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
docker pull docker.io/elasticsearch:6.8.8
docker pull docker.io/kibana:6.8.8
docker tag docker.io/kibana:6.8.8 10.41.10.81:5000/kibana
docker tag docker.io/elasticsearch:6.8.8 10.41.10.81:5000/elasticsearch
docker push 10.41.10.81:5000/elasticsearch
docker push 10.41.10.81:5000/kibana
docker push 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
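Note the tagging pattern above: docker tag only rewrites the repository prefix so the image points at the private registry. A self-contained sketch of that naming convention (REGISTRY_HOST/NAME[:TAG]); the variable names here are illustrative, the registry address is the one from this article:

```shell
# NAME is derived from the source reference by stripping its registry prefix.
REGISTRY=10.41.10.81:5000
SRC=docker.io/elasticsearch:6.8.8
NAME=${SRC#docker.io/}        # -> elasticsearch:6.8.8
# ${NAME%%:*} drops the :6.8.8 suffix, matching the untagged form used above.
echo "docker tag ${SRC} ${REGISTRY}/${NAME%%:*}"
```

Since no explicit tag is given, the images end up published as :latest in the private registry.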

3. Deploying fluentd

Log in to the k8s master and run the following commands step by step.

# First, create the configuration file used by the fluentd instance on every node.
# We use Kubernetes' powerful ConfigMap feature for this.
cat <<eof >fluentd-configmap.yaml
apiVersion: v1
data:
  td-agent.conf: |
    #<source>
    #  type tail
    #  format json
    #  path /var/log/*.log
    #  pos_file /var/log/log.pos
    #  tag var.log
    #</source>
    <source>
      type tail
      format none
      path /var/log/containers/*.log
      pos_file /var/log/containers.pos
      tag containers_log
    </source>
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      hosts 10.105.206.227:9200
      logstash_format true
      index_name k8s_
      buffer_chunk_limit 2M
      buffer_queue_limit 32
      flush_interval 5s
      max_retry_wait 30
      disable_retry_limit
    </match>
kind: ConfigMap
metadata:
  name: fluentd-config
eof
kubectl apply -f fluentd-configmap.yaml

The hosts 10.105.206.227:9200 line configured above is the Elasticsearch backend address; we will come back and change it later, once the Elasticsearch Service exists.
Next, we define a DaemonSet so that every node automatically runs a fluentd pod.

cat <<eof >fluentd-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd-ds
        image: 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
        env:
        - name: restartenv
          value: "4"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
        - name: td-agent-config
          mountPath: /etc/td-agent
        - name: docker-volume
          mountPath: /var/docker/lib/containers
      serviceAccountName: fluentd-admin
      serviceAccount: fluentd-admin
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
      - name: td-agent-config
        configMap:
          name: fluentd-config
      - name: docker-volume
        hostPath:
          path: /var/docker/lib/containers # This line is essential. Without it, fluentd discovers the logs under /var/log/containers but cannot read them, because they are symlinks pointing into /var/docker/lib/containers. This one cost me a long time!!!
eof
kubectl apply -f fluentd-ds.yaml  # apply the DaemonSet
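The docker-volume mount in the manifest above matters because of how container logs are laid out on the node: /var/log/containers holds only symlinks into the docker data directory. A self-contained sketch of the situation (the /tmp paths are illustrative, not the real node layout):

```shell
# Recreate the layout in a scratch directory: the real log file lives in
# the docker data dir, and the log directory only holds a symlink to it.
mkdir -p /tmp/logdemo/docker/containers /tmp/logdemo/log/containers
echo 'hello from the container' > /tmp/logdemo/docker/containers/abc123.log
ln -sf /tmp/logdemo/docker/containers/abc123.log \
       /tmp/logdemo/log/containers/mypod.log

# Reading through the symlink only works if the target directory is also
# visible; inside the fluentd pod that means mounting BOTH host paths.
readlink /tmp/logdemo/log/containers/mypod.log
cat /tmp/logdemo/log/containers/mypod.log
```

If only /var/log were mounted, the cat above would fail with "No such file or directory" inside the container, which is exactly the symptom described.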

At this point all three fluentd pods start up normally (one per node, visible with kubectl get pods -o wide).

4. Deploying Elasticsearch

First we add a persistent volume and a matching claim, so the data survives pod restarts.

cat<<eof > fluentd-elasticsearch-pv-pvc.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fluentd-elasticsearch-pv
spec:
  capacity:
    storage: 80Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.41.10.81:6789
      - 10.41.10.82:6789
      - 10.41.10.83:6789
    path: /fluentd_elasticsearch_data
    user: admin
    readOnly: false
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fluentd-elasticsearch-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeName: fluentd-elasticsearch-pv
  resources:
    requests:
      storage: 80Gi
eof
kubectl apply -f fluentd-elasticsearch-pv-pvc.yaml

Check the result with kubectl get pv,pvc; the claim should show as Bound.
Then we deploy the Elasticsearch application itself through a ReplicationController.

cat<<eof > elasticsearch-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-rc
spec:
  replicas: 1
  selector:
      node: elasticsearch-rc
  template:
    metadata:
      labels:
        node: elasticsearch-rc
    spec:
      containers:
      - image: 10.41.10.81:5000/elasticsearch 
        name: elasticsearch-rc
        env:
        - name: restart
          value: "2"
        securityContext:
           privileged: true
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: fluentd-elasticsearch-pvc
eof
kubectl apply -f elasticsearch-rc.yaml

Check the result: the elasticsearch pod should reach Running (kubectl get pods).
Next, we need to add a Service that publishes ports 9200 and 9300 inside the cluster, so the fluentd and kibana applications can connect to Elasticsearch.

cat<<eof > elasticsearch-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-svc
spec:
  ports:
  - port: 9200
    name: db
    targetPort: 9200
  - port: 9300
    name: transport
    targetPort: 9300
  selector:
    node: elasticsearch-rc
eof
kubectl apply -f elasticsearch-svc.yaml

Check the result with kubectl get svc elasticsearch-svc and note the ClusterIP.
Now we need to go back and adjust the fluentd ConfigMap we created earlier, because fluentd has to reach this Elasticsearch Service.
Simply edit the hosts 10.105.206.227:9200 line in fluentd-configmap.yaml, replacing the IP with the ClusterIP shown here, and re-apply the file.
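The edit can also be scripted. A hedged sketch, assuming the Service is named elasticsearch-svc as above; the kubectl lines are shown as comments since they need a live cluster, and the sed runs on a stand-in file here:

```shell
# Look up the Service ClusterIP (needs a live cluster):
#   ES_IP=$(kubectl get svc elasticsearch-svc -o jsonpath='{.spec.clusterIP}')
ES_IP=10.105.206.227   # placeholder value for this sketch

# Demonstrate the substitution on a stand-in copy of the relevant line; on
# the real file you would run the same sed against fluentd-configmap.yaml
# and then re-apply it with: kubectl apply -f fluentd-configmap.yaml
echo '      hosts 1.2.3.4:9200' > /tmp/hosts-line.conf
sed -i -E "s/hosts [0-9.:]+/hosts ${ES_IP}:9200/" /tmp/hosts-line.conf
cat /tmp/hosts-line.conf
```

Remember that fluentd only re-reads its configuration on restart, hence the restartenv trick in the DaemonSet manifest above.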

5. Deploying Kibana

As before, we start with a ConfigMap, so that Kibana's configuration file is managed in one place.

cat<<eof > kibana.yml 
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601

# The host to bind the server to.
server.host: "0.0.0.0"

# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""

# The maximum payload size in bytes on incoming server requests.
# server.maxPayloadBytes: 1048576

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://10.105.206.227:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 30000

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers.
# elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000

# Set the path to where you would like the process id file to be created.
# pid.file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# logging.dest: stdout

# Set this to true to suppress all logging output.
# logging.silent: false

# Set this to true to suppress all logging output except for error messages.
# logging.quiet: false

# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: false
eof

Above is the stock configuration file from the official image; I only changed the port, the Elasticsearch address, and the listen address.
Now generate a ConfigMap from it:

kubectl create configmap kibana-config --from-file=kibana.yml
kubectl get configmap

Next, I configure the ReplicationController for the Kibana application:

cat<<eof > kibana-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: kibana-rc
spec:
  replicas: 1
  selector:
      node: kibana
  template:
    metadata:
      labels:
        node: kibana
    spec:
      containers:
      - name: kibana-rc
        image: 10.41.10.81:5000/kibana
        env:
        - name: re
          value: "1"
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
        ports:
        - containerPort: 5601
      volumes:
      - name: config
        configMap:
          name: kibana-config
eof
kubectl apply -f kibana-rc.yaml 

Continuing, we add a Service so that Kibana can be reached from outside the cluster:

cat<<eof > kibana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
spec:
  ports:
  - port: 5601
    targetPort: 5601
  selector:
    node: kibana
  type: NodePort
  externalIPs:
  - 10.41.10.60
eof
kubectl apply -f kibana-svc.yaml

That completes the setup. Open http://10.41.10.60:5601 in a browser and you should see the Kibana interface.
Appendix:
One problem I ran into was Kibana hanging at "server is not ready yet" on startup. My fix was to delete the .kibana* indices and restart Kibana:

curl -XDELETE 'http://10.105.206.227:9200/.kibana*'
kubectl delete rc kibana-rc      # stop kibana
kubectl apply -f kibana-rc.yaml  # start it again