In the ELK environment built earlier, the log pipeline was: filebeat --> logstash --> elasticsearch. As business volume grows, the architecture needs to scale further, so a Kafka cluster is introduced. The log pipeline becomes: filebeat --> kafka --> logstash --> elasticsearch.
The architecture diagram is shown below:

Installation steps are skipped here; consult the official documentation if needed.

This post focuses on how to ship RabbitMQ's logs into Elasticsearch and present the data in Kibana.
First, go to the RabbitMQ server and confirm the log location: /var/log/rabbitmq.
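The Filebeat input below collects files matching the glob `rabbit*.log` in that directory. A quick Python sketch of which filenames such a pattern picks up (the filenames here are illustrative, not taken from a real server):

```python
# Hypothetical filenames, to illustrate what the Filebeat glob
# rabbit*.log would match inside /var/log/rabbitmq.
import fnmatch

filenames = [
    "rabbit@node1.log",       # main broker log  -> matched
    "rabbit@node1-sasl.log",  # SASL log         -> matched
    "startup_log",            # no .log suffix   -> skipped
]
matched = [f for f in filenames if fnmatch.fnmatch(f, "rabbit*.log")]
print(matched)  # ['rabbit@node1.log', 'rabbit@node1-sasl.log']
```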
Edit the Filebeat configuration file:
```yaml
- type: log
  enabled: true
  paths:
    - /var/log/rabbitmq/rabbit*.log
  fields:
    log_topic: rabbitmq-log
  # For multiple topics, duplicate this section and change the topic name.

#------------------------------ Kafka output ---------------------------------
output.kafka:
  enabled: true
  hosts: ["10.11.10.9:9092", "10.11.10.70:9092", "10.11.10.1:9092"]
  #topic: userinfo
  topics:
    - topic: "rabbitmq-log"
      when.regexp:
        fields.log_topic: "rabbitmq-log"
  # For multiple topics, duplicate this section and change the topic name.
  partition.round_robin:
    reachable_only: false
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
```
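The `topics` list above routes each event to a Kafka topic by testing `fields.log_topic` against a `when.regexp` condition. A minimal Python sketch of that routing logic (the rule table, event dict, and function name are illustrative, not Filebeat's actual internals):

```python
import re

# Illustrative stand-ins for the `topics` rules in the config above:
# each rule pairs a Kafka topic with a regex tested against fields.log_topic.
TOPIC_RULES = [
    ("rabbitmq-log", re.compile("rabbitmq-log")),
    # add one (topic, regex) pair per additional log_topic
]

def pick_topic(event, default=None):
    """Return the Kafka topic whose when.regexp matches the event's log_topic."""
    log_topic = event.get("fields", {}).get("log_topic", "")
    for topic, pattern in TOPIC_RULES:
        if pattern.search(log_topic):
            return topic
    return default

event = {"message": "accepting AMQP connection",
         "fields": {"log_topic": "rabbitmq-log"}}
print(pick_topic(event))  # rabbitmq-log
```

Events whose `log_topic` matches no rule fall through to the default, which is why each new log source needs its own `(topic, regex)` pair.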
Filebeat is now configured.
Log in to the Logstash server and edit the Logstash config file:
```
input {
  kafka {
    # bootstrap_servers takes a single comma-separated string
    bootstrap_servers => "10.11.10.9:9092,10.11.10.70:9092,10.11.10.1:9092"
    group_id => "rabbitmq-log"
    topics => ["rabbitmq-log"]
    codec => "json"   # parse the Filebeat JSON so [fields][log_topic] is available
    type => "info"
  }
}

filter {
  if [message] == "" {
    drop {}
  }
}

output {
  if [fields][log_topic] == "rabbitmq-log" {
    elasticsearch {
      # Elasticsearch listens on 9200, not the Kafka port
      hosts => ["10.11.10.9:9200", "10.11.10.70:9200", "10.11.10.1:9200"]
      index => "rabbitmq-log-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
```
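The pipeline above does three things: drops events with an empty message, routes events whose `fields.log_topic` is `rabbitmq-log` to Elasticsearch, and builds a dated index name from `%{+YYYY.MM.dd}`. A rough Python sketch of that logic (the function and event shapes are illustrative; the index prefix is passed in to match whatever index name the config uses):

```python
from datetime import date

def process(event, prefix, today=None):
    """Sketch of the Logstash pipeline: drop empty messages, route by
    log_topic, and build a dated index name like Logstash's %{+YYYY.MM.dd}."""
    if event.get("message", "") == "":
        return None  # filter { if [message] == "" { drop {} } }
    if event.get("fields", {}).get("log_topic") != "rabbitmq-log":
        return None  # fails the output's if-condition
    day = today or date.today()
    return {"index": f"{prefix}-{day:%Y.%m.%d}", "doc": event}

out = process({"message": "accepting AMQP connection",
               "fields": {"log_topic": "rabbitmq-log"}},
              prefix="rabbitmq-log", today=date(2019, 8, 1))
print(out["index"])  # rabbitmq-log-2019.08.01
```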
Restart Logstash:
```shell
nohup /usr/share/logstash/bin/logstash -f /etc/logstash2/conf.d/logstash.conf >> /var/log/logstash2-stdout.log 2>> /var/log/logstash2-stderr.log &
```
Open a browser and log in to Kibana.

Add an index pattern for the index that was just created.

Go back to Discover, and the data should now be visible.