I. Introduction and References
1. Related links
2. Division of responsibilities
filebeat: collects logs and ships them for aggregation
kafka: buffers log traffic, smoothing out peaks in load
logstash: structures the log messages and transforms fields into the appropriate types
elasticsearch: stores the log data and serves queries
kibana: presents the logs through a UI
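Putting the components together, logs flow through the pipeline in one direction, as sketched below (the topic name is the one used later in this guide):
application logs -> filebeat -> kafka (topic elk_kafka_test) -> logstash -> elasticsearch -> kibana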
II. Kafka Installation and Deployment
1. Pull the images
docker pull zookeeper:latest
docker pull wurstmeister/kafka:latest
2. Start the containers
docker run -d --name zookeeper --publish 2181:2181 --volume /etc/localtime:/etc/localtime zookeeper:latest
docker run -d --name kafka --publish 9092:9092 \
--link zookeeper \
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=<IP of the host running Kafka> \
--env KAFKA_ADVERTISED_PORT=9092 \
--volume /etc/localtime:/etc/localtime \
wurstmeister/kafka:latest
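Before moving on, it is worth confirming that both containers are up; the commands below are plain docker CLI checks, nothing specific to this setup:
docker ps --filter name=zookeeper --filter name=kafka
docker logs kafka | tail -n 20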
3. Test
- Enter the container
docker exec -it kafka /bin/bash
# change to the bin directory
cd /opt/kafka_2.12-2.3.0/bin/
- Create a topic
./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic elk_kafka
- Run a producer against the topic
./kafka-console-producer.sh --broker-list localhost:9092 --topic elk_kafka
- In a new window, run a consumer on the same topic
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic elk_kafka --from-beginning
Type a message in the producer window; if it appears in the consumer window, Kafka is working.
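The same smoke test can be scripted non-interactively (same container, broker and topic as above); the consumer exits after reading one message:
echo "hello elk" | ./kafka-console-producer.sh --broker-list localhost:9092 --topic elk_kafka
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic elk_kafka --from-beginning --max-messages 1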
III. ELK Installation and Deployment
1. Raise the mmap count limit to at least 262144
# append the following line at the end of /etc/sysctl.conf
vm.max_map_count=655360
# then apply it with:
sysctl -p
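You can confirm the new limit is active before starting the container:
sysctl vm.max_map_count
# expected: vm.max_map_count = 655360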
2. Pull and run the image
docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 --name elk -d sebp/elk:651
3. Copy the config files out to the host
# copy the elasticsearch config out of the container
mkdir -p /opt/elk/elasticsearch/conf
docker cp elk:/etc/elasticsearch/elasticsearch.yml /opt/elk/elasticsearch/conf
# copy the logstash config out of the container
mkdir -p /opt/elk/logstash/conf
docker cp elk:/etc/logstash/conf.d/. /opt/elk/logstash/conf/
4. Edit the ES config file elasticsearch.yml
# set cluster.name to a name of your own choosing
cluster.name: my-es
# append the following three parameters at the end
thread_pool.bulk.queue_size: 1000
http.cors.enabled: true
http.cors.allow-origin: "*"
5. Prepare the logstash patterns file
mkdir -p /opt/elk/logstash/patterns
touch /opt/elk/logstash/patterns/java.patterns
vim /opt/elk/logstash/patterns/java.patterns
The file content is as follows:
MYAPPNAME ([0-9a-zA-Z_-]*)
MYTHREADNAME ([0-9a-zA-Z._-]|\(|\)|\s)*
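For illustration only (these sample values are made up), MYAPPNAME captures trace-id-like tokens of letters, digits, underscores and hyphens, while MYTHREADNAME additionally allows dots, parentheses and whitespace:
# MYAPPNAME matches e.g.: a1b2c3d4 or order-service_01
# MYTHREADNAME matches e.g.: http-nio-8080-exec-1 or pool-2 (worker)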
6. Edit 02-beats-input.conf; set kafkaIp and topics to your own values
input {
  kafka {
    bootstrap_servers => ["kafkaIp:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    group_id => "elk"
    topics => ["elk_kafka_test"]
    type => "bhy"
    codec => json {
      charset => "UTF-8"
    }
  }
}
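Before restarting anything, Logstash can validate the pipeline config with -t (--config.test_and_exit). The binary path below assumes the sebp/elk image layout, where Logstash lives under /opt/logstash; adjust it if your image differs:
docker exec -it elk /opt/logstash/bin/logstash -t -f /etc/logstash/conf.d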
7. Edit the log-structuring configuration in 10-syslog.conf
filter {
  if [fields][docType] == "sys-log" {
    grok {
      # patterns_dir must be the path inside the container; start.sh below mounts
      # the host's /opt/elk/logstash/patterns at /opt/logstash/patterns
      patterns_dir => ["/opt/logstash/patterns"]
      match => { "message" => "\[%{NOTSPACE:appName}:%{NOTSPACE:serverIp}:%{NOTSPACE:serverPort}\] %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:logLevel} %{WORD:pid} \[%{MYAPPNAME:traceId}\] \[%{MYTHREADNAME:threadName}\] %{NOTSPACE:classname} %{GREEDYDATA:message}" }
      overwrite => ["message"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS Z"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
      locale => "en"
      timezone => "+08:00"
    }
    mutate {
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
}
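For reference, a hypothetical log line that the grok pattern above would parse (the values are invented; your application's log layout must produce lines shaped like this):
[my-app:192.168.1.10:8080] 2019-08-01 12:00:00.123 INFO 1234 [a1b2c3d4] [http-nio-8080-exec-1] com.example.demo.UserController user login ok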
8. Edit 30-output.conf; customize the index name
output {
  stdout {}
  elasticsearch {
    hosts => ["192.168.1.252:9200"]
    index => "test-elk-%{+YYYY.MM.dd}"
  }
}
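Once logs are flowing, you can confirm the daily index is being created; the host and port here are the ones from the elasticsearch output above:
curl 'http://192.168.1.252:9200/_cat/indices?v' | grep test-elk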
9. Create a startup script
- vim /opt/elk/start.sh
#!/bin/bash
docker stop elk
docker rm elk
docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 \
-e LS_HEAP_SIZE="1g" -e ES_JAVA_OPTS="-Xms2g -Xmx2g" \
-v $PWD/elasticsearch/data:/var/lib/elasticsearch \
-v $PWD/elasticsearch/plugins:/opt/elasticsearch/plugins \
-v $PWD/logstash/conf:/etc/logstash/conf.d \
-v $PWD/logstash/patterns:/opt/logstash/patterns \
-v $PWD/elasticsearch/conf/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml \
-v $PWD/elasticsearch/log:/var/log/elasticsearch \
-v $PWD/logstash/log:/var/log/logstash \
--name elk \
-d sebp/elk:651
10. Run the image
chmod +x /opt/elk/start.sh
./start.sh
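The ELK image can take a minute or two to bring all three services up; afterwards, two quick sanity checks against the ports published in start.sh:
curl 'http://localhost:9200/_cluster/health?pretty'
# elasticsearch should answer and report the cluster.name set earlier
curl -I 'http://localhost:5601'
# kibana should answer on port 5601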
IV. Filebeat Installation and Deployment
1. Download Filebeat from the official site
2. Edit the config file filebeat.yml
- Change filebeat.inputs to the following, where paths points at your project's log files
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/log/*.log
  exclude_lines: ['\sDEBUG\s\d']
  exclude_files: ['sc-admin.*.log$']
  fields:
    docType: sys-log
    project: microservices-platform
  multiline:
    pattern: '^\[\S+:\S+:\d{2,}] '
    negate: true
    match: after
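To see what the multiline settings do, consider a hypothetical stack trace: only the first line matches pattern, so with negate: true and match: after the continuation lines are appended to it and shipped as a single event:
[my-app:192.168.1.10:8080] 2019-08-01 12:00:01.456 ERROR 1234 [a1b2c3d4] [main] com.example.demo.UserService lookup failed
java.lang.NullPointerException: null
    at com.example.demo.UserService.findUser(UserService.java:42)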
- Change output.kafka to the following; set kafkaIp and topic to your own values (they should match those in step 6 of Part III)
output.kafka:
  hosts: ["kafkaIp:9092"]
  topic: "elk_kafka_test"
  # the partition strategy must be one of random, round_robin or hash; group_events
  # sets how many events are published to the same partition before the partitioner
  # picks the next one (default 1, i.e. a new partition after every event)
  partition.round_robin:
    reachable_only: false
  # ACK reliability level: 0 = no response, 1 = wait for local commit,
  # -1 = wait for all replicas to commit; the default is 1
  required_acks: 1
  # one of none, snappy, lz4 or gzip
  compression: gzip
  # events larger than this many bytes are dropped
  max_message_bytes: 100000000
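With Filebeat running, the console consumer from Part II can confirm that events are reaching the topic (note the topic here is elk_kafka_test, not the elk_kafka used in the earlier test):
docker exec -it kafka /opt/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic elk_kafka_test --from-beginning --max-messages 1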
3. Start Filebeat
./filebeat -c filebeat.yml -e
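The -e flag keeps Filebeat logging to stderr in the foreground; one simple way to keep it alive after the shell closes (a plain nohup sketch, not the only option):
nohup ./filebeat -c filebeat.yml -e > filebeat.log 2>&1 &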
V. Verification
1. Upload a log file to one of the paths locations configured in filebeat.yml
2. In Kibana's Management tab, create an ES index pattern named after the index configured in 30-output.conf
3. Inspect the logs in Kibana's Discover tab
The Discover page supports filtered queries and per-record detail views; explore it on your own.
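As a hypothetical example of a filtered query in Discover (Kibana 6.x uses Lucene query syntax in the search bar by default; the field names come from the grok pattern in Part III, step 7):
logLevel:ERROR AND appName:my-app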
That completes the ELK + Kafka + Filebeat unified log center!