Collecting nginx error logs with ELK

I. Filebeat collection configuration

1. Install filebeat on the nginx server

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.1-x86_64.rpm
yum localinstall filebeat-6.3.1-x86_64.rpm
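
To confirm that the package installed correctly, you can print the filebeat version (the command below assumes the 6.3.1 RPM installed above):

filebeat version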

2. Configure the filebeat collection file

vim /etc/filebeat/filebeat.yml

logging.level: info
logging.to_files: true
logging.files:
  path: /data/logs/filebeat
  name: filebeat.log
  keepfiles: 7
  permissions: 0644

filebeat.inputs:
- type: log
  enabled: true
  exclude_lines: ['\\x']
  fields:
    log-type: nginx-access-logs
  paths:
    - /data/logs_nginx/*.json.log

- type: log
  enabled: true
  fields:
    log-type: nginx-error-logs
  paths:
    - /data/logs_nginx/error.log

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[fields][log-type]}'
  partition.hash:
    reachable_only: false

  required_acks: 1
  compression: snappy
  max_message_bytes: 1000000
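
Before starting the service, the configuration syntax can be validated with filebeat's built-in test command:

filebeat test config -c /etc/filebeat/filebeat.yml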

3. Start filebeat

systemctl start filebeat
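
Once filebeat is running, you can confirm that events are reaching Kafka by consuming a few messages from the error-log topic (the broker address and topic name follow the configuration above; adjust the path to the Kafka CLI scripts for your installation):

kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic nginx-error-logs --from-beginning --max-messages 5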

II. Configure logstash filter rules and store the results in Elasticsearch

1. Add a grok pattern for nginx error logs

cd /usr/share/logstash/patterns/

vim nginx

NGINX_ERROR_LOG (?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<clientip>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?
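
For reference, this pattern is written against error lines of roughly the following shape (the values here are made up for illustration); it extracts timestamp, severity, pid, errormessage, clientip, server, request, upstream and request_host:

2023/07/01 12:00:00 [error] 1234#0: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.10, server: www.example.com, request: "GET /api/test HTTP/1.1", upstream: "http://127.0.0.1:8080/api/test", host: "www.example.com"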

2. Configure the logstash rules for filtering nginx logs

cd /etc/logstash/conf.d
vim nginx-error.conf
input {
    kafka {
        bootstrap_servers => ["kafka1:9092,kafka2:9092,kafka3:9092"]
        client_id => "nginx-error-logs"
        group_id => "logstash"
        auto_offset_reset => "latest"
        consumer_threads => 10
        decorate_events => true
        topics => ["nginx-error-logs"]
        type => "nginx-error-logs"
        codec => json { charset => "UTF-8" }
    }
}


filter {
  if [fields][log-type] == "nginx-error-logs" {
    grok {
      # load the custom NGINX_ERROR_LOG pattern file added above
      patterns_dir => ["/usr/share/logstash/patterns"]
      match => { "message" => "%{NGINX_ERROR_LOG}" }
    }
    geoip {
      database => "/usr/share/logstash/GeoLite2-City/GeoLite2-City.mmdb"
      source => "clientip"
    }
    date {
      timezone => "Asia/Shanghai"
      match => ["timestamp","yyyy/MM/dd HH:mm:ss"]
    }

  }
}



output {

  if [fields][log-type] == "nginx-error-logs" {
    elasticsearch {
      hosts => ["http://es1:9200","http://es2:9200","http://es3:9200"]
      index => "nginx-error-%{+YYYY.MM.dd}"
    }
  }

}
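
Before restarting, the pipeline can be syntax-checked without processing any data (the paths below assume the standard package layout):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/nginx-error.conf --config.test_and_exit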

3. Restart logstash

systemctl restart logstash
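
After the restart, a new daily index should appear in Elasticsearch once error events come in; a quick way to check, using one of the hosts from the output section, is:

curl -s 'http://es1:9200/_cat/indices/nginx-error-*?v'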