Filebeat Log Collection
I. Software Introduction
Filebeat
1. Overview
Filebeat is a log file shipper. After you install the client on a server, Filebeat monitors the log directories or specific log files you configure, tails them (tracking file changes and reading continuously), and forwards the data to Elasticsearch or Logstash for storage.
2. Workflow
Here is how Filebeat works: when you start Filebeat, it launches one or more prospectors that check the log directories or files you specify. For each log file a prospector finds, Filebeat starts a harvester; each harvester reads the new content of a single log file and sends the new log data to the spooler, which aggregates the events. Finally, Filebeat sends the aggregated data to the destination you configured. (My take: Filebeat is a lightweight Logstash; use it to collect logs when the machines you collect from do not have much spare capacity. In daily use Filebeat has been very stable, and I have never seen it crash.)
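To make that flow concrete, here is a minimal filebeat.yml sketch of the pipeline; the path and the Logstash host below are placeholders, not values from this article:

filebeat.prospectors:
- type: log
  paths:
    - /var/log/myapp/*.log    # the prospector watches this glob; a harvester reads each matching file

output.logstash:
  hosts: ["localhost:5044"]   # spooled events are flushed here in batches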
3. Related Links
- Download: https://www.elastic.co/downloads/beats/filebeat
- Official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
II. Installation and Deployment
1. Prerequisites
To get started with your own Filebeat setup, install and configure these related products:
* Elasticsearch for storing and indexing the data.
* Kibana for the UI.
* Logstash (optional) for inserting data into Elasticsearch.
2. Windows
- Download the installation package.
- Extract the package, copy it to C:\Program Files, and rename the folder to Filebeat.
- Run PowerShell as administrator.
- Run the following commands:
cd 'C:\Program Files\Filebeat'
set-executionpolicy remotesigned
.\install-service-filebeat.ps1
- This registers Filebeat as a Windows service.
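Once registered, the service can be started from the same elevated PowerShell session (Start-Service is a standard PowerShell cmdlet; filebeat is the service name the install script registers):

Start-Service filebeat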
3. Linux
- Using CentOS 7 as an example, run the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.1.1-x86_64.rpm
sudo rpm -vi filebeat-6.1.1-x86_64.rpm
- Verify the installation:
[root@docker83 ~]# filebeat -version
filebeat version 6.1.1 (amd64), libbeat 6.1.1
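The RPM package also installs a systemd unit, so on CentOS 7 Filebeat can be enabled at boot and started with standard systemctl commands:

sudo systemctl enable filebeat
sudo systemctl start filebeat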
4. Configuring Filebeat
1) Filebeat modules
Filebeat provides a set of pre-built modules that let you implement and deploy a log-monitoring solution, complete with sample dashboards and data visualizations, in about five minutes. These modules support common log formats such as Nginx, Apache2, and MySQL, and can be run with a single command.
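For example, enabling a module takes one command; nginx below is just an illustration (the modules subcommand is available in Filebeat 6.x):

filebeat modules enable nginx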
2) Installing the Elasticsearch plugins needed by Filebeat modules
# Restart Elasticsearch after the plugins are installed
cd /opt/elasticsearch-6.1.1/bin
./elasticsearch-plugin install ingest-geoip
./elasticsearch-plugin install ingest-user-agent
Check the loaded plugins:
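One way to verify (assuming Elasticsearch listens on localhost:9200) is the _cat/plugins API; filebeat modules list likewise shows which Filebeat modules are enabled or disabled:

curl -s localhost:9200/_cat/plugins
filebeat modules list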
3) Configuring filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\apache-tomcat-7.0.77\logs\*
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
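  # A hypothetical illustration (not part of the sample file): for Java stack
  # traces, a common approach is to treat every line that does not begin with a
  # date as a continuation of the previous event:
  #multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  #multiline.negate: true
  #multiline.match: after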
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
output.logstash:
  hosts: ["192.168.8.83:5044"]

setup.kibana:
  host: "192.168.8.83:5601"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
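With the configuration saved, it can be checked and applied. filebeat test config and filebeat test output are built-in subcommands in Filebeat 6.x; /etc/filebeat/filebeat.yml is the default path for the RPM install described above:

filebeat test config -c /etc/filebeat/filebeat.yml    # validate the YAML syntax and settings
filebeat test output -c /etc/filebeat/filebeat.yml    # verify connectivity to the Logstash host
sudo systemctl restart filebeat                       # apply the configuration on Linux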