Unpack the installation package
cd /home/hsyt/jenkins/filebeat
tar -xvf filebeat-7.4.0-linux-x86_64.tar.gz # extract into the current directory; use -C to choose a different target directory
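As a sketch of how `-C` works (the paths and archive here are throwaway examples, not the real filebeat package), the snippet builds a dummy archive so it can run anywhere:

```shell
# build a dummy archive so the example is self-contained (paths are illustrative)
mkdir -p /tmp/fb-demo/src /tmp/fb-demo/opt
echo "demo" > /tmp/fb-demo/src/filebeat.yml
tar -czf /tmp/fb-demo/pkg.tar.gz -C /tmp/fb-demo/src filebeat.yml

# -C switches tar into the target directory before extracting
tar -xzf /tmp/fb-demo/pkg.tar.gz -C /tmp/fb-demo/opt
ls /tmp/fb-demo/opt
```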
Configure properties
A complete sample (only the modified sections are shown; the default content is omitted)
#=========================== Filebeat inputs =============================
# sample configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /home/logs/sync-*/**/*
  fields:
    indexprefix: uat-sync
  encoding: utf-8
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
  exclude_files: ['.gz$','.tar$','infra','tracelog']
- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /home/logs/**/tracelog/dubbo*.log
  fields:
    indexprefix: tracelog-sim
  encoding: utf-8
  exclude_files: ['.gz$','.tar$']
  processors:
  - decode_json_fields:
      fields: ["time","stat.key", "count", "total.cost.milliseconds","success"]
      process_array: true
      max_depth: 5
      target: ""
      overwrite_keys: true
      add_error_key: true
  json.keys_under_root: false
  json.add_error_key: true
  json.overwrite_keys: true
output.console.pretty: true
#==================== Elasticsearch template setting ==========================
setup.template:
  enabled: true
  name: "uat-filebeat"
  pattern: "uat-filebeat-*"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.1:9200"]
  index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
#============================== Kibana =====================================
# Kibana Host
setup.kibana:
  host: "192.168.1.1:5601"
Important snippets of filebeat.yml
index-lifecycle-management (ILM) policy: the yml file shipped with the current version does not document ILM; the details below were pieced together from filebeat's startup log and the official documentation.
Disabling the ILM policy: by default filebeat generates the index name from this policy, and that index determines how data is filtered later when browsing it in Kibana, so decide for yourself whether to disable it. When ILM is disabled, the Elasticsearch template setting must be configured or startup fails; only in that case does the custom index rule under Elasticsearch output take effect.
#setup.ilm.enabled: false
# default ILM policy settings
#setup.ilm.enabled: auto
#setup.ilm.rollover_alias: "filebeat"
#setup.ilm.pattern: "{now/d}-000001"
#==================== Elasticsearch template setting ==========================
#setup.template:
# enabled: true
It does not matter if this template does not match the custom ES index defined below; usage is unaffected. In this demo the block exists purely to keep the program from failing at startup; the actual index rule is determined further down.
# name: "uat-filebeat"
# pattern: "uat-filebeat-*"
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
The hosts field accepts an array, e.g. ["192.168.1.1:9200","192.168.1.2:9200"]
# hosts: ["192.168.1.1:9200"]
%{[fields][indexprefix]} reads the value of the custom fields attribute defined above
# index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
This is the index list you see in Kibana; a separate document will cover Kibana setup and usage in detail, including Index Pattern.
filebeat.inputs holds the configuration for the business services. filebeat currently has no dedicated module for business services, so the log input type is used instead.
#=========================== Filebeat inputs =============================
#filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
The element below may be a list; each input entry starts with a -.
#- type: log
# Change to true to enable this input configuration.
Set this to true to enable the input.
#enabled: false
# Paths that should be crawled and fetched. Glob based paths.
Multiple log paths may be configured, one per line starting with -; ** acts as a wildcard and by default recurses 9 levels down to find matching files.
#paths:
#- /var/log/*.log
#- /home/logs/**/*.log
#- c:\programdata\elasticsearch\logs\*
#encoding: utf-8
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
Filter files out by file name.
#exclude_files: ['.gz$']
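exclude_files entries are regular expressions matched against the file path, not shell globs. As a rough analogy (grep -E stands in here for filebeat's matcher; the file list is invented), the exclude patterns behave like this:

```shell
# candidate filenames (illustrative), filtered the way exclude_files would drop them
printf 'app.log\napp.log.gz\nbackup.tar\ninfra/app.log\n' |
  grep -Ev '.gz$|.tar$|infra'   # keeps only app.log
```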
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
Extra custom fields: they can be referenced in the configuration as %{[fields][level]}, and they also appear on the data sent to ES, which makes filtering easier.
#fields:
# level: debug
# review: 1
processors:
- decode_json_fields:
    # the JSON properties to decode (choose which JSON content to pick out of the file data)
    fields: ["time","stat.key", "count", "total.cost.milliseconds","success"]
    process_array: true
    max_depth: 5
    target: ""
    overwrite_keys: true
    add_error_key: true
# controls where this JSON data shows up in the document tree seen in Kibana
json.keys_under_root: false
json.add_error_key: true
json.overwrite_keys: true
output.console.pretty: true
# the data format in the file that this configuration expects
{
  "time": "2019-09-17 17:47:24.918",
  "stat.key": {
    "method": "handle",
    "remote.app": "mhp.rpc.pat",
    "service": "com.dap.api.IService"
  },
  "count": 6,
  "total.cost.milliseconds": 34,
  "success": "Y"
}
The corresponding data structure as seen in Kibana
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
Matches the characters at the start of a line; regular expressions are supported, so set this according to your actual log format.
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
If a single log entry can span multiple lines (e.g. an exception stack trace), enable this and configure the matching rules accordingly.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
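Putting the three multiline options together, a minimal sketch for logs whose entries start with a "[" (the pattern is an assumption; adjust it to your log format):

```yaml
multiline.pattern: '^\['   # a new log entry starts with "["
multiline.negate: true     # lines NOT matching the pattern ...
multiline.match: after     # ... are appended after the matching line
```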
Start the service
cd filebeat-7.4.0-linux-x86_64
# start in the foreground
./filebeat -e -c filebeat.yml
# start in the background
nohup ./filebeat -e -c filebeat.yml &
Check that the service is running
# check the service process
ps -ef|grep filebeat
# view the log output
When started via nohup, the output that previously went to the console is written to nohup.out in the directory where the command was run
tail -fn200 /home/elk/filebeat/filebeat-7.4.0-linux-x86_64/nohup.out
Problems encountered in use
too many open files
2019-10-21T14:11:29.223+0800 ERROR registrar/registrar.go:416 Failed to create tempfile (/home/hsyt/jenkins/filebeat/filebeat-7.4.0-linux-x86_64/data/registry/filebeat/data.json.new) for writing: open /home/hsyt/jenkins/filebeat/filebeat-7.4.0-linux-x86_64/data/registry/filebeat/data.json.new: too many open files
This error appeared because the initial bulk import of historical data into Elasticsearch pushed the number of files filebeat holds open past the system limit.
- Fix 1: delete the stale historical files so that filebeat only syncs incremental log data, keeping the number of open files under the limit.
- Fix 2: raise the relevant system limits.
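A sketch of checking and raising the open-file limit (the value 65535 and the limits.conf lines are examples; a permanent change normally takes effect after re-login):

```shell
# current soft limit on open file descriptors for this shell
ulimit -n

# raise the limit for the current session only (value is an example)
# ulimit -n 65535

# to make it permanent, add lines like these to /etc/security/limits.conf:
#   <user>  soft  nofile  65535
#   <user>  hard  nofile  65535
```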
Custom index rule does not take effect
According to the official docs, ILM is used by default when no custom index rule is set; in practice, however, the default ILM rule was still applied even after a custom index had been configured.
When using a custom index, the template settings must also be configured.
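In other words, three pieces have to be set together for the custom index to take effect; a minimal sketch, reusing the names and host from the sample above:

```yaml
setup.ilm.enabled: false
setup.template:
  enabled: true
  name: "uat-filebeat"
  pattern: "uat-filebeat-*"
output.elasticsearch:
  hosts: ["192.168.1.1:9200"]
  index: "%{[fields][indexprefix]}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
```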
References
Most of this article is grounded in the official development documentation, with a small amount drawn from other sources, all verified in practice. The relevant official pages are linked inline above and are not repeated here; the official site covers all of the official reference material.