Learning and Using the ELK Real-Time Log Analysis Platform
- ElasticSearch
- Logstash
- Kibana
- Marvel
- Log
- Linux
Introduction
In day-to-day work, whether development or operations, we run into all kinds of logs, chiefly system logs, application logs, and security logs. For developers, reading logs means watching program errors in real time and analyzing performance. A medium or large application, however, is usually deployed across multiple servers, so its log files end up scattered over different machines. Checking them machine by machine is obviously far too tedious. The open-source log analysis system ELK solves this problem nicely.
ELK is not a single standalone system; it is made up of three open-source tools: ElasticSearch, Logstash, and Kibana.
ElasticSearch
ElasticSearch is a Lucene-based search server. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is written in Java and released as open source under the Apache license, and it is currently a popular enterprise search engine. Designed with cloud deployments in mind, it delivers real-time search and is stable, reliable, fast, and easy to install and use.
Logstash
Logstash is an open-source tool for collecting and parsing logs and storing them for later use.
Kibana
Kibana is a web interface for analyzing the logs held by Logstash and ElasticSearch. With it you can search, visualize, and analyze the logs efficiently.
Setup
This walkthrough runs everything on a single host rather than a multi-machine cluster: Logstash writes the collected logs directly into Elasticsearch. Redis can be used as a log queue in between (more on that at the end).
JDK installation
Install JDK 1.8.
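Both Elasticsearch 2.x and Logstash 2.4 run on the JVM, so it is worth confirming that a 1.8 JDK is on the PATH before continuing. A minimal sketch (the yum package name and the sample `java -version` line are illustrative; adjust for your distro):

```shell
# Install OpenJDK 1.8 (CentOS/RHEL package name; adjust for your distro):
#   yum install -y java-1.8.0-openjdk
# `java -version` prints a line such as: java version "1.8.0_101"
# Extract the major version from that line to confirm it is 1.8:
line='java version "1.8.0_101"'   # sample output line from `java -version`
major=$(echo "$line" | sed -n 's/.*"\(1\.8\)\..*/\1/p')
echo "$major"
```

If the printed version is not 1.8, fix the PATH (or JAVA_HOME) before going on.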
Elasticsearch installation
Download: https://www.elastic.co/downloads, and choose the matching version.
The version used here is elasticsearch-2.4.0. Extract the archive and enter the directory:

```
[phachon@localhost elk]$ tar -zxf elasticsearch-2.4.0.tar.gz
[phachon@localhost elk]$ cd elasticsearch-2.4.0
```
Install the head plugin:

```
[phachon@localhost elasticsearch-2.4.0]$ ./bin/plugin install mobz/elasticsearch-head
[phachon@localhost elasticsearch-2.4.0]$ ls plugins/
head
```
Edit the Elasticsearch configuration file:
```
[phachon@localhost elasticsearch-2.4.0]$ vim config/elasticsearch.yml
```

The settings changed from their defaults:

```
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: es_cluster            # the name of your Elasticsearch cluster

# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: node0                    # this node's name within the cluster

# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /tmp/elasticseach/data   # the data directory

# Path to log files:
path.logs: /tmp/elasticseach/logs   # the logs directory

# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
# bootstrap.memory_lock: true
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
# Elasticsearch performs poorly when the system is swapping the memory.

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
# network.host: 192.168.0.1
network.host: 192.168.30.128        # this machine's IP (my VM's address)

# Set a custom port for HTTP:
http.port: 9200                     # the default port
```
The remaining settings can be left at their defaults for now.
Start Elasticsearch:

```
[root@localhost elasticsearch-2.4.0]# ./bin/elasticsearch
```
Note: this is guaranteed to fail with an error:
```
[root@localhost elasticsearch-2.4.0]# ./bin/elasticsearch
Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:94)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:160)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
```
The tutorials I found online did not explain this step, which left me stuck here for a long time.
The message states the cause plainly: Elasticsearch must not be run as root.
Create a dedicated group and user for Elasticsearch (enter a password when prompted):

```
[root@localhost elasticsearch-2.4.0]# groupadd elseach
[root@localhost elasticsearch-2.4.0]# adduser -G elseach elseach
[root@localhost elasticsearch-2.4.0]# passwd elseach
```
Give the elseach user and group ownership of the Elasticsearch install directory:

```
[root@localhost elk]# chown -R elseach:elseach elasticsearch-2.4.0/
```
Don't forget to also hand the /tmp/elasticseach/data and /tmp/elasticseach/logs directories to the elseach user, or it will lack permission to read and write them:

```
[root@localhost tmp]# chown -R elseach:elseach elasticseach/
```
Setup is finally done. Switch to the elseach user and start again:
```
[elseach@localhost elasticsearch-2.4.0]$ ./bin/elasticsearch
[2016-09-22 01:51:42,102][WARN ][bootstrap] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-09-22 01:51:42,496][INFO ][node] [node0] version[2.4.0], pid[4205], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-22 01:51:42,496][INFO ][node] [node0] initializing ...
[2016-09-22 01:51:43,266][INFO ][plugins] [node0] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
[2016-09-22 01:51:43,290][INFO ][env] [node0] using [1] data paths, mounts [[/ (/dev/sda5)]], net usable_space [8.4gb], net total_space [14.6gb], spins? [possibly], types [ext4]
[2016-09-22 01:51:43,290][INFO ][env] [node0] heap size [998.4mb], compressed ordinary object pointers [unknown]
[2016-09-22 01:51:43,290][WARN ][env] [node0] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-09-22 01:51:45,697][INFO ][node] [node0] initialized
[2016-09-22 01:51:45,697][INFO ][node] [node0] starting ...
[2016-09-22 01:51:45,832][INFO ][transport] [node0] publish_address {192.168.30.128:9300}, bound_addresses {192.168.30.128:9300}
[2016-09-22 01:51:45,839][INFO ][discovery] [node0] es_cluster/kJMDfFMwQXGrigfknNs-_g
[2016-09-22 01:51:49,039][INFO ][cluster.service] [node0] new_master {node0}{kJMDfFMwQXGrigfknNs-_g}{192.168.30.128}{192.168.30.128:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-22 01:51:49,109][INFO ][http] [node0] publish_address {192.168.30.128:9200}, bound_addresses {192.168.30.128:9200}
[2016-09-22 01:51:49,109][INFO ][node] [node0] started
[2016-09-22 01:51:49,232][INFO ][gateway] [node0] recovered [2] indices into cluster_state
```
Startup succeeded.
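One line in that startup log deserves attention: the WARN about max file descriptors [4096] being likely too low. A sketch of how to check and raise the limit for the elseach user (the 65536 value follows the log's own suggestion; tune it to your workload):

```shell
# Show the current open-file limit for this shell/user:
ulimit -n
# To raise it persistently, append these lines to /etc/security/limits.conf
# (as root), then log in again as elseach:
#   elseach soft nofile 65536
#   elseach hard nofile 65536
```

Elasticsearch keeps many index segment files open at once, so a low descriptor limit can cause failures later under load even though startup succeeds.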
From a browser on the host machine, visit http://192.168.30.128:9200. A response means the search engine's API is working. Note that port 9200 must be open on the server's firewall, otherwise the request will fail.
Open the head plugin we just installed:
http://192.168.30.128:9200/_plugin/head/ On a fresh install there is no data yet and the node0 node shows no cluster information; in my screenshot I had already added data after finishing the setup, so some information is displayed.
Logstash installation
Download: https://www.elastic.co/downloads, and choose the matching version.
The version used here is logstash-2.4.0.tar.gz. Extract the archive and enter the directory:

```
[root@localhost elk]# tar -zxvf logstash-2.4.0.tar.gz
[root@localhost elk]# cd logstash-2.4.0
```
Edit the Logstash configuration file:

```
[root@localhost logstash-2.4.0]# mkdir config
[root@localhost logstash-2.4.0]# vim config/logstash.conf
```
To keep things simple for a first look at the data, I use the Apache access log as the data source (Logstash's input) and write straight into Elasticsearch (the output):
```
input {
    # For detailed config for log4j as input,
    # see: https://www.elastic.co/guide/en/logstash/
    file {
        type => "apache-log"                    # log name
        path => "/etc/httpd/logs/access_log"    # log path
    }
}
filter {
    # Only matched data are sent to output. Filtering happens here.
}
output {
    # For detailed config for elasticsearch as output,
    # see: https://www.elastic.co/guide/en/logstash/current
    elasticsearch {
        action => "index"                   # the operation on ES
        hosts  => "192.168.30.128:9200"     # ElasticSearch host, can be an array
        index  => "apachelog"               # the index to write data to
    }
}
```
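The filter block above is left empty. As one possible extension, a grok filter could parse each Apache line into structured fields before it reaches Elasticsearch; a sketch (COMBINEDAPACHELOG is a pattern shipped with Logstash; use COMMONAPACHELOG instead if your access_log uses the common format):

```
filter {
    grok {
        # Parse the raw line into fields such as clientip, verb, request, response:
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
        # Use the request's own timestamp rather than the ingestion time:
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}
```

With fields split out this way, Kibana can later aggregate on response codes, client IPs, and so on, instead of only full-text searching the raw line.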
Check that the configuration file is valid:

```
[root@localhost logstash-2.4.0]# ./bin/logstash -f config/logstash.conf --configtest
Configuration OK
```
Start Logstash to begin collecting logs:

```
[root@localhost logstash-2.4.0]# ./bin/logstash -f config/logstash.conf
Settings: Default pipeline workers: 4
Pipeline main started
```
Logstash is now collecting: whenever the log file changes, the new lines are written into Elasticsearch on the fly. So let's generate some log entries.
Keep refreshing http://192.168.30.128/ so that Apache produces access-log entries, then open the Elasticsearch head page at http://192.168.30.128:9200/_plugin/head/ The apachelog index we just configured now appears; click through to browse the data.
The log records are listed in detail along with their fields; the panel on the left supports searching on those fields, and clicking an entry on the right shows the full log record.
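The same data can also be queried straight from Elasticsearch's REST API, with no head plugin involved; a sketch against the host and index used in this walkthrough (adjust both to your own setup):

```shell
# Host and index from this setup:
ES="http://192.168.30.128:9200"
INDEX="apachelog"
# Count the documents collected so far:
#   curl -s "$ES/$INDEX/_count?pretty"
# Fetch the five most recent entries (Logstash adds @timestamp to each event):
#   curl -s "$ES/$INDEX/_search?pretty&size=5&sort=@timestamp:desc"
```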
At this point we can collect logs and search them. Next, let's turn the search data into charts.
Kibana installation
Download the version matching your setup from https://www.elastic.co/downloads
The version here is kibana-4.6.1-linux-x86. Extract the archive and enter the directory:

```
[root@localhost elk]# tar -zxvf kibana-4.6.1-linux-x86.tar.gz
[root@localhost elk]# cd kibana-4.6.1-linux-x86
```
Edit the configuration file:
```
[root@localhost kibana-4.6.1-linux-x86]# vim config/kibana.yml
```

```
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601                   # the Kibana service port

# The host to bind the server to.
server.host: "192.168.30.128"       # your Kibana service host

# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""

# The maximum payload size in bytes on incoming server requests.
# server.maxPayloadBytes: 1048576

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.30.128:9200"   # the Elasticsearch host

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup.
```
The configuration is fairly simple.
Once configured, start it up:

```
[root@localhost kibana-4.6.1-linux-x86]# ./bin/kibana
  log   [02:48:34.732] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.771] [info][status][plugin:[email protected]] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [02:48:34.803] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.823] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.827] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.835] [info][status][plugin:[email protected]] Status changed from yellow to green - Kibana index ready
  log   [02:48:34.840] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.847] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.857] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [02:48:34.867] [info][listening] Server running at http://192.168.30.128:5601
```
Open http://192.168.30.128:5601 in a browser.
First add an index: enter the apachelog index we just collected as the index name.
Click create.
Use the time picker in the top right to choose the window to display; below it is the request volume over time.
Enter search conditions in the search box in the middle; once the search completes, click save search in the top right to save the query.
Click visualize to build other kinds of charts from the data, such as a pie chart.
Select the chrome search we just saved to generate the pie chart.
Since the data hardly varies, the slices all come out the same. Again click the save button in the top right and save the pie chart as test.
Add it to a dashboard: click dashboard.
Click the + button to add a panel.
Select test to display it on the dashboard; the result looks like this.
And with that, a simple ELK system is up and running. In a real production environment, of course, we would build it as a cluster and use redis to handle the log queue.
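The redis-as-queue variant just mentioned splits Logstash into a shipper on each application server and an indexer next to Elasticsearch, with redis buffering events between them. A sketch of the two configs (the host addresses and the logstash list key are illustrative):

```
# shipper.conf -- runs on each application server
input {
    file { type => "apache-log" path => "/etc/httpd/logs/access_log" }
}
output {
    # Push events onto a redis list acting as the queue:
    redis { host => "192.168.30.128" data_type => "list" key => "logstash" }
}

# indexer.conf -- runs next to Elasticsearch
input {
    # Pop events off the same redis list:
    redis { host => "192.168.30.128" data_type => "list" key => "logstash" }
}
output {
    elasticsearch { hosts => "192.168.30.128:9200" index => "apachelog" }
}
```

The queue absorbs bursts and lets Elasticsearch be restarted without losing log lines, since events simply pile up in redis until the indexer drains them.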
The Marvel plugin
Marvel is Elasticsearch's management and monitoring tool, free to use in development environments, and comes with much nicer data dashboards.
First install the marvel-agent plugin into Elasticsearch:
```
[elseach@localhost elasticsearch-2.4.0]$ ./bin/plugin install license
[elseach@localhost elasticsearch-2.4.0]$ ./bin/plugin install marvel-agent
```
Note that license must be installed before marvel-agent. Once both are installed, restart Elasticsearch.
Next install the Marvel plugin into Kibana:
```
[root@localhost kibana-4.6.1-linux-x86]# cd bin
[root@localhost bin]# ./kibana plugin --install elasticsearch/marvel/latest
```
After installation, restart Kibana and select the Marvel plugin.
Feels rather fancy, doesn't it?
That completes the basic ELK setup. Next, we'll look at how to run this system as a cluster.
Corrections are welcome. Thanks….