ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, but not the whole stack.
Elasticsearch is a real-time full-text search and analytics engine that provides data collection, analysis, and storage. It is a scalable distributed system exposing REST and Java APIs for efficient search, built on top of the Apache Lucene search library.
Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in a variety of ways, including email, WebSockets, and Elasticsearch.
Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve data, and lets users not only build custom dashboard views of their own data but also query and filter the data in ad-hoc ways.
1. Environment Preparation
Disable SELinux
Disable the firewall
CentOS 7.2 minimal
A: 192.168.1.241 es && kibana && nginx
B: 192.168.1.242 logstash
C: 192.168.1.221 Filebeat agent (client): the client server that sends its logs to Logstash
Install a Java environment (JDK 1.8 or later, jdk-8u131-linux-x64.rpm) on every server.
rpm -ivh jdk-8u131-linux-x64.rpm
[root@logstach java]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@logstach java]# which java
/usr/bin/java
Note: when Java is installed from the rpm package, the default install path is /usr/java — remember this:
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=/usr/java/jdk1.8.0_131/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
source /etc/profile
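Before editing /etc/profile system-wide, the snippet can be sanity-checked in isolation — a minimal sketch that writes the same variables to a scratch file (/tmp/java_profile.sh is an arbitrary temporary path, not part of the setup) and confirms they resolve:

```shell
# Write the profile snippet to a scratch file for verification.
cat > /tmp/java_profile.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
EOF
# Source it in a subshell so the current environment is untouched.
( . /tmp/java_profile.sh && echo "JAVA_HOME=$JAVA_HOME" )   # prints: JAVA_HOME=/usr/java/jdk1.8.0_131
```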
2. Installing Logstash
On server B:
vim /etc/yum.repos.d/elasticsearch.repo # add the Elastic repository
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum makecache
yum install logstash -y # logstash-5.5.1
cd /usr/share/logstash
bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
[root@logstach logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
09:36:20.791 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
09:36:20.899 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
09:36:21.008 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
hello world
{
"@timestamp" => 2017-08-12T01:36:29.687Z,
"@version" => "1",
"host" => "0.0.0.0",
"message" => "hello world"
}
The errors printed in red can be ignored. Once "logstash.agent - Successfully started Logstash API endpoint {:port=>9600}" appears, type hello world and the event is echoed back.
Add logstash to the PATH:
vi /etc/profile.d/logstash.sh
export PATH=/usr/share/logstash/bin:$PATH
source /etc/profile
The logstash command can now be used directly:
logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
Create a simple configuration file:
vi /etc/logstash/conf.d/sample.conf
input {
stdin {}
}
output {
stdout {
codec => rubydebug
}
}
[root@logstach conf.d]# logstash -f /etc/logstash/conf.d/sample.conf # start with the config file
Generating an SSL certificate
Since we will use Filebeat to ship logs from the client server to the ELK server, we need to create an SSL certificate and key pair. Filebeat uses the certificate to verify the identity of the ELK server.
Generate the SSL certificate and private key in the appropriate location (/etc/pki/tls/, which already contains the certs/ and private/ directories) with the following commands, substituting your ELK server's FQDN:
[root@linuxprobe ~]# cd /etc/pki/tls
[root@linuxprobe tls]# openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
[root@linuxprobe ~]# cd /etc/pki/tls
[root@linuxprobe tls]# openssl req -subj '/CN=kibana.aniu.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
If the certificate must instead be validated by IP address, edit /etc/pki/tls/openssl.cnf:
find the [ v3_ca ] section
and add the line below, using the Logstash server's IP:
subjectAltName = IP:192.168.1.242
then re-run the openssl command above so the SAN is embedded in the certificate.
Otherwise, Filebeat will fail at startup with an error like:
filebeat x509: cannot validate certificate for 192.168.1.242 because it doesn't contain any IP SANs
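The same result can be achieved without editing the global openssl.cnf — a sketch that passes a throwaway config with the IP SAN directly to openssl (run here in /tmp for illustration; on the real server you would generate into /etc/pki/tls/ as above):

```shell
# Generate a self-signed cert whose SAN is the Logstash server's IP,
# using a temporary config instead of modifying /etc/pki/tls/openssl.cnf.
cd /tmp
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = 192.168.1.242
[v3_ca]
subjectAltName = IP:192.168.1.242
EOF
openssl req -config san.cnf -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt
# Confirm the IP SAN was embedded.
openssl x509 -in logstash-forwarder.crt -noout -text | grep "IP Address"
```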
The logstash-forwarder.crt file will later be copied to every server that sends logs to Logstash (server C).
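The tutorial does not show the Logstash pipeline that will receive Beats traffic on port 5044; a minimal sketch of such a config follows (the filename beats.conf, the syslog grok filter, and the Elasticsearch address are assumptions based on this setup, not a file the original provides):

```
# /etc/logstash/conf.d/beats.conf -- hypothetical filename
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  # Optional: parse syslog-style lines into structured fields.
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.241:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```

The syntax of such a file can be checked with `logstash -f /etc/logstash/conf.d/beats.conf -t` before starting the service.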
3. Installing Elasticsearch && Kibana
On server A:
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum makecache
yum install elasticsearch -y
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
[root@es bin]# ./elasticsearch
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:150)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:122)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:316)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.Command.main(Command.java:88)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
Refer to the log for complete error details.
This error is mainly because the configuration directory cannot be found: when you start elasticsearch directly from the install directory, it only looks for a config folder in the current directory. Installed and started as a service it should find the configuration, but I have not tried that here — something to test later.
Now that the problem is clear, we can simply copy the elasticsearch configuration over from /etc:
cp -r /etc/elasticsearch /usr/share/elasticsearch/config
Starting again with bin/elasticsearch no longer reports that error.
As expected, though, the following error now appears:
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.1.2.jar:5.1.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:100) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:176) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:306) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.1.2.jar:5.1.2]
... 6 more
This error occurs because elasticsearch refuses to run as root, so we need to create a new user to start it (see: https://my.oschina.net/topeagle/blog/591451?fromerr=mzOr2qzZ).
The steps are as follows:
groupadd elsearch
useradd elsearch -g elsearch -p elsearch
cd /usr/share
chown -R elsearch:elsearch elasticsearch
su elsearch
Now start elasticsearch as this user; normally it will come up successfully at this point, although further errors may still appear, for example:
hcw-X450VC% ./elasticsearch
2017-01-17 21:03:31,158 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:541)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:258)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:206)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:125)
at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:67)
at org.elasticsearch.cli.Command.main(Command.java:85)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82)
This is because elasticsearch needs to read and write its configuration files, and the newly created elsearch user does not have write permission on the config folder, so it still fails. The fix is to switch back to root and grant the permission:
chmod -R 775 /usr/share/elasticsearch/config
After that it starts. Check the result:
[root@es ~]# curl 127.0.0.1:9200
{
"name" : "tZhA-Rw",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "OzC1IJd3Sg66bwDv7AAUHw",
"version" : {
"number" : "5.5.1",
"build_hash" : "19c13d0",
"build_date" : "2017-07-18T20:44:24.823Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
  "tagline" : "You Know, for Search"
}
To allow access from outside the host,
you need to modify /etc/elasticsearch/elasticsearch.yml. Although we copied the configuration to /usr/share/elasticsearch/config earlier, the file that actually takes effect is /etc/elasticsearch/elasticsearch.yml.
Pay particular attention to this:
```
cluster.name: ptsearch                # cluster name (must be identical on every node of the same cluster)
node.name: yunwei-ts-100-70           # node name; recommended to match the hostname
path.data: /data/elasticsearch        # data directory
path.logs: /var/log/elasticsearch/    # log directory
bootstrap.memory_lock: true           # lock memory so it is never swapped out
network.host: 0.0.0.0                 # network binding
http.port: 9200                       # HTTP port
discovery.zen.ping.unicast.hosts: ["172.16.100.71","172.16.100.111"]  # unicast discovery: list the cluster node IPs other than this host
```
==The configuration above must be applied on all 3 elasticsearch nodes; note that node.name is each host's own name, and discovery.zen.ping.unicast.hosts lists the cluster node IPs other than the local host.==
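Once the nodes are up, cluster state is easy to verify over the REST API. A quick health-check sketch: against the live node you would run `curl -s http://192.168.1.241:9200/_cluster/health`; here a captured sample response is parsed instead, so the extraction logic itself runs anywhere:

```shell
# Sample _cluster/health response (the live output has more fields).
cat > /tmp/health.json <<'EOF'
{"cluster_name":"ptsearch","status":"green","number_of_nodes":3}
EOF
# Extract the status field (green / yellow / red) with sed.
status=$(sed -n 's/.*"status":"\([a-z]*\)".*/\1/p' /tmp/health.json)
echo "cluster status: $status"   # prints: cluster status: green
```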
4. Installing Kibana
On server A:
vi /etc/yum.repos.d/kibana.repo
[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum makecache
yum install kibana -y
systemctl daemon-reload
systemctl enable kibana.service
systemctl start kibana.service
vi /etc/kibana/kibana.yml
Set server.host: "192.168.1.241"
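For reference, the relevant kibana.yml settings might look like the sketch below (server.port and elasticsearch.url are shown with their effective values for this setup; only server.host actually needs changing, since the defaults already point at port 5601 and the local Elasticsearch):

```yaml
server.port: 5601
server.host: "192.168.1.241"
elasticsearch.url: "http://192.168.1.241:9200"
```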
systemctl restart kibana.service
Browse to: http://IP:5601 # if the page loads forever, try a different browser
Installing nginx as a reverse proxy
To allow external access to Kibana, we set up a reverse proxy in front of it; this article uses Nginx for the reverse proxy.
Create the official nginx repository file to install nginx:
vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
yum install nginx httpd-tools -y
[root@es kibana]# htpasswd -c -m /etc/nginx/htpasswd.users kibanaadmin
New password:
Re-type new password:
Adding password for user kibanaadmin
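If httpd-tools is not available, the basic-auth file can also be built with openssl instead of htpasswd — a sketch using the same example user, with "tongbang123" assumed as the password and /tmp used as scratch space (the real file belongs at /etc/nginx/htpasswd.users):

```shell
# Generate an apr1 (htpasswd-compatible) hash and write the auth file.
printf 'kibanaadmin:%s\n' "$(openssl passwd -apr1 tongbang123)" > /tmp/htpasswd.users
cat /tmp/htpasswd.users
```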
vi /etc/nginx/conf.d/kibana.conf
server {
listen 80;
server_name kibana.aniu.co;
access_log /var/log/nginx/kibana.aniu.co.access.log main;
error_log /var/log/nginx/kibana.aniu.co.error.log;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://192.168.1.241:5601;   # matches server.host in kibana.yml
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
systemctl start nginx
systemctl enable nginx
Add a record to the hosts file on your local Windows machine:
192.168.1.241 kibana.aniu.co
Browse to http://kibana.aniu.co and log in as kibanaadmin with the password set above (tongbang123 in this example).
Loading the Kibana dashboards
Elastic provides several sample Kibana dashboards and Beats index patterns to help you get started with Kibana. Although we will not use the dashboards in this tutorial, we will load them anyway so that we can use the Filebeat index pattern they include.
First, download the sample dashboards archive to your home directory:
1. Download: wget http://download.elastic.co/beats/dashboards/beats-dashboards-1.1.1.zip
2. Unpack: unzip beats-dashboards-1.1.1.zip
3. Enter the directory: cd beats-dashboards-1.1.1/
4. Run: ./load.sh or ./load.sh -url http://192.168.1.241:9200
This stores the dashboard templates and configuration data in elasticsearch.
Loading the Filebeat index template in Elasticsearch
Because we plan to use Filebeat to ship logs to Elasticsearch, we should load the Filebeat index template. The index template configures Elasticsearch to analyze incoming Filebeat fields in an intelligent way.
First, download the Filebeat index template:
cd /usr/local/src
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
[root@linuxprobe src]# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
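For orientation, a Filebeat index template has roughly the shape sketched below. This is a simplified illustration of the structure, not the exact contents of the downloaded file — the field names shown are assumptions:

```json
{
  "template": "filebeat-*",
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message": { "type": "text" },
        "beat": { "properties": { "hostname": { "type": "keyword" } } }
      }
    }
  }
}
```

The PUT above registers it under the name filebeat, so every index matching filebeat-* picks up these mappings at creation time.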
Setting up Filebeat (adding client servers)
Perform the following steps on every CentOS or RHEL 7 server that should send logs to the ELK server.
Copying the SSL certificate
Copy the SSL certificate created earlier from the Logstash server to the client server:
On server C:
mkdir -p /etc/pki/tls/certs
On server B:
scp /etc/pki/tls/certs/logstash-forwarder.crt [email protected]:/etc/pki/tls/certs
Installing the Filebeat package
On server C:
vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@monitor certs]# yum makecache
[root@monitor locale]# yum install filebeat -y
systemctl enable filebeat
systemctl start filebeat
vi /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure      # added
    - /var/log/messages    # added
    - /var/log/*.log
#output.elasticsearch:     # disabled: events go through Logstash instead
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["192.168.1.242:5044"]   # changed to the Logstash server (B)
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
less /var/log/filebeat/filebeat # view the filebeat log