Detailed walkthrough: building a real-time log analysis platform with ELK 6.5.4 + Filebeat + Kafka
1. Introduction to the ELK platform
While researching ELK I found a good article and excerpted a short passage:
Logs mainly include system logs, application logs, and security logs. Operations and development staff use logs to learn about a server's hardware and software and to find configuration errors and their causes. Regular log analysis also reveals server load, performance, and security issues, so problems can be corrected in time.
Logs are usually scattered across different machines. If you manage dozens or hundreds of servers and still read logs by logging in to each machine in turn, it quickly becomes tedious and inefficient. The obvious fix is centralized log management, for example collecting all server logs with open-source syslog.
Once logs are centralized, searching and aggregating them becomes the next problem. Linux commands such as grep, awk, and wc handle basic search and counting, but for more demanding querying, sorting, and statistics across a large fleet of machines they fall short.
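As a concrete example of the traditional approach mentioned above, basic search and counting with grep, awk, and wc might look like this (the sample log file and its format are invented for illustration):

```shell
# Create a small sample log (illustrative format)
cat > /tmp/app.log <<'EOF'
2019-01-16 10:00:01 INFO  service started
2019-01-16 10:00:05 ERROR connection refused
2019-01-16 10:00:09 ERROR timeout
EOF

# Count ERROR lines
grep -c 'ERROR' /tmp/app.log

# Extract the date and time of each ERROR with awk
awk '/ERROR/ {print $1, $2}' /tmp/app.log

# Total number of log lines
wc -l < /tmp/app.log
```

This works on one file, but repeating it across hundreds of hosts is exactly the pain ELK removes.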
The open-source real-time log analysis platform ELK solves all of the problems above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Official site: https://www.elastic.co/product. Getting-started guide: https://elkguide.elasticsearch.cn/logstash/get-start/hello-world.html
yum install -y net-tools lrzsz telnet vim dos2unix bash-completion \
ntpdate sysstat tcpdump traceroute nc wget
Install the JDK
yum install -y java-1.8.0-openjdk-devel
Download and install Elasticsearch
[root@elk ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@VM_0_9_centos ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@elk ~]# yum -y install elasticsearch
[root@elk ~]# mkdir -p /data/es-data
[root@VM_0_9_centos ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: huanqiu # cluster name (must be identical on every node in the same cluster)
node.name: elk-node1 # node name; using the hostname is recommended
path.data: /data/es-data # data directory
path.logs: /var/log/elasticsearch/ # log directory
bootstrap.memory_lock: false # whether to lock JVM memory out of swap (in 6.x the setting is memory_lock, not the old 2.x mlockall)
network.host: 0.0.0.0 # listen on all interfaces
http.port: 9200 # HTTP port
# Add these parameters so the head plugin can reach ES
http.cors.enabled: true
http.cors.allow-origin: "*"
[root@elk ~]# chown -R elasticsearch.elasticsearch /data/
[root@elk ~]# systemctl start elasticsearch
[root@elk ~]# systemctl enable elasticsearch
[root@elk ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-01-16 10:38:59 CST; 2s ago
Docs: http://www.elastic.co
Main PID: 3758 (java)
CGroup: /system.slice/elasticsearch.service
├─3758 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupa...
└─3812 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Jan 16 10:38:59 elk systemd[1]: Started Elasticsearch.
Jan 16 10:38:59 elk systemd[1]: Starting Elasticsearch...
Installing the elasticsearch-head plugin
head provides a web UI for viewing elasticsearch cluster state
Download and install Node.js
[root@VM_0_9_centos ~]# wget https://nodejs.org/dist/v11.2.0/node-v11.2.0-linux-x64.tar.gz
--2019-01-16 11:22:32-- https://nodejs.org/dist/v11.2.0/node-v11.2.0-linux-x64.tar.gz
Resolving nodejs.org (nodejs.org)... 104.20.23.46, 104.20.22.46, 2606:4700:10::6814:172e, ...
Connecting to nodejs.org (nodejs.org)|104.20.23.46|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18988744 (18M) [application/gzip]
Saving to: ‘node-v11.2.0-linux-x64.tar.gz’
100%[============================================================================================>] 18,988,744 6.51MB/s in 2.8s
2019-01-16 11:22:36 (6.51 MB/s) - ‘node-v11.2.0-linux-x64.tar.gz’ saved [18988744/18988744]
[root@VM_0_9_centos ~]# ls
node-v11.2.0-linux-x64.tar.gz
[root@VM_0_9_centos ~]# tar -zxf node-v11.2.0-linux-x64.tar.gz -C /data/
[root@VM_0_9_centos ~]# cd /data/
[root@VM_0_9_centos data]# ls
es-data node-v11.2.0-linux-x64
[root@VM_0_9_centos data]# mv node-v11.2.0-linux-x64 node-v11.2.0
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/bin/node /usr/bin/node
[root@VM_0_9_centos ~]# node -v
v11.2.0
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/bin/npm /usr/bin/npm
[root@VM_0_9_centos ~]# npm -v
6.4.1
[root@VM_0_9_centos ~]# npm config set registry https://registry.npm.taobao.org
[root@VM_0_9_centos ~]# vim ~/.npmrc
registry=https://registry.npm.taobao.org
strict-ssl = false
[root@VM_0_9_centos ~]# npm install -g grunt-cli
/data/node-v11.2.0/bin/grunt -> /data/node-v11.2.0/lib/node_modules/grunt-cli/bin/grunt
- [email protected]
added 152 packages from 122 contributors in 4.226s
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/lib/node_modules/grunt-cli/bin/grunt /usr/bin/grunt
Download the head source package
[root@VM_0_9_centos ~]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@VM_0_9_centos ~]# unzip master.zip -d /data/
[root@VM_0_9_centos ~]# cd /data/elasticsearch-head-master/
[root@VM_0_9_centos elasticsearch-head-master]# npm install
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node install.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-01-16T03_43_51_976Z-debug.log
[root@VM_0_9_centos elasticsearch-head-master]# npm install [email protected] --ignore-scripts
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN [email protected] license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
- [email protected]
added 62 packages from 64 contributors and removed 4 packages in 4.037s
[root@VM_0_9_centos elasticsearch-head-master]# npm install
npm WARN [email protected] license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 9 packages from 13 contributors in 2.87s
# If the install is slow or fails, use a China-local npm mirror
[root@elk elasticsearch-head-master]# npm install --ignore-scripts -g cnpm --registry=https://registry.npm.taobao.org
[root@VM_0_9_centos ~]# vim /data/elasticsearch-head-master/Gruntfile.js
# Add a hostname entry above the port: 9100 line
hostname: "0.0.0.0",
[root@VM_0_9_centos ~]# vim /data/elasticsearch-head-master/_site/app.js
# Replace localhost with the server's IP address
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://129.211.125.21:9200";
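The same replacement can be scripted with sed instead of editing by hand; the sketch below runs against a scratch copy first (the sample line and IP mirror the config above):

```shell
# Demonstrate the substitution on a scratch copy of the line
echo 'this.base_uri = "http://localhost:9200";' > /tmp/app.js.sample
sed -i 's#http://localhost:9200#http://129.211.125.21:9200#g' /tmp/app.js.sample
cat /tmp/app.js.sample

# Once verified, point the same sed command at
# /data/elasticsearch-head-master/_site/app.js
```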
[root@VM_0_9_centos elasticsearch-head-master]# grunt server &
Browse to http://IP:9100
Install Kibana
[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
--2019-01-16 14:22:19-- https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 184.72.242.47, 107.21.237.95, 184.73.245.233, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|184.72.242.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 206631363 (197M) [application/x-gzip]
Saving to: ‘kibana-6.5.4-linux-x86_64.tar.gz’
100%[============================================================================================>] 206,631,363 7.14MB/s in 29s
2019-01-16 14:22:50 (6.79 MB/s) - ‘kibana-6.5.4-linux-x86_64.tar.gz’ saved [206631363/206631363]
[root@VM_0_9_centos ~]# ls
kibana-6.5.4-linux-x86_64.tar.gz main.py master.zip node-v11.2.0-linux-x64.tar.gz
[root@VM_0_9_centos ~]# tar -zxf kibana-6.5.4-linux-x86_64.tar.gz -C /data/
[root@VM_0_9_centos data]# mv kibana-6.5.4-linux-x86_64 kibana-6.5.4
[root@VM_0_9_centos data]# cd kibana-6.5.4/
[root@VM_0_9_centos kibana-6.5.4]# vim config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://IP:9200"
kibana.index: ".kibana"
Running Kibana
Kibana runs in the foreground, so either keep a dedicated terminal open or use screen.
[root@elk kibana-6.5.4]# yum -y install screen
[root@elk kibana-6.5.4]# screen # opens a new virtual terminal
[root@elk kibana-6.5.4]# ./bin/kibana
Then press Ctrl-a followed by d to detach from the screen session.
The kibana service keeps running in the foreground of that detached screen window.
[root@elk kibana-6.5.4]# screen -ls
There is a screen on:
15041.pts-0.elk-node1 (Detached)
1 Socket in /var/run/screen/S-root.
Note: reattaching a screen session
The example below shows two detached screen sessions; reattach with screen -r <screen_pid>:
[root@elk kibana-6.5.4]# screen -ls
There are screens on:
8736.pts-1.tivf18 (Detached)
8462.pts-0.tivf18 (Detached)
2 Sockets in /root/.screen.
[root@elk kibana-6.5.4]# screen -r 8736
The following covers partially localizing Kibana into Chinese. Version 6.5.4 does not appear to be supported; the localization introduces bugs.
[root@VM_0_9_centos ~]# mkdir /data/Sinicization
[root@VM_0_9_centos ~]# cd /data/Sinicization/
[root@VM_0_9_centos Sinicization]# git clone https://github.com/anbai-inc/Kibana_Hanization
Cloning into 'Kibana_Hanization'...
remote: Enumerating objects: 218, done.
remote: Total 218 (delta 0), reused 0 (delta 0), pack-reused 218
Receiving objects: 100% (218/218), 2.03 MiB | 712.00 KiB/s, done.
Resolving deltas: 100% (98/98), done.
[root@VM_0_9_centos Sinicization]# cd Kibana_Hanization/
[root@VM_0_9_centos Kibana_Hanization]# ls
config image main.py README.md requirements.txt
[root@VM_0_9_centos Kibana_Hanization]# python main.py /data/kibana-6.5.4/
File [/data/kibana-6.5.4/src/core_plugins/kibana/ui_setting_defaults.js] translated.
File [/data/kibana-6.5.4/src/core_plugins/kibana/index.js] translated.
...(36 more files translated, output trimmed)...
Congratulations, the Kibana localization is complete!
Install Logstash
[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
[root@VM_0_9_centos ~]# tar -zxf logstash-6.5.4.tar.gz -C /data/
[root@VM_0_9_centos ~]# vim /data/logstash-6.5.4/config/test.conf
input
{
kafka
{
bootstrap_servers => "10.7.1.112:9092"
topics => "nethospital_2"
codec => "json"
}
}
output
{
if [fields][tag] == "nethospital_2"
{
elasticsearch
{
hosts => ["10.7.1.111:9200"]
index => "nethospital_2-%{+YYYY-MM-dd}"
codec => "json"
}
}
}
[root@VM_0_9_centos logstash-6.5.4]# ./bin/logstash -f config/test.conf & # -f specifies the config file
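The `%{+YYYY-MM-dd}` suffix in the index setting makes Logstash write one index per day; the resulting name for today can be previewed in the shell (index prefix taken from the config above):

```shell
# Preview today's index name as Logstash would build it
idx="nethospital_2-$(date +%Y-%m-%d)"
echo "$idx"
```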
Install Kafka
[root@VM_0_9_centos ~]# wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
[root@VM_0_9_centos ~]# gzip -dv kafka_2.11-1.0.0.tgz
[root@VM_0_9_centos ~]# tar -xvf kafka_2.11-1.0.0.tar
[root@VM_0_9_centos ~]# mv kafka_2.11-1.0.0 /data/
[root@VM_0_9_centos ~]# wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
Configure ZooKeeper (bundled with Kafka)
[root@VM_0_9_centos ~]# cd /data/kafka_2.11-1.0.0
[root@VM_0_9_centos kafka_2.11-1.0.0]# vim config/zookeeper.properties
dataDir=/tmp/zookeeper/data # data persistence path
clientPort=2181 # client connection port
maxClientCnxns=100 # maximum number of client connections
dataLogDir=/tmp/zookeeper/logs # transaction log path
tickTime=2000 # ZooKeeper heartbeat interval, in milliseconds
initLimit=10 # time (in ticks) allowed for followers to connect and sync during leader election
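Since initLimit is measured in ticks rather than milliseconds, the values above give a follower initLimit x tickTime = 20000 ms for its initial sync; a quick sanity check:

```shell
tickTime=2000   # ms per tick (from zookeeper.properties above)
initLimit=10    # ticks allowed for the initial sync
echo "initial sync timeout: $((initLimit * tickTime)) ms"
```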
Start ZooKeeper
[root@VM_0_9_centos kafka_2.11-1.0.0]# ./bin/zookeeper-server-start.sh config/zookeeper.properties
[root@VM_0_9_centos kafka_2.11-1.0.0]# nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties & ## start in the background
Configure and start Kafka
[root@VM_0_9_centos kafka_2.11-1.0.0]# vim config/server.properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka
num.partitions=2
num.recovery.threads.per.data.dir=1
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
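Note that `listeners=PLAINTEXT://localhost:9092` only accepts local connections; when Filebeat or Logstash run on other hosts, the broker must listen on, and advertise, a reachable address. A minimal sketch, assuming 10.7.1.112 stands in for this broker's routable IP:

```properties
# server.properties: bind all interfaces, advertise the routable IP to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.7.1.112:9092
```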
[root@VM_0_9_centos kafka_2.11-1.0.0]# ./bin/kafka-server-start.sh config/server.properties
[root@VM_0_9_centos kafka_2.11-1.0.0]# nohup ./bin/kafka-server-start.sh config/server.properties & ## start in the background
Test Kafka
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-topics.sh --list --zookeeper localhost:2181
test
# Start a console producer to test
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-console-producer.sh --broker-list 10.2.151.203:9092 --topic test
# Start a console consumer
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-console-consumer.sh --zookeeper 10.2.151.203:2181 --topic test --from-beginning
Install Filebeat
Download and install
[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
[root@VM_0_9_centos ~]# tar -zxf filebeat-6.2.4-linux-x86_64.tar.gz -C /data/
[root@VM_0_9_centos ~]# mv /data/filebeat-6.2.4-linux-x86_64/ /data/filebeat-6.2.4
[root@VM_0_9_centos ~]# vim /data/filebeat-6.2.4/filebeat.yml
filebeat.prospectors:
- type: log   # filebeat 6.x uses type; input_type and document_type were removed
  paths:
    - /home/test/backup/mysql-*.log
  fields:     # custom metadata now goes under fields
    tag: mysql
  tail_files: true
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
output.kafka:
  hosts: ["192.168.1.99:9092"]
  topic: guo
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
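The multiline.pattern above marks any line beginning with a `[YYYY-MM-DD` timestamp as the start of a new event, so continuation lines (e.g. stack traces) get folded into the previous one. The regex can be checked with grep -E (the two sample lines are invented):

```shell
# First line starts a new event and matches; the continuation line does not
printf '%s\n' '[2019-01-16 10:00:01] query failed' '  at db.Connect(...)' \
  > /tmp/mysql-sample.log
grep -cE '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' /tmp/mysql-sample.log
```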
[root@VM_0_9_centos filebeat-6.2.4]# nohup ./filebeat -e -c filebeat.yml &
[3] 4276
[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cat/nodes' # check cluster nodes
or: curl -XGET 'http://10.2.151.203:9200/_cat/nodes?v'
curl -XGET 'http://10.2.151.203:9200/_cluster/state/nodes?pretty'
192.168.0.9 16 95 0 0.01 0.02 0.05 mdi * sjx_node-1
View the cluster master
[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cluster/state/master_node?pretty'
or: curl -XGET 'http://10.2.151.203:9200/_cat/master?v'
{
"cluster_name" : "sjx",
"compressed_size_in_bytes" : 12577,
"cluster_uuid" : "Si3hj1UhTIetue5-ydYAbw",
"master_node" : "CsmmrG8jR8WQIze8RDdcxw"
}
Query the cluster's health status
[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cluster/health?pretty'
or: curl -XGET 'http://10.2.151.203:9200/_cat/health?v'
{
"cluster_name" : "sjx",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 1,
"active_shards" : 1,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
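In monitoring scripts it is often enough to extract just the status field; one way is with grep and cut, shown here against a saved sample of the response above so it needs no running cluster:

```shell
# Saved sample of the _cluster/health response (trimmed)
cat > /tmp/health.json <<'EOF'
{
  "cluster_name" : "sjx",
  "status" : "green",
  "number_of_nodes" : 1
}
EOF

# Extract just the status value
grep -o '"status" : "[a-z]*"' /tmp/health.json | cut -d'"' -f4
```

Against a live cluster, the same pipeline would read from `curl -s 'http://localhost:9200/_cluster/health?pretty'` instead of the saved file.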
Install the cerebro plugin
cerebro is the successor to kopf for ES 5+; it provides a web UI for managing and monitoring elasticsearch cluster state
Download and install
[root@VM_0_9_centos ~]# wget https://github.com/lmenezes/cerebro/releases/download/v0.8.1/cerebro-0.8.1.tgz
[root@elk ~]# tar -zxf cerebro-0.8.1.tgz -C /data/
[root@elk ~]# cd /data/cerebro-0.8.1/
[root@elk cerebro-0.8.1]# vim conf/application.conf
hosts = [
{
host = "http://IP:9200"
name = "my-elk"
},
]
Start / access
[root@elk cerebro-0.8.1]# ./bin/cerebro ### run in the foreground first to check for startup errors
nohup ./bin/cerebro & # run in the background
Browse to http://IP:9000
Install the bigdesk plugin
bigdesk charts and aggregates elasticsearch cluster state
Download and install
# wget https://codeload.github.com/hlstudio/bigdesk/zip/master ## download locally, then upload with rz
[root@elk ~]# unzip bigdesk-master.zip
[root@elk ]# mv bigdesk-master /usr/share/elasticsearch/plugins/
[root@elk ]# cd /usr/share/elasticsearch/plugins/bigdesk-master/_site/
Spin up a quick HTTP server with python -m SimpleHTTPServer (on Python 3, the equivalent is python3 -m http.server)
[root@elk _site]# python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
Specify port 8000
[root@elk _site]# nohup python -m SimpleHTTPServer 8000 & # run in the background
[1] 6184
Browse to http://IP:8000/
If installing kopf instead (Elasticsearch 2.x only)
kopf provides a web UI for managing and monitoring elasticsearch cluster state, but as a site plugin it is not supported on ES 5+ (use cerebro there, as noted above); the bin/plugin command below likewise exists only in 2.x
[root@VM_0_9_centos ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
[root@VM_0_9_centos ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@VM_0_9_centos ~]# systemctl restart elasticsearch
Once the plugin is installed on both servers, move on to testing.