ELK: Theory and Practice

Search engines:
    Indexing component: acquire data --> build documents --> analyze documents --> index the documents (inverted index)
    Search component: user search interface --> build the query (turn what the user typed into a processable query object) --> run the search query --> present the results

    Indexing component: Lucene
    Search components: Solr, ElasticSearch
    Note: the MyISAM engine of the MySQL database does support full-text indexes, but its format is rather limited and it is not well suited to serve as the indexing component of a search engine;
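A minimal sketch of what an inverted index looks like (the two documents are made up, purely for illustration):

    doc1: "Docker in Action"          doc2: "Elasticsearch in Action"

    term             postings (document: position)
    action           doc1:3, doc2:3
    docker           doc1:1
    elasticsearch    doc2:1
    in               doc1:2, doc2:2

A query for "action" only has to look up a single term entry to learn which documents contain it, instead of scanning every document.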
Lucene Core:
Apache LuceneTM is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
Solr:
SolrTM is a high performance search server built using Lucene Core, with XML/HTTP and JSON/Python/Ruby APIs, hit highlighting, faceted search, caching, replication, and a web admin interface.
ElasticSearch:
Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Elastic Stack:
    ElasticSearch
    Logstash
        Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” (Ours is Elasticsearch, naturally.)
    Beats:
        Filebeat: Log Files
        Metricbeat: Metrics
        Packetbeat: Network Data
        Winlogbeat: Windows Event Logs
        Heartbeat: Uptime Monitoring
    Kibana:
        Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack, so you can do anything from learning why you're getting paged at 2:00 a.m. to understanding the impact rain might have on your quarterly numbers.

    TF/IDF algorithm:
        https://zh.wikipedia.org/wiki/Tf-idf
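    A quick worked example of the formula (the numbers are invented): with term frequency tf(t,d), document frequency df(t) and N documents in total,

        tf-idf(t,d) = tf(t,d) × idf(t) = tf(t,d) × log( N / df(t) )

        e.g. N = 1000 documents, the term "docker" occurs 5 times in document d
             and appears in 10 documents overall:
             idf    = log(1000 / 10) = log(100) = 2        (base-10 log)
             tf-idf = 5 × 2 = 10

    A word that appears in nearly every document gets idf ≈ log(1) = 0 and therefore contributes almost nothing to the relevance score.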

Core components of ES:
    Physical components:
        Cluster:
            status: green, yellow, red
        Node:
        Shard:

    Lucene's core concepts (rough analogy to a relational database):
        index: database
        type: table
        document: row
        mapping: roughly the schema; defines the data type of each field
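    A hedged illustration of the analogy (the index name "books", the type "it" and the field values are assumptions for the example; the API form is that of ES 5.x):

        One document (one "row") in index "books", type "it":
            {
                "name"      : "Docker in Action",
                "publisher" : "wrox",
                "datatime"  : "2015-12-01",
                "author"    : "Blair"
            }

        The mapping (the "schema") for that type can be inspected with:
            curl -XGET 'http://server1:9200/books/_mapping/it?pretty'
        and lists each field together with its data type ("text", "date", ...).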

ElasticSearch 5 program environment:
    Config files:
        /etc/elasticsearch/elasticsearch.yml
        /etc/elasticsearch/jvm.options
        /etc/elasticsearch/log4j2.properties
    Unit file: elasticsearch.service
    Program files:
        /usr/share/elasticsearch/bin/elasticsearch
        /usr/share/elasticsearch/bin/elasticsearch-keystore: manages the keystore for secure settings
        /usr/share/elasticsearch/bin/elasticsearch-plugin: manages plugins
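            For example, listing, installing and removing plugins (the analysis-icu plugin is only used as an illustration):
                /usr/share/elasticsearch/bin/elasticsearch-plugin list
                /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu
                /usr/share/elasticsearch/bin/elasticsearch-plugin remove analysis-icu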

    Search service:
        9200/tcp

    Cluster service:
        9300/tcp

How an ELS cluster works:
    multicast or unicast discovery over 9300/tcp
    key factor: the cluster name

    All nodes elect one master node, which manages the state of the whole cluster (green/yellow/red) and decides how the shards are distributed;

    Plugins:
ELK architecture diagram:

Note: ELK is made up of Elasticsearch, Logstash and Kibana. In the diagram, the darker area in the middle is the part implemented by Elasticsearch, the data-collection layer at the bottom is implemented by Logstash, and Kibana provides the graphical search interface on top. Logstash itself is written in JRuby (Ruby running on the JVM), which makes it comparatively heavyweight and inefficient as a mere collector, so the lightweight Filebeat component appeared to take over that role. With Logstash, an agent component is planted on every log server that is to be collected; as soon as the logs change, the changed data is shipped to the Logstash server, turned into documents there, and the documents are then handed to the Elasticsearch cluster for further processing. Because the Lucene-based Solr search engine lagged behind in supporting distributed storage of big data, it has largely been displaced by the ELK stack;
http://lucene.apache.org/ Lucene, which turns the documents into an index
https://www.elastic.co/ the Elastic site, where the ELS packages can be downloaded
https://db-engines.com/en/ a site that ranks the popularity of database engines
elasticsearch cluster: elasticsearch is written in Java
Preparation: disable the firewall, set up chrony time synchronization, and use /etc/hosts entries for name resolution
https://mirrors.cnnic.cn a domestic mirror of the Elastic Stack packages; downloads are fast
yum install java-1.8.0-openjdk-devel -y
rpm -ivh elasticsearch-5.6.8.rpm        written in Java, hence the JDK above
scp elasticsearch-5.6.8.rpm server2:/root/        copy it over and install it with rpm there as well
scp elasticsearch-5.6.8.rpm server3:/root/
cd /etc/elasticsearch/
vim elasticsearch.yml
cluster.name: myels
node.name: server1
path.data: /els/data
path.logs: /els/logs        these directories must be created beforehand, owned by user and group elasticsearch
network.host: 192.168.43.60
discovery.zen.ping.unicast.hosts: ["server1","server2","server3"]
discovery.zen.minimum_master_nodes: 2        quorum; the cluster keeps working as long as 2 of the 3 nodes are up

vim jvm.options
-Xms1g        note: the initial and maximum heap sizes must be set to the same value
-Xmx1g
mkdir /els/{data,logs} -pv && chown -R elasticsearch.elasticsearch /els/*
scp elasticsearch.yml jvm.options server2:/etc/elasticsearch/
vim elasticsearch.yml
network.host: 192.168.43.63
node.name: server2
scp elasticsearch.yml jvm.options server3:/etc/elasticsearch/
vim elasticsearch.yml
network.host: 192.168.43.62
node.name: server3
java -version
systemctl daemon-reload && systemctl start elasticsearch
ss -ntl
curl http://server1:9200/        check whether the node responds
tail /els/logs/myels.log         check the log for errors
free -m                          check the memory size to decide the JVM heap for the VMs
curl -XGET 'http://server1:9200/_cluster/health?pretty=true'        query the cluster health
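If the three nodes have found each other, the health query returns something like the following (illustrative output, truncated; the shard counts depend on the indices that exist):
    {
      "cluster_name" : "myels",
      "status" : "green",
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3,
      "active_primary_shards" : 0,
      "unassigned_shards" : 0,
      ...
    }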

Cluster configuration:
elasticsearch.yml configuration file:
cluster.name: myels
node.name: node1
path.data: /data/els/data
path.logs: /data/els/logs
network.host: 0.0.0.0
http.port: 9200        port 9200 is for clients; 9300 is used for communication inside the cluster
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2
node.attr.rack: r1        lets the cluster place shards on different racks, so that one rack's switch going down does not take out all copies
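node.attr.rack only tags the node; to make shard allocation actually rack-aware, the awareness attribute has to be declared as well (a sketch using the standard ES 5.x setting):
    node.attr.rack: r1                                        this node sits in rack r1
    cluster.routing.allocation.awareness.attributes: rack     spread primaries and replicas across nodes with different rack values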

RESTful API:        CRUD operations (create, read, update, delete)
    curl  -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'
        <BODY>: the request body, in JSON format;

    <VERB>  request method
        GET, POST, PUT, DELETE; GET is the default method

    Special PATHs: /_cat, /_search, /_cluster

    <PATH>
        /index_name/type/Document_ID/

     curl -XGET 'http://10.1.0.67:9200/_cluster/health?pretty=true'

     curl -XGET 'http://10.1.0.67:9200/_cluster/stats?pretty=true'

    curl -XGET 'http://10.1.0.67:9200/_cat/nodes?pretty'

    curl -XGET 'http://10.1.0.67:9200/_cat/health?pretty'

curl http://server1:9200/_cat/indices        list the indices

    Creating a document:
        curl -XPUT
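    A minimal sketch of such a request (the index "books", type "it" and id 1 are made-up values):
        curl -XPUT 'http://server1:9200/books/it/1?pretty' -H 'Content-Type: application/json' -d '
        {
            "name": "Docker in Action",
            "publisher": "wrox",
            "datatime": "2015-12-01",
            "author": "Blair"
        }'
    ES answers with the index, type, _id and version of the newly created document.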

    Document:
        {"key1": "value1", "key2": value, ...}

ELS: distributed, open source, RESTful, near real-time
    Cluster: a collection of one or more nodes;
    Node: a single running ELS instance;
    Index: split into multiple independent shards; (from Lucene's point of view, every shard is itself an independent, complete index)
        primary shard: r/w
        replica shard: r
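    The shard layout is fixed when an index is created; a sketch (the index name and the counts are arbitrary):

        curl -XPUT 'http://node1:9200/myindex?pretty' -H 'Content-Type: application/json' -d '
        {
            "settings": {
                "number_of_shards":   5,
                "number_of_replicas": 1
            }
        }'

    With 5 primaries and 1 replica each, the 10 shards are spread across the nodes; number_of_replicas can be changed later, number_of_shards cannot.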

Queries:
    ELS exposes many APIs:
        _cluster, _cat, _search

    curl -X GET '<SCHEME>://<HOST>:<PORT>/[INDEX/TYPE/]_search?q=KEYWORD&sort=DOMAIN:[asc|desc]&from=#&size=#&_source=DOMAIN_LIST'

        /_search: search all indices and types;
        /INDEX_NAME/_search: search a single specified index;
        /INDEX1,INDEX2/_search: search several specified indices;
        /s*/_search: search all indices whose name starts with s;
        /INDEX_NAME/TYPE_NAME/_search: search one specified type within one specified index;

    Syntax of the simple query strings:
        http://lucene.apache.org/core/6_6_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package.description

    Query types: Query DSL, or simple query strings;

        Query conditions for text matching:
            (1) q=KEYWORD, equivalent to q=_all:KEYWORD
            (2) q=DOMAIN:KEYWORD

                    {
                        "name" : "Docker in Action",
                        "publisher" : "wrox",
                        "datatime" : "2015-12-01",
                        "author" : "Blair"
                    }

                    _all: "Docker in Action Wrox 2015-12-01 Blair"

            Changing the default query field: the df parameter

        Query modifiers:
            https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html

        Custom analyzer:
            analyzer=

        Default operator: OR/AND
            default_operator, defaults to OR

        Fields to return:
            fields=

            Note: not supported in 5.X;

        Sorting the results:
            sort=DOMAIN:[asc|desc]

        Search timeout:
            timeout=

        Result window:
            from=, defaults to 0;
            size=, defaults to 10;

    Lucene query syntax:
        q=
            KEYWORD
            DOMAIN:KEYWORD

        +DOMAIN:KEYWORD -DOMAIN:KEYWORD

    ELS supports many query types, e.g.:
        Full text queries
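    Putting the URI parameters together, a hedged example against the made-up "books" index used earlier:

        curl -XGET 'http://node1:9200/books/_search?q=publisher:wrox&sort=datatime:desc&from=0&size=5&pretty'

    This searches only the books index, returns at most the 5 newest matching documents whose publisher field contains wrox, and pretty-prints the JSON hit list.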

Installing the elasticsearch-head plugin: greatly reduces the complexity of querying from the command line
    5.X:
        (1) in the elasticsearch.yml configuration file, enable CORS:
            http.cors.enabled: true
            http.cors.allow-origin: "*"

        (2) install head:
            $ git clone https://github.com/mobz/elasticsearch-head.git
            $ cd elasticsearch-head
            $ npm install        (this step failed here; still to be resolved)
            $ npm run start

ELK:
    E: elasticsearch
    L: logstash, a log collection tool;
        ELK Beats Platform:
            PacketBeat: network packet analyzer; collects statistics from captured packets;
            Filebeat: the replacement for logstash-forwarder, i.e. a lightweight log shipper;
            Topbeat: collects basic system metrics such as CPU, memory and I/O statistics;
            Winlogbeat
            Metricbeat
            user-defined beats:
    Installing logstash:

wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-5.x/5.6.8/logstash-5.6.8.rpm
yum install java-1.8.0-openjdk-devel
rpm -ivh logstash-5.6.8.rpm
rpm -ql logstash | grep logstash$
vim /etc/logstash/conf.d/example1.conf
input {
    stdin {}
}

output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -f ./example1.conf -t        check the configuration for syntax errors
/usr/share/logstash/bin/logstash -f ./example1.conf           run it
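Typing a line such as "hello world" on stdin should produce an event roughly like the following on stdout (illustrative; host and timestamp will differ):
{
    "@timestamp" => 2018-05-31T08:22:58.000Z,
      "@version" => "1",
          "host" => "server1",
       "message" => "hello world"
}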

    input {
        ...
    }

    filter{
        ...
    }

    output {
        ...
    }

    A minimal example configuration:

        input {
            stdin {}
        }

        output {
            stdout {
                codec => rubydebug
            }
        }

    Example 2: read data from a file, run it through the grok filter plugin, and send it to standard output:
        input {
            file {
                path => ["/var/log/httpd/access_log"]
                start_position => "beginning"
            }
        }

        filter {
            grok {
                match => {
                    "message" => "%{COMBINEDAPACHELOG}"
                }
                remove_field => "message"
            }
        }

        output {
            stdout {
                codec => rubydebug
            }
        }

    Example 3: the date filter plugin:
            filter {
                    grok {
                            match => {
                                    "message" => "%{HTTPD_COMBINEDLOG}"
                            }
                            remove_field => "message"
                    }
                    date {
                            match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
                            remove_field => "timestamp"
                    }

            }               

    Plugin: mutate (modifies event content)
        The mutate filter allows you to perform general mutations on fields. You can rename, remove, replace, and modify fields in your events.

    Example 4: the mutate filter plugin
        filter {
                grok {
                        match => {
                                "message" => "%{HTTPD_COMBINEDLOG}"
                        }
                }
                date {
                        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
                }
                mutate {
                        rename => {
                                "agent" => "user_agent"
                        }
                }
        } 

    Example 5: the geoip plugin

        filter {
                grok {
                        match => {
                                "message" => "%{HTTPD_COMBINEDLOG}"
                        }
                }
                date {
                        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
                }
                mutate {
                        rename => {
                                "agent" => "user_agent"
                        }
                }
                geoip {
                        source => "clientip"
                        target => "geoip"
                        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
                }
        }            
      echo '47.98.120.224 - - [31/May/2018:16:22:58 +0800] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0"' >> /var/log/httpd/access_log        append an entry to the httpd access log and check whether geoip information for the IP shows up

    Example 6: using Redis
        (1) load data from redis
            input {
                redis {
                    batch_count => 1
                    data_type => "list"
                    key => "logstash-list"
                    host => "192.168.0.2"
                    port => 6379
                    threads => 5
                }
            } 

        (2) write data into redis
            output {
                redis {
                    #data_type => "channel"
                    #key => "logstash-%{+yyyy.MM.dd}"
                    host => ["192.168.43.66"]
                    port => 6379
                    db => 8
                    data_type => "list"
                    key => "logstash-%{+YYYY.MMM.dd}"
                }
            }
/usr/share/logstash/bin/logstash -f ./example6.conf        run it; the processed events are written into redis
Note: new log data has to arrive before anything shows up; check on the redis side
On the redis host:
yum install redis
vim /etc/redis.conf
bind 0.0.0.0
systemctl restart redis
redis-cli                                connect to redis
select 8                                 switch to database 8
keys *                                   check whether any keys/data have arrived
help @list                               show the list commands
lrange logstash-2018.May.31 0 10         show a range of entries from the list
Example: writing data into the ELS cluster

        output {
            elasticsearch {
                hosts => ["http://node1:9200/","http://node2:9200/","http://node3:9200/"]
                user => "ec18487808b6908009d3"
                password => "efcec6a1e0"
                index => "logstash-%{+YYYY.MM.dd}"
                document_type => "apache_logs"
            }
        }        

     Example: a combined configuration with a Beats input and geoip enabled

        input {
            beats {
                port => 5044
            }
        }

        filter {
            grok {
                match => { 
                "message" => "%{COMBINEDAPACHELOG}"
                }
                remove_field => "message"
            }
            geoip {
                source => "clientip"
                target => "geoip"
                database => "/etc/logstash/GeoLite2-City.mmdb"
            }
        }

        output {
            elasticsearch {
                hosts => ["http://172.16.0.67:9200","http://172.16.0.68:9200","http://172.16.0.69:9200"]
                index => "logstash-%{+YYYY.MM.dd}"
                action => "index"
                document_type => "apache_logs"
            }
        }        

    grok:
        %{SYNTAX:SEMANTIC}
            SYNTAX: the name of a predefined pattern;
            SEMANTIC: the key name given to the text matched by the pattern;

            1.2.3.4 GET /logo.jpg  203 0.12
            %{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}

            { clientip: 1.2.3.4, method: GET, request: /logo.jpg, bytes: 203, duration: 0.12}

            %{IPORHOST:client_ip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)" %{HOST:domain} %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} "(%{WORD:x_forword}|-)" (%{URIHOST:upstream_host}|-) %{NUMBER:upstream_response} (%{WORD:upstream_cache_status}|-) %{QS:upstream_content_type} (%{BASE16FLOAT:upstream_response_time}) > (%{BASE16FLOAT:request_time})

             "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:time}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>\S+)\" \"(?<http_x_forwarded_for>\S+)\""

             filter {
                grok {
                    match => {
                        "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:time}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>\S+)\" \"(?<http_x_forwarded_for>\S+)\""
                    }
                    remove_field => "message"
                }   
            }

            nginx.remote.ip
            [nginx][remote][ip] 

            filter {
                grok {
                    match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
                    remove_field => "message"
                }
                date {
                    match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
                    remove_field => "[nginx][access][time]"
                }  
                useragent {
                    source => "[nginx][access][agent]"
                    target => "[nginx][access][user_agent]"
                    remove_field => "[nginx][access][agent]"
                }  
                geoip {
                    source => "[nginx][access][remote_ip]"
                    target => "geoip"
                    database => "/etc/logstash/GeoLite2-City.mmdb"
                }  

            }   

            output {                                                                                                     
                elasticsearch {                                                                                      
                    hosts => ["node1:9200","node2:9200","node3:9200"]                                            
                    index => "logstash-ngxaccesslog-%{+YYYY.MM.dd}"                                              
                }                                                                                                    
            }

            Notes:
                1. the target index name must begin with "logstash-" so that the type of geoip.location is automatically set to "geo_point" (that prefix matches the default index template);
                2. target => "geoip"

    Besides using the grok filter plugin to turn the log output into structured JSON, the service itself can also be configured to log in JSON format directly;

    Example: structuring the nginx access log with grok
        filter {
                grok {
                        match => {
                                "message" => "%{HTTPD_COMBINEDLOG} \"%{DATA:realclient}\""
                        }
                        remove_field => "message"
                }
                date {
                        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
                        remove_field => "timestamp"
                }
        }            

    Example: structuring the tomcat access log with grok
        filter {
                grok {
                        match => {
                                "message" => "%{HTTPD_COMMONLOG}"
                        }
                        remove_field => "message"
                }
                date {
                        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
                        remove_field => "timestamp"
                }
        } 

    Nginx logging in JSON:
        log_format   json  '{"@timestamp":"$time_iso8601",'
                    '"@source":"$server_addr",'
                    '"@nginx_fields":{'
                        '"client":"$remote_addr",'
                        '"size":$body_bytes_sent,'
                        '"responsetime":"$request_time",'
                        '"upstreamtime":"$upstream_response_time",'
                        '"upstreamaddr":"$upstream_addr",'
                        '"request_method":"$request_method",'
                        '"domain":"$host",'
                        '"url":"$uri",'
                        '"http_user_agent":"$http_user_agent",'
                        '"status":$status,'
                        '"x_forwarded_for":"$http_x_forwarded_for"'
                    '}'
                '}';

        access_log  logs/access.log  json;                  
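    With the access log already emitted as JSON, the Logstash side no longer needs grok; a minimal sketch (the log path is an assumption, use whatever access_log points to):

        input {
            file {
                path => ["/usr/local/nginx/logs/access.log"]
                start_position => "beginning"
                codec => "json"            # parse each line as one JSON document
            }
        }

        output {
            stdout {
                codec => rubydebug
            }
        }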

Conditionals
Sometimes you only want to filter or output an event under certain conditions. For that, you can use a conditional.

Conditionals in Logstash look and act the same way they do in programming languages. Conditionals support if, else if and else statements and can be nested.

The conditional syntax is:

    if EXPRESSION {
    ...
    } else if EXPRESSION {
    ...
    } else {
    ...
    }    

    What’s an expression? Comparison tests, boolean logic, and so on!

    You can use the following comparison operators:

    equality: ==, !=, <, >, <=, >=
    regexp: =~, !~ (checks a pattern on the right against a string value on the left)
    inclusion: in, not in

    The supported boolean operators are:

        and, or, nand, xor

    The supported unary operators are:

        !
    Expressions can be long and complex. Expressions can contain other expressions, you can negate expressions with !, and you can group them with parentheses (...).

    filter {

        if [type] == 'tomcat-accesslog' {
            grok {}
        }

        if [type] == 'httpd-accesslog' {
            grok {}
        }

    }
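    The [type] field tested above has to be set somewhere upstream; one common way is the type option of the input plugin (a sketch, the log paths are illustrative):

        input {
            file {
                path => ["/var/log/tomcat/access_log"]
                type => "tomcat-accesslog"        # sets the [type] field on every event from this input
            }
            file {
                path => ["/var/log/httpd/access_log"]
                type => "httpd-accesslog"
            }
        }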

1. The Lucene indexing component
Lucene data is organized in terms of:
index: corresponds to a database; each index is one db
type: corresponds to a table; logs from different applications have different structures and go into different tables
document: corresponds to a row; stored as a set of key-value pairs
mapping: defines the data type of each field (key)

2. The ES component

When the index is stored, a cluster of nodes is used and the index is split into shards to add redundancy; shards come as primaries and replicas. At search time, requests are routed to whichever node holds the data instead of being hard-wired to files; the ES component embeds Lucene and implements the middle layer of the stack.

3. Cluster status of the ES search component

The cluster status of ES is expressed with 3 colors:
green: every shard has its primary and all of its replicas allocated
yellow: all primaries are allocated, but some replica is missing
red: at least one primary shard is missing
If a network partition occurs and two groups of nodes can no longer talk to each other, a split-brain situation can arise, so a quorum vote is needed to decide which side keeps working normally and which side goes offline and waits. That is why the cluster should have an odd number of nodes.
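In ES 5.x this quorum is expressed with discovery.zen.minimum_master_nodes, which should be set to (number of master-eligible nodes / 2) + 1; for the 3-node cluster used here that means:

    discovery.zen.minimum_master_nodes: 2        floor(3/2) + 1 = 2, set in elasticsearch.yml on all three nodes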

4. Logstash plugins
Logstash pulls data from the configured data sources via input plugins and pushes the processed data to the configured destinations via output plugins; in between, filter plugins process the data, e.g. turning raw log lines into structured documents;
https://www.elastic.co/guide/en/logstash/5.6/index.html reference documentation
Typical Logstash deployment patterns:
logstash server --> elasticsearch
logstash server/filebeat host --> redis server --> logstash server --> elasticsearch

5. Lab: building out the Logstash architecture
logstash server/filebeat host --> redis server --> logstash server --> elasticsearch
(1) filebeat configuration, host IP 192.168.43.61
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-5.x/5.6.8/filebeat-5.6.8-x86_64.rpm        download filebeat from the Tsinghua mirror
rpm -ivh filebeat-5.6.8-x86_64.rpm        install the package
cd /etc/filebeat/
vim filebeat.yml
paths:
  - /var/log/httpd/access_log
#------------------------------- Redis output ----------------------------------
output.redis:
  enabled: true
  hosts: ["192.168.43.66:6379"]
  db: 6
  datatype: list
  key: filebeat
systemctl start filebeat
The steps above make filebeat ship the collected data into redis.
yum install httpd        the source of the logs
echo '<h1>HelloWorld</h1>' > /var/www/html/index.html
systemctl start httpd
echo '223.5.5.5 - - [01/Jun/2018:14:03:58 +0800] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0"' >> /var/log/httpd/access_log        fake an entry with a public IP so geoip has something to resolve
curl http://172.18.62.61        hit the page to generate log entries
(2) redis configuration, host IP 192.168.43.66
yum install redis
vim /etc/redis.conf
bind 0.0.0.0
systemctl start redis
redis-cli        connect to redis

(3) logstash server configuration, host IP 192.168.43.61
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-5.x/5.6.8/logstash-5.6.8.rpm
yum install java-1.8.0-openjdk-devel
rpm -ivh logstash-5.6.8.rpm
rpm -ql logstash | grep logstash$
vim /etc/logstash/conf.d/example8.conf
input {
    redis {
        host => "192.168.43.66"
        port => 6379
        db => 6
        key => "filebeat"
        data_type => "list"
        threads => 6
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
        remove_field => "message"
    }
    date {
        match => [ "timestamp", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "timestamp"
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}

output {
    elasticsearch {
        hosts => ["http://server1:9200","http://server2:9200","http://server3:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        document_type => "apache_logs"
    }
}

/usr/share/logstash/bin/logstash -f ./example8.conf -t        check the configuration for syntax errors
systemctl start logstash
(4) elasticsearch configuration, host IPs 192.168.43.60/62/63
The Elasticsearch side is set up exactly as described in the "elasticsearch cluster" section above: install java-1.8.0-openjdk-devel and elasticsearch-5.6.8.rpm on server1/2/3, configure elasticsearch.yml and jvm.options, create /els/{data,logs} owned by the elasticsearch user, start the service, and verify with curl http://server1:9200/ and the _cluster/health query.
(5) kibana configuration
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-5.x/5.6.8/kibana-5.6.8-x86_64.rpm        download the kibana web UI from the Tsinghua mirror
rpm -ivh kibana-5.6.8-x86_64.rpm        install the package
cd /etc/kibana/
vim kibana.yml
server.port: 5601                              listening port
server.host: "0.0.0.0"                         allow access from any host
server.name: "redis"                           server name
elasticsearch.url: "http://server1:9200"       the elasticsearch endpoint kibana talks to
elasticsearch.preserveHost: true
kibana.index: ".kibana"                        the index in which kibana stores its own settings
systemctl restart kibana
http://172.18.62.66:5601        open this in a browser
