Quickly Deploying an Elasticsearch Cluster with Docker

This article walks through quickly deploying an Elasticsearch cluster with Docker, using containers to simulate multiple instances.

The latest 6.x releases no longer seem to accept the -Epath.conf parameter for pointing at a specific configuration directory. The documentation says:

For the archive distributions, the config directory location defaults to $ES_HOME/config. The location of the config directory can be changed via the ES_PATH_CONF environment variable as follows:
ES_PATH_CONF=/path/to/my/config ./bin/elasticsearch
Alternatively, you can export the ES_PATH_CONF environment variable via the command line or via your shell profile.

In other words, the config path is now taken from the ES_PATH_CONF environment variable (see the official documentation). If you are deploying multiple instances on one machine without containers, keep this in mind.
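For example, a minimal sketch of running two instances from a single archive distribution, each with its own config directory (the paths and pid file names below are placeholders):

#run two nodes from one archive, each reading its own config directory
ES_PATH_CONF=/opt/es/config-node0 ./bin/elasticsearch -d -p node0.pid
ES_PATH_CONF=/opt/es/config-node1 ./bin/elasticsearch -d -p node1.pid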

Preparation

Install docker & docker-compose

Here I recommend using the daocloud mirror for a faster install:

#docker
curl -sSL https://get.daocloud.io/docker | sh
#docker-compose
curl -L \
https://get.daocloud.io/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` \
> /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

#check the installation result
docker-compose -v

Data directories

#create data/log directories; we deploy 3 nodes here
mkdir /opt/elasticsearch/data/{node0,node1,node2} -p
mkdir /opt/elasticsearch/logs/{node0,node1,node2} -p
cd /opt/elasticsearch
#permissions here are messy: privileged alone was not enough, so just use 0777
chmod 0777 data/* -R && chmod 0777 logs/* -R

#prevent the JVM bootstrap check error
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p
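Before starting the containers you can confirm the kernel setting is active:

#confirm the new value took effect
sysctl vm.max_map_count
#expected: vm.max_map_count = 262144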

Creating the services with docker-compose

Create the compose file

vim docker-compose.yml

Parameter notes

- cluster.name=elasticsearch-cluster
The cluster name.

- node.name=node0
- node.master=true
- node.data=true
Node name, whether the node is master-eligible, and whether it stores data.

- bootstrap.memory_lock=true
Lock the process's memory so it cannot be swapped out, which improves performance (see the check after this list).

- http.cors.enabled=true
- http.cors.allow-origin=*
Enable CORS so the Head plugin can access the cluster.

- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
JVM heap size.

- "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
- "discovery.zen.minimum_master_nodes=2"
Multicast discovery is not supported in versions after 5.2.1, so the TCP transport addresses of the cluster nodes must be listed explicitly for node discovery and failover. The default transport port is 9300; if you use a different port, specify it here. In this setup the nodes talk to each other directly over the container network, but you could also map each node's 9300 to the host and communicate over host ports.
minimum_master_nodes sets the quorum for master election during failover: nodes / 2 + 1, which is 3 / 2 + 1 = 2 for our three master-eligible nodes.
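Once a node is running, you can also verify that the memory lock took effect, using the check suggested in the Elasticsearch docs (adjust host and port to the node you query):

curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'
#every node should report "mlockall" : true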

Of course, you can also mount a configuration file instead; in the ES image the config file lives at /usr/share/elasticsearch/config/elasticsearch.yml:

volumes:
  - path/to/local/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
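As a minimal sketch, such a file could be created with a heredoc; the values below simply mirror the environment variables used later and are only an example, not a complete configuration:

mkdir -p ./config
cat > ./config/elasticsearch.yml <<'EOF'
cluster.name: elasticsearch-cluster
node.name: node0
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF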

docker-compose.yml

version: '3'
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node0:/usr/share/elasticsearch/data
      - ./logs/node0:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
  elasticsearch_n1:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n1
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node1
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node1:/usr/share/elasticsearch/data
      - ./logs/node1:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
  elasticsearch_n2:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n2
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node2
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node2:/usr/share/elasticsearch/data
      - ./logs/node2:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200

Here we expose host ports 9200/9201/9202 as the HTTP endpoints for node0/node1/node2 respectively; transport traffic between the instances uses the default 9300 over the container network.
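Once the services are started (next step), each mapped host port answers for one node when queried from the Docker host, for example:

curl -s http://localhost:9200   #node0
curl -s http://localhost:9201   #node1
curl -s http://localhost:9202   #node2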

For a multi-host deployment, map each node's transport port (transport.tcp.port, default 9300) to a host port, and list the host addresses in discovery.zen.ping.unicast.hosts:

#for example, one of the hosts is 192.168.1.100
environment:
  ...
  - "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300"
  ...
ports:
  ...
  - 9300:9300

Create and start the services

[root@localhost elasticsearch]# docker-compose up -d
[root@localhost elasticsearch]# docker-compose ps
      Name                    Command               State                Ports              
--------------------------------------------------------------------------------------------
elasticsearch_n0   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_n1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9201->9200/tcp, 9300/tcp
elasticsearch_n2   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9202->9200/tcp, 9300/tcp

#check the logs if startup fails
[root@localhost elasticsearch]# docker-compose logs
#most failures come down to file-permission or vm.max_map_count issues
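To follow a single node's log while it starts up:

docker-compose logs -f elasticsearch_n0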

Check the cluster status

192.168.20.6 is my server's address.

Visit http://192.168.20.6:9200/_cat/nodes?v to see the cluster state:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.3           36          98  79    3.43    0.88     0.54 mdi       *      node0
172.25.0.2           48          98  79    3.43    0.88     0.54 mdi       -      node2
172.25.0.4           42          98  51    3.43    0.88     0.54 mdi       -      node1
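The cluster health API gives a quick summary as well:

curl -s 'http://192.168.20.6:9200/_cluster/health?pretty'
#"status" : "green" means all primary and replica shards are allocated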

Verifying Failover

Check the state via the cluster API

Simulate the master node going offline: the cluster elects a new master, migrates the data, and re-allocates the shards.

[root@localhost elasticsearch]# docker-compose stop elasticsearch_n0
Stopping elasticsearch_n0 ... done

Cluster state (note: query a different HTTP port, since the original master is down). The downed node still appears in the cluster; if it does not recover after a while, it gets removed.

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           57          84   5    0.46    0.65     0.50 mdi       -      node2
172.25.0.4           49          84   5    0.46    0.65     0.50 mdi       *      node1
172.25.0.3                                                       mdi       -      node0
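While the shards are being re-allocated you can watch the progress through _cat/health (querying port 9201, since node0's 9200 is down):

curl -s 'http://192.168.20.6:9201/_cat/health?v'
#the relo and unassign columns return to 0 once migration is complete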

After waiting a while:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           44          84   1    0.10    0.33     0.40 mdi       -      node2
172.25.0.4           34          84   1    0.10    0.33     0.40 mdi       *      node1

Bring node0 back online

[root@localhost elasticsearch]# docker-compose start elasticsearch_n0
Starting elasticsearch_n0 ... done

After waiting a while:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           52          98  25    0.67    0.43     0.43 mdi       -      node2
172.25.0.4           43          98  25    0.67    0.43     0.43 mdi       *      node1
172.25.0.3           40          98  46    0.67    0.43     0.43 mdi       -      node0

Observing with the Head plugin

The cluster state diagrams make the automatic data migration much easier to follow.
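If you do not have the Head plugin running yet, one common option is the community mobz/elasticsearch-head image (image name and tag are assumptions, not part of this setup); the CORS settings configured above are what let it connect:

docker run -d --name es-head -p 9100:9100 mobz/elasticsearch-head:5
#open http://<host>:9100 and connect it to http://192.168.20.6:9200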

1. Cluster healthy: the data is safely distributed across the 3 nodes

clipboard.png

2. Take the master node node1 offline: the cluster starts migrating data

Migration in progress
clipboard.png

Migration complete
clipboard.png

3. Restore node node1

clipboard.png
