Redis Cluster (redis + cluster + sentinel)


  1. Overview

     

    Note: This experiment uses three virtual machines, c1, c2 and c3; each server runs one master, one slave and one sentinel. When a master fails, its slave takes its place; when a master and its corresponding slave fail at the same time, data is lost! For this reason, production environments use a one-master, multiple-slave cluster topology!

  2. Environment Setup

    The servers are as follows:

    c1 192.168.10.11
    c2 192.168.10.12
    c3 192.168.10.13

     

    Configure the system parameters on every server that will run Redis by executing the following script:

    # cat xitongcanshu.sh
    #!/bin/bash
    # Raise the TCP listen backlog and allow memory overcommit (needed for BGSAVE/AOF rewrite)
    echo 'net.core.somaxconn=512' >> /etc/sysctl.conf
    echo 'vm.overcommit_memory=1' >> /etc/sysctl.conf
    # Disable transparent huge pages now, and again on every boot via rc.local
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
    chmod +x /etc/rc.d/rc.local
    sysctl -p
    # Raise the open-file and process limits
    cat >> /etc/security/limits.conf << EOF
    * soft nofile 65535
    * hard nofile 65535
    * soft nproc 65535
    * hard nproc 65535
    EOF
    ulimit -SHn 65535
    ulimit -n
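
    A quick sanity check (optional) to confirm the settings took effect; expected values are shown as comments:
    [root@c1 ~]# sysctl net.core.somaxconn vm.overcommit_memory
    [root@c1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled    # should print: always madvise [never]
    [root@c1 ~]# ulimit -n    # should print 65535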

  3. redis

    Note: In this experiment one master and one slave are deployed on each of c1, c2 and c3!

    [root@c1 ~]# wget http://download.redis.io/releases/redis-4.0.9.tar.gz
    [root@c1 ~]# tar xf redis-4.0.9.tar.gz -C /usr/local/
    [root@c1 ~]# ln -sv /usr/local/redis-4.0.9 /usr/local/redis
    [root@c1 ~]# yum -y install tcl openssl-devel zlib-devel
    [root@c1 /usr/local/redis]# make && make PREFIX=/usr/local/redis-4.0.9/ install
    [root@c1 /usr/local/redis]# make test
    [root@c1 ~]# mkdir -pv /etc/redis-cluster/{7001,7002} /var/log/redis

    # Copy the default configuration file and adjust it as needed. Since this is a lab environment, a minimal configuration is used:
    [root@c1 ~]# cat /etc/redis-cluster/7001/redis.conf
    port 7001
    bind 192.168.10.11
    cluster-enabled yes
    cluster-config-file /etc/redis-cluster/7001/nodes.conf
    logfile /var/log/redis/redis_7001.log
    cluster-node-timeout 5000
    appendonly yes
    daemonize yes

    [root@c1 ~]# cat /etc/redis-cluster/7002/redis.conf
    port 7002
    bind 192.168.10.11
    cluster-enabled yes
    cluster-config-file /etc/redis-cluster/7002/nodes.conf
    logfile /var/log/redis/redis_7002.log
    cluster-node-timeout 5000
    appendonly yes
    daemonize yes

     

    # Start both instances
    [root@c1 ~]# /usr/local/redis/bin/redis-server /etc/redis-cluster/7001/redis.conf
    [root@c1 ~]# /usr/local/redis/bin/redis-server /etc/redis-cluster/7002/redis.conf

     

    # Repeat the above on c2 and c3; only the bind IP in the configuration needs to change!
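
    A quick liveness check (optional; shown for c1, assuming both instances started):
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.11 -p 7001 ping
    PONG
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.11 -p 7001 info server | grep redis_mode
    redis_mode:cluster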

  4. cluster

    Note: The master/slave pairing in this experiment is as follows:
    c1 master --- c2 slave M/S 1
    c2 master --- c3 slave M/S 2
    c3 master --- c1 slave M/S 3

     

    # Note: The following steps compile and install Ruby from source so that redis-trib.rb can be run to create the cluster; the Ruby version installed via yum may be too old!

    [root@c1 ~]# wget https://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.1.tar.gz
    [root@c1 ~]# wget https://rubygems.org/downloads/redis-4.0.0.gem
    [root@c1 ~]# tar -xf ruby-2.5.1.tar.gz -C /usr/local/
    [root@c1 /usr/local/ruby-2.5.1]# ./configure --prefix=/usr/local/ruby-2.5.1
    [root@c1 /usr/local/ruby-2.5.1]# make && make install
    [root@c1 ~]# ln -sv /usr/local/ruby-2.5.1/bin/gem /usr/bin/gem
    [root@c1 ~]# ln -sv /usr/local/ruby-2.5.1/bin/ruby /usr/bin/ruby

    # In /usr/local/ruby-2.5.1/ext/openssl/Makefile and /usr/local/ruby-2.5.1/ext/zlib/Makefile, add the following lines where the variables are defined (comment out any existing definitions of these variables):
    srcdir = .
    top_srcdir = ../..
    topdir = /usr/local/ruby-2.5.1/include/ruby-2.5.0

    [root@c1 /usr/local/ruby-2.5.1/ext/zlib]# ruby extconf.rb
    [root@c1 /usr/local/ruby-2.5.1/ext/zlib]# make && make install
    [root@c1 /usr/local/ruby-2.5.1/ext/openssl]# ruby extconf.rb
    [root@c1 /usr/local/ruby-2.5.1/ext/openssl]# make && make install
    [root@c1 ~]# gem install -l redis-4.0.0.gem

     

    # Create the cluster. Argument order: the first three addresses become masters, the remaining three become slaves
    [root@c1 ~]# /usr/local/redis/src/redis-trib.rb create --replicas 1 192.168.10.11:7001 192.168.10.12:7001 192.168.10.13:7001 192.168.10.12:7002 192.168.10.13:7002 192.168.10.11:7002

    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    192.168.10.11:7001
    192.168.10.12:7001
    192.168.10.13:7001
    Adding replica 192.168.10.12:7002 to 192.168.10.11:7001
    Adding replica 192.168.10.13:7002 to 192.168.10.12:7001
    Adding replica 192.168.10.11:7002 to 192.168.10.13:7001
    M: 440541e2a3235205bf190336a1f37f127d18bf60 192.168.10.11:7001
       slots:0-5460 (5461 slots) master
    M: c588a93825de6e0e6730a8bbb072684619201803 192.168.10.12:7001
       slots:5461-10922 (5462 slots) master
    M: 9ba21cfda0fed2d9013103e934f199a247c378ef 192.168.10.13:7001
       slots:10923-16383 (5461 slots) master
    S: f07abd56170635aaad5166bd38af9f7267834ca7 192.168.10.12:7002
       replicates 440541e2a3235205bf190336a1f37f127d18bf60
    S: 1aa03c91fc62ac72aeccf349d040f32ae190120b 192.168.10.13:7002
       replicates c588a93825de6e0e6730a8bbb072684619201803
    S: ff7e453f9ad5d2db2c7867893700fec033767bd9 192.168.10.11:7002
       replicates 9ba21cfda0fed2d9013103e934f199a247c378ef
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join..
    >>> Performing Cluster Check (using node 192.168.10.11:7001)
    M: 440541e2a3235205bf190336a1f37f127d18bf60 192.168.10.11:7001
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: 1aa03c91fc62ac72aeccf349d040f32ae190120b 192.168.10.13:7002
       slots: (0 slots) slave
       replicates c588a93825de6e0e6730a8bbb072684619201803
    S: ff7e453f9ad5d2db2c7867893700fec033767bd9 192.168.10.11:7002
       slots: (0 slots) slave
       replicates 9ba21cfda0fed2d9013103e934f199a247c378ef
    M: 9ba21cfda0fed2d9013103e934f199a247c378ef 192.168.10.13:7001
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: c588a93825de6e0e6730a8bbb072684619201803 192.168.10.12:7001
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    S: f07abd56170635aaad5166bd38af9f7267834ca7 192.168.10.12:7002
       slots: (0 slots) slave
       replicates 440541e2a3235205bf190336a1f37f127d18bf60
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

     

    # View the node information
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.13 -p 7001
    192.168.10.13:7001> cluster nodes
    ff7e453f9ad5d2db2c7867893700fec033767bd9 192.168.10.11:7002@17002 slave 9ba21cfda0fed2d9013103e934f199a247c378ef 0 1527578162996 6 connected
    1aa03c91fc62ac72aeccf349d040f32ae190120b 192.168.10.13:7002@17002 slave c588a93825de6e0e6730a8bbb072684619201803 0 1527578161483 5 connected
    440541e2a3235205bf190336a1f37f127d18bf60 192.168.10.11:7001@17001 master - 0 1527578162000 1 connected 0-5460
    f07abd56170635aaad5166bd38af9f7267834ca7 192.168.10.12:7002@17002 slave 440541e2a3235205bf190336a1f37f127d18bf60 0 1527578161000 4 connected
    c588a93825de6e0e6730a8bbb072684619201803 192.168.10.12:7001@17001 master - 0 1527578162491 2 connected 5461-10922
    9ba21cfda0fed2d9013103e934f199a247c378ef 192.168.10.13:7001@17001 myself,master - 0 1527578162000 3 connected 10923-16383
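
    A quick smoke test (optional): with the -c flag, redis-cli follows MOVED redirections, so a key is transparently written to whichever master owns its hash slot. The key name foo here is just an example:
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -c -h 192.168.10.11 -p 7001
    192.168.10.11:7001> set foo bar
    -> Redirected to slot [12182] located at 192.168.10.13:7001
    OK
    192.168.10.13:7001> get foo
    "bar"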

  5. sentinel

    Introduction

        Redis-Sentinel is the officially recommended high-availability (HA) solution. It runs as an independent process and can monitor multiple master/slave groups. To avoid a single point of failure, the sentinels themselves can be clustered. Its main functions are as follows:

    1. Monitoring: sentinel continuously checks that masters and slaves are alive;
    2. Notification: when a Redis node fails, a notification can be sent via an API;
    3. Automatic failover: when a master fails, one of its slaves is elected as the new master, and the remaining slaves automatically start replicating from the new master's address;
    4. Configuration provider: sentinel is the authoritative source for client service discovery: clients connect to sentinel to ask for the address of the current master, and after a failover sentinel reports the new address (see the example below).
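
    For instance, once the sentinels configured below are running, any of them can be asked for the current master address of a monitored group (mymaster1 is a master name defined in the configuration that follows):
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -p 27001 sentinel get-master-addr-by-name mymaster1
    1) "192.168.10.11"
    2) "7001"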

     

    Configuration

    Note: In this experiment the three sentinels are deployed on c1, c2 and c3 respectively!

    [root@c1 ~]# cp /usr/local/redis/sentinel.conf /etc/redis-cluster/
    [root@c1 ~]# cat /etc/redis-cluster/sentinel.conf
    protected-mode no
    port 27001
    daemonize yes
    logfile "/var/log/sentinel.log"
    # each master is monitored under its own name; the trailing 2 is the quorum (number of sentinels that must agree the master is down)
    sentinel monitor mymaster1 192.168.10.11 7001 2
    sentinel monitor mymaster2 192.168.10.12 7001 2
    sentinel monitor mymaster3 192.168.10.13 7001 2
    sentinel down-after-milliseconds mymaster1 10000
    sentinel down-after-milliseconds mymaster2 10000
    sentinel down-after-milliseconds mymaster3 10000
    sentinel parallel-syncs mymaster1 1
    sentinel parallel-syncs mymaster2 1
    sentinel parallel-syncs mymaster3 1
    sentinel failover-timeout mymaster1 15000
    sentinel failover-timeout mymaster2 15000
    sentinel failover-timeout mymaster3 15000

     

    # Start the sentinel
    [root@c1 ~]# /usr/local/redis/bin/redis-sentinel /etc/redis-cluster/sentinel.conf

     

    # Repeat the above on c2 and c3; only the corresponding port in the configuration needs to change!
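
    To confirm the sentinels see the whole topology (optional check), INFO sentinel can be queried on any of them; once all three sentinels are up, each master line should report slaves=1 and sentinels=3:
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -p 27001 info sentinel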

     

    Note: After the sentinel starts, the configuration file is automatically rewritten as monitoring information is gathered, as shown below:

    [root@c1 ~]# cat /etc/redis-cluster/sentinel.conf
    protected-mode no
    port 27001
    daemonize yes
    logfile "/var/log/sentinel.log"
    sentinel myid e3733670b609b65e520b293789e4fbf10236089c
    sentinel monitor mymaster3 192.168.10.13 7001 2
    sentinel down-after-milliseconds mymaster3 10000
    sentinel failover-timeout mymaster3 15000
    sentinel config-epoch mymaster3 0
    sentinel leader-epoch mymaster3 0
    sentinel known-slave mymaster3 192.168.10.11 7002
    sentinel monitor mymaster1 192.168.10.11 7001 2
    sentinel down-after-milliseconds mymaster1 10000
    sentinel failover-timeout mymaster1 15000
    sentinel config-epoch mymaster1 0
    sentinel leader-epoch mymaster1 0
    # Generated by CONFIG REWRITE
    dir "/etc/redis-cluster"
    sentinel known-slave mymaster1 192.168.10.12 7002
    sentinel monitor mymaster2 192.168.10.12 7001 2
    sentinel down-after-milliseconds mymaster2 10000
    sentinel failover-timeout mymaster2 15000
    sentinel config-epoch mymaster2 0
    sentinel leader-epoch mymaster2 0
    sentinel known-slave mymaster2 192.168.10.13 7002
    sentinel current-epoch 0

  6. Failure Simulation

    # The sentinel log shows that 192.168.10.11:7001 is a master and that its corresponding slave is 192.168.10.12:7002;

     

    # Manually stop master 192.168.10.11:7001; slave 192.168.10.12:7002 is promoted to master
    [root@c1 ~]# ps -ef |grep redis
    root 4243 1 0 03:05 ? 00:00:23 /usr/local/redis/bin/redis-server 192.168.10.11:7001 [cluster]
    root 4245 1 0 03:05 ? 00:00:23 /usr/local/redis/bin/redis-server 192.168.10.11:7002 [cluster]
    root 8472 1 1 03:45 ? 00:00:07 /usr/local/redis/bin/redis-sentinel *:27001 [sentinel]
    [root@c1 ~]# kill 4243

    # The sentinel log then records the failover.

     

    # Log in and view the node information
    # /usr/local/redis/bin/redis-cli -h 192.168.10.13 -p 7001

     

    # Start the Redis instance at 192.168.10.11:7001 again
    [root@c1 ~]# /usr/local/redis/bin/redis-server /etc/redis-cluster/7001/redis.conf

     

    # The sentinel log is updated accordingly.

     

    # View the node information again: 192.168.10.11:7001 has become a slave of 192.168.10.12:7002
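
    The role change can also be confirmed per instance (optional check, assuming both instances are reachable):
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.11 -p 7001 info replication | grep role
    role:slave
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.12 -p 7002 info replication | grep role
    role:master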

  7. Troubleshooting

    Problem
        When creating the cluster with # /usr/local/redis/src/redis-trib.rb create --replicas 1 *** the following error is reported:

    >>> Creating cluster
    [ERR] Node 192.168.10.11:7001 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

     

    Solution
    1. Delete the node's local AOF and RDB backup files;
    2. Delete the node's cluster configuration file, i.e. the file set by cluster-config-file in redis.conf (see the sketch below);
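
    A cleanup sketch for one node, assuming the file locations used in this setup (stop the instance before removing files; appendonly.aof and dump.rdb live in the directory redis-server was started from, since dir is not set in redis.conf):
    [root@c1 ~]# rm -f appendonly.aof dump.rdb
    [root@c1 ~]# rm -f /etc/redis-cluster/7001/nodes.conf
    # Alternatively, on a running instance, empty it and reset its cluster state:
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.11 -p 7001 flushall
    [root@c1 ~]# /usr/local/redis/bin/redis-cli -h 192.168.10.11 -p 7001 cluster reset hard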
