Introduction
Twemproxy (also known as nutcracker) is a fast, lightweight Redis and Memcached proxy open-sourced by Twitter. It is a single-threaded proxy that supports the Memcached ASCII protocol and the Redis protocol.
By introducing a proxy layer, Twemproxy manages and distributes requests across multiple backend Redis or Memcached instances, so the application only talks to Twemproxy and does not need to know how many real Redis or Memcached stores sit behind it.
This deployment uses redis + sentinel + twemproxy to achieve the following:
1. Twemproxy (primary/backup nodes) acts as the front-end proxy, managing and distributing requests across the sharded Redis instances behind it;
2. In each shard, the Redis slave is a read-only replica of its master;
3. Redis Sentinel continuously monitors the master of each shard; when a master fails and becomes unavailable, Sentinel triggers automatic failover;
4. After a failover, Sentinel can invoke a script (configured via the client-reconfig-script parameter) that reads the new master's address, rewrites the twemproxy configuration, and restarts twemproxy.
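Sentinel invokes client-reconfig-script with a fixed argument list, which the failover script later in this post relies on. A minimal local stub (the path /tmp/reconfig-demo.sh is hypothetical, just for illustration) shows the argument order:

```shell
# Sentinel calls the script as:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
cat > /tmp/reconfig-demo.sh <<'EOF'
#!/bin/bash
echo "master=$1 old=$4:$5 new=$6:$7"
EOF
chmod +x /tmp/reconfig-demo.sh

# simulate a failover of redis1 from 10.20.10.43:6379 to 10.20.10.44:6380
/tmp/reconfig-demo.sh redis1 leader start 10.20.10.43 6379 10.20.10.44 6380
# prints: master=redis1 old=10.20.10.43:6379 new=10.20.10.44:6380
```

Positions 4-7 carry the old and new master address, which is all the twemproxy rewrite needs.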
Architecture
| Cluster | Role | IP | Port | Components |
| --- | --- | --- | --- | --- |
| redis1 | master | 10.20.10.43 | 6379 | redis + sentinel |
| | slave | 10.20.10.44 | 6380 | |
| redis2 | master | 10.20.10.44 | 6379 | redis + sentinel |
| | slave | 10.20.10.45 | 6380 | |
| redis3 | master | 10.20.10.45 | 6379 | redis + sentinel |
| | slave | 10.20.10.43 | 6380 | |
| proxy | proxy1 | 10.20.10.46 | 22121 | twemproxy + sentinel |
| | proxy2 | 10.20.10.47 | 22121 | |
| | proxy3 | 10.20.10.48 | 22121 | |
As the table above shows:
1. Install redis + sentinel on 43/44/45, and pair the three machines as masters and slaves of one another, so that losing any single machine does not take down the whole cluster.
2. Install twemproxy + sentinel on 46/47/48. Sentinel monitors the master of each of the three master/slave groups; when a master fails, Sentinel performs the failover and uses client-reconfig-script to trigger client-reconfig.sh, which switches the affected group in the twemproxy configuration to the new master and restarts twemproxy so it keeps serving traffic.
Because each server hosts both a master and a slave, while the proxies are kept separate from the Redis nodes, we standardize the layout to simplify automated installation: masters listen on 6379, slaves on 6380, and the sentinels on the proxy nodes are installed separately.
Installation approach
1. Variable definitions
The servers are divided into three roles: master, slave, and proxy. We install redis + sentinel on the masters and slaves, and sentinel + twemproxy on the proxies. Because these services may be started by an application user rather than root, the installation paths, startup users, and related settings are all defined in variables up front.
2. Installation scripts
redis_install.sh installs and starts redis and sentinel on the masters and slaves, and installs and starts sentinel on the proxies.
twemproxy_install.sh installs and starts twemproxy on the proxies; it also sets sentinel's client-reconfig-script parameter to the failover script client-reconfig.sh.
Playbook directory structure
├── hosts
├── redis_cluster.yml
└── roles
└── redis_cluster_install
├── files
│ └── redis
│ ├── autoconf-2.69.tar.gz
│ ├── automake-1.15.tar.gz
│ ├── libtool-2.4.6.tar.gz
│ ├── redis-3.2.9.tar.gz
│ ├── redis.conf
│ ├── redis.conf.bak
│ ├── sentinel.conf
│ ├── sentinel.conf.bak
│ └── twemproxy-master.zip
├── handlers
├── meta
├── tasks
│ └── main.yml
├── templates
│ ├── client-reconfig.sh
│ ├── redis_install.sh
│ └── twemproxy_install.sh
└── vars
└── main.yml
Notes:
1. In files/, autoconf-2.69.tar.gz, automake-1.15.tar.gz, libtool-2.4.6.tar.gz, and twemproxy-master.zip are the source archives needed to build twemproxy; redis.conf and sentinel.conf are template configuration files prepared in advance for variable substitution.
2. In templates/, redis_install.sh installs the Redis master/slave and sentinel on the master, slave, and proxy nodes; client-reconfig.sh is the failover script configured via sentinel's client-reconfig-script; twemproxy_install.sh is the twemproxy installation script.
Procedure
1. Create the inventory file hosts
vim hosts
[redis_master]
10.20.10.43 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.44 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.45 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave1]
10.20.10.43 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave2]
10.20.10.44 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave3]
10.20.10.45 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_proxy]
10.20.10.46 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.47 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.48 ansible_ssh_user=root ansible_ssh_pass=root1234
Note: the names slave1, slave2, and slave3 carry no special meaning and can be chosen freely; the actual mapping is defined in the playbook below.
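The proxy group is meant to hold all three proxy hosts under a single [redis_proxy] header. A quick local sanity check against a hypothetical copy of that section (/tmp/hosts-demo, an illustrative path) confirms the intended shape:

```shell
# stand-in for the proxy section of the inventory
cat > /tmp/hosts-demo <<'EOF'
[redis_proxy]
10.20.10.46
10.20.10.47
10.20.10.48
EOF
# exactly one group header, three member hosts
grep -c '^\[redis_proxy\]' /tmp/hosts-demo
grep -c '^10\.20\.10\.' /tmp/hosts-demo
```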
2. Create the playbook file
vim redis_cluster.yml
#role: master
- hosts: redis_master
  remote_user: root
  gather_facts: False
  roles:
  - {role: redis_cluster_install, redis_role: master}
#role: slave, belongs to cluster 3
- hosts: redis_slave1
  remote_user: root
  gather_facts: False
  roles:
  - {role: redis_cluster_install, redis_role: slave, cluster_no: 3}
#role: slave, belongs to cluster 1
- hosts: redis_slave2
  remote_user: root
  gather_facts: False
  roles:
  - {role: redis_cluster_install, redis_role: slave, cluster_no: 1}
#role: slave, belongs to cluster 2
- hosts: redis_slave3
  remote_user: root
  gather_facts: False
  roles:
  - {role: redis_cluster_install, redis_role: slave, cluster_no: 2}
#role: proxy
- hosts: redis_proxy
  remote_user: root
  gather_facts: False
  roles:
  - {role: redis_cluster_install, redis_role: proxy}
Where:
slave1 (10.20.10.43) belongs to data cluster redis3 according to the architecture table, so its cluster_no is 3;
slave2 (10.20.10.44) belongs to data cluster redis1, so its cluster_no is 1;
slave3 (10.20.10.45) belongs to data cluster redis2, so its cluster_no is 2.
3. Create the variables file
vim roles/redis_cluster_install/vars/main.yml
#startup user, install dir, and source dir on master/slave; app is the application user's home directory
#redis server dir
redis_user: app
install_dir: /home/ap/app
source_dir: /home/ap/app/src/
#startup user, install dir, and source dir on proxy; appwb is the application user's home directory
#proxy server dir
proxy_user: appwb
proxy_install_dir: /home/ap/appwb
proxy_source_dir: /home/ap/appwb/src/
#redis master/slave ports
redis_master_port: 6379
redis_slave_port: 6380
maxmemory: 2gb
#sentinel port and quorum
sen_port: 26379
sen_quorum: 3
#cluster list
cluster1:
- masterip: 10.20.10.43
  sen_mastername: redis1
cluster2:
- masterip: 10.20.10.44
  sen_mastername: redis2
cluster3:
- masterip: 10.20.10.45
  sen_mastername: redis3
#twemproxy port
tw_port: 22121
The cluster lists above give the name and master IP of each of the three groups, keyed by the cluster_no values used in the playbook; twemproxy_install.sh uses them to build its server list.
4. Create the task file
vim roles/redis_cluster_install/tasks/main.yml
#copy the installation sources to the master and slave nodes; since they share hosts, targeting master is enough
- name: copy redis dir to client
  copy: src=redis dest={{source_dir}} owner=root group=root
  when: redis_role == "master"
#copy the installation sources to the proxy nodes
- name: copy redis dir to client
  copy: src=redis dest={{proxy_source_dir}} owner=root group=root
  when: redis_role == "proxy"
#copy redis_install.sh to the master/slave nodes
- name: copy redis_install script to client
  template: src=redis_install.sh dest={{source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "master" or redis_role == "slave"
#copy redis_install.sh to the proxy nodes
- name: copy redis_install script to client
  template: src=redis_install.sh dest={{proxy_source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "proxy"
#run redis_install.sh on the master/slave nodes
- name: install redis and sentinel
  shell: bash {{source_dir}}/redis/redis_install.sh
  when: redis_role == "master" or redis_role == "slave"
#run redis_install.sh on the proxy nodes
- name: install redis and sentinel
  shell: bash {{proxy_source_dir}}/redis/redis_install.sh
  when: redis_role == "proxy"
#copy client-reconfig.sh to the proxy nodes
- name: copy client-reconfig script to client
  template: src=client-reconfig.sh dest={{proxy_install_dir}}/redis/ owner={{proxy_user}} group={{proxy_user}} mode=0775
  when: redis_role == "proxy"
#copy twemproxy_install.sh to the proxy nodes
- name: copy twemproxy_install script to client
  template: src=twemproxy_install.sh dest={{proxy_source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "proxy"
#run twemproxy_install.sh on the proxy nodes
- name: install twemproxy
  shell: bash {{proxy_source_dir}}/redis/twemproxy_install.sh
  when: redis_role == "proxy"
Note: why does every task specify the node role?
Because redis_install.sh branches on the master/slave/proxy role. Although the file name is always the same, the script content rendered and pushed from the Ansible controller differs per role; by targeting each role separately, a single template handles the installation and configuration of all three roles.
5. Create the template scripts
(1) vim templates/redis_install.sh
#!/bin/bash
#author: yanggd
#content: install redis and sentinel
#redis directories
source_dir={{source_dir}}
install_dir={{install_dir}}
redis_user={{redis_user}}
#proxy directories
proxy_source_dir={{proxy_source_dir}}
proxy_install_dir={{proxy_install_dir}}
proxy_user={{proxy_user}}
#redis max memory
maxmemory={{maxmemory}}
#redis master/slave ports
redis_master_port={{redis_master_port}}
redis_slave_port={{redis_slave_port}}
#sentinel port and quorum
sen_port={{sen_port}}
sen_quorum={{sen_quorum}}
#install redis on the master
{% if redis_role == "master" %}
yum install make gcc gcc-c++ -y
id $redis_user &> /dev/null
if [ $? -ne 0 ];then
useradd -d $install_dir $redis_user
fi
#install redis
cd $source_dir/redis
tar -zxf redis-3.2.9.tar.gz
cd redis-3.2.9
make MALLOC=libc
make PREFIX=$install_dir/redis install
#init redis dir
mkdir $install_dir/redis/{data,conf,logs}
{% endif %}
#install redis on the proxy
{% if redis_role == "proxy" %}
yum install make gcc gcc-c++ -y
id $proxy_user &> /dev/null
if [ $? -ne 0 ];then
useradd -d $proxy_install_dir $proxy_user
fi
#install redis
cd $proxy_source_dir/redis
tar -zxf redis-3.2.9.tar.gz
cd redis-3.2.9
make MALLOC=libc
make PREFIX=$proxy_install_dir/redis install
#init redis dir
mkdir $proxy_install_dir/redis/{data,conf,logs}
#change install_dir owner
chown -R $proxy_user.$proxy_user $proxy_install_dir
{% endif %}
#get the ip to listen on (assumes interface eth1 and net-tools ifconfig output)
ip=`ifconfig eth1|grep "inet addr"|awk '{print $2}'|awk -F: '{print $2}'`
#substitute variables into the template config
#modify redis conf
cd $install_dir/redis/conf
#build the master config file
{% if redis_role == "master" %}
cp $source_dir/redis/redis.conf redis_$redis_master_port.conf
sed -i "s:bind:bind $ip:g" redis_$redis_master_port.conf
sed -i "s:port:port $redis_master_port:g" redis_$redis_master_port.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/redis_$redis_master_port.pid':g" redis_$redis_master_port.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/redis_$redis_master_port.log':g" redis_$redis_master_port.conf
sed -i "s:dump:redis_$redis_master_port:g" redis_$redis_master_port.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" redis_$redis_master_port.conf
sed -i "s:memsize:$maxmemory:g" redis_$redis_master_port.conf
#change install_dir owner
chown -R $redis_user.$redis_user $install_dir
#modify kernel
cat > /etc/sysctl.conf << EOF
net.core.somaxconn = 10240
vm.overcommit_memory = 1
EOF
sysctl -p
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
#start redis master
su $redis_user -c "$install_dir/redis/bin/redis-server $install_dir/redis/conf/redis_$redis_master_port.conf"
{% endif %}
#build the slave config file
{% if redis_role == "slave" %}
cp $source_dir/redis/redis.conf redis_$redis_slave_port.conf
sed -i "s:bind:bind $ip:g" redis_$redis_slave_port.conf
sed -i "s:port:port $redis_slave_port:g" redis_$redis_slave_port.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/redis_$redis_slave_port.pid':g" redis_$redis_slave_port.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/redis_$redis_slave_port.log':g" redis_$redis_slave_port.conf
sed -i "s:dump:redis_$redis_slave_port:g" redis_$redis_slave_port.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" redis_$redis_slave_port.conf
sed -i "s:memsize:$maxmemory:g" redis_$redis_slave_port.conf
#pick the matching master according to the role's cluster_no
{% if cluster_no == 1 %}
{% for i in cluster1 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}
{% if cluster_no == 2 %}
{% for i in cluster2 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}
{% if cluster_no == 3 %}
{% for i in cluster3 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}
#change install_dir owner
chown -R $redis_user.$redis_user $install_dir
#start redis slave
su $redis_user -c "$install_dir/redis/bin/redis-server $install_dir/redis/conf/redis_$redis_slave_port.conf"
{% endif %}
#configure sentinel to monitor the three master/slave groups
{% if redis_role == "master" %}
cd $install_dir/redis/conf
cp $source_dir/redis/sentinel.conf sentinel.conf
{%for i in cluster1 %}
sed -i "s:sen_port:$sen_port:g" sentinel.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/sentinel.pid':g" sentinel.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" sentinel.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/sentinel.log':g" sentinel.conf
sed -i "s:master1_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master1_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster2 %}
sed -i "s:master2_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master2_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster3 %}
sed -i "s:master3_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master3_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
#change install_dir owner
chown -R $redis_user.$redis_user $install_dir
#start sentinel
su $redis_user -c "$install_dir/redis/bin/redis-sentinel $install_dir/redis/conf/sentinel.conf"
{% endif %}
#configure sentinel on the proxy
{% if redis_role == "proxy" %}
cd $proxy_install_dir/redis/conf
cp $proxy_source_dir/redis/sentinel.conf sentinel.conf
{%for i in cluster1 %}
sed -i "s:sen_port:$sen_port:g" sentinel.conf
sed -i "s:pidfile:pidfile '$proxy_install_dir/redis/sentinel.pid':g" sentinel.conf
sed -i "s:dir:dir '$proxy_install_dir/redis/data/':g" sentinel.conf
sed -i "s:logfile:logfile '$proxy_install_dir/redis/logs/sentinel.log':g" sentinel.conf
sed -i "s:master1_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master1_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster2 %}
sed -i "s:master2_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master2_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster3 %}
sed -i "s:master3_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master3_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
#change install_dir owner
chown -R $proxy_user.$proxy_user $proxy_install_dir
#start sentinel on the proxy
su $proxy_user -c "$proxy_install_dir/redis/bin/redis-sentinel $proxy_install_dir/redis/conf/sentinel.conf"
{% endif %}
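The script above fills in the config templates with bare-word placeholders (a line containing only `port`, the token `memsize`, and so on) and sed. A minimal local dry-run of that technique, assuming GNU sed (as on the yum-based targets here) and using a hypothetical stand-in file /tmp/redis_demo.conf:

```shell
# stand-in for redis.conf with bare-word placeholders
cat > /tmp/redis_demo.conf <<'EOF'
bind
port
maxmemory memsize
EOF
ip=10.20.10.43
# same substitution style as redis_install.sh
sed -i "s:bind:bind $ip:g"  /tmp/redis_demo.conf
sed -i "s:port:port 6379:g" /tmp/redis_demo.conf
sed -i "s:memsize:2gb:g"    /tmp/redis_demo.conf
cat /tmp/redis_demo.conf
# bind 10.20.10.43
# port 6379
# maxmemory 2gb
```

Note that these patterns match anywhere in the file (e.g. `port` would also hit a word like `support`), which is why the template keeps each placeholder unique and why the script must only run once per file.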
(2) vim templates/client-reconfig.sh
#!/bin/bash
#content: modify twemproxy when sentinel failover
#<master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
proxy_install_dir={{proxy_install_dir}}
monitor_name="$1"
master_old_ip="$4"
master_old_port="$5"
master_new_ip="$6"
master_new_port="$7"
tw_bin=$proxy_install_dir/twemproxy/sbin/nutcracker
tw_conf=$proxy_install_dir/twemproxy/conf/nutcracker.yml
tw_log=$proxy_install_dir/twemproxy/logs/twemproxy.log
tw_cmd="$tw_bin -c $tw_conf -o $tw_log -v 11 -d"
#modify twemproxy conf
sed -i "s/${master_old_ip}:${master_old_port}/${master_new_ip}:${master_new_port}/g" $tw_conf
#kill the running nutcracker (grep the binary name rather than "twemproxy", so this script, whose own path contains "twemproxy", does not kill itself)
ps -ef|grep nutcracker|grep -v grep|awk '{print $2}'|xargs kill
#start twemproxy
$tw_cmd
sleep 1
ps -ef|grep twemproxy|grep -v grep
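The heart of this script is the single sed that swaps the failed master's ip:port for the new one in nutcracker.yml; because the line format is `ip:port:weight name`, the name stays attached to the group. A local dry-run against a hypothetical sample file /tmp/nutcracker-demo.yml:

```shell
# stand-in for the servers section of nutcracker.yml
cat > /tmp/nutcracker-demo.yml <<'EOF'
  servers:
   - 10.20.10.43:6379:1 redis1
   - 10.20.10.44:6379:1 redis2
EOF
# simulate failover of redis1: 10.20.10.43:6379 -> 10.20.10.44:6380
sed -i "s/10.20.10.43:6379/10.20.10.44:6380/g" /tmp/nutcracker-demo.yml
grep redis1 /tmp/nutcracker-demo.yml
# redis1 now points at 10.20.10.44:6380; the redis2 line is untouched
```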
(3) vim templates/twemproxy_install.sh
#!/bin/bash
#author: yanggd
#content: install twemproxy and add sentinel client-reconfig.sh
#install twemproxy on the proxy
proxy_source_dir={{proxy_source_dir}}
proxy_install_dir={{proxy_install_dir}}
sen_port={{sen_port}}
redis_master_port={{redis_master_port}}
#redis
proxy_user={{proxy_user}}
#twemproxy
tw_port={{tw_port}}
#install twemproxy
cd $proxy_source_dir/redis
tar -zxf autoconf-2.69.tar.gz
cd autoconf-2.69
./configure
make && make install
cd ..
tar -zxf automake-1.15.tar.gz
cd automake-1.15
./configure
make && make install
cd ..
tar -zxf libtool-2.4.6.tar.gz
cd libtool-2.4.6
./configure
make && make install
cd ..
unzip twemproxy-master.zip
cd twemproxy-master
aclocal
autoreconf -f -i -Wall,no-obsolete
./configure --prefix=$proxy_install_dir/twemproxy
make && make install
#init twemproxy
mkdir -p $proxy_install_dir/twemproxy/{conf,logs}
ip=`ifconfig eth1|grep "inet addr"|awk '{print $2}'|awk -F: '{print $2}'`
#generate the twemproxy config file (> rather than >>, so a rerun does not append duplicates)
cat > $proxy_install_dir/twemproxy/conf/nutcracker.yml <<EOF
redis_proxy:
  listen: $ip:$tw_port
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - masterip1:redis_master_port:1 sen_mastername1
   - masterip2:redis_master_port:1 sen_mastername2
   - masterip3:redis_master_port:1 sen_mastername3
EOF
#fill in the master ip, port, and cluster name of each of the three proxied groups
{% for i in cluster1 %}
sed -i "s#masterip1#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#redis_master_port#$redis_master_port#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername1#{{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}
{% for i in cluster2 %}
sed -i "s#masterip2#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername2#{{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}
{% for i in cluster3 %}
sed -i "s#masterip3#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername3#{{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}
#change twemproxy owner
chown -R $proxy_user.$proxy_user $proxy_install_dir
#start twemproxy
su $proxy_user -c "$proxy_install_dir/twemproxy/sbin/nutcracker -c $proxy_install_dir/twemproxy/conf/nutcracker.yml -o $proxy_install_dir/twemproxy/logs/twemproxy.log -v 11 -d"
#mv client-reconfig.sh
mv $proxy_install_dir/redis/client-reconfig.sh $proxy_install_dir/twemproxy/client-reconfig.sh
#point sentinel's client-reconfig-script at the failover script
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis1 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis2 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis3 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
Alternatively, client-reconfig-script can be configured statically in sentinel.conf, e.g.:
sentinel client-reconfig-script redis1 $proxy_install_dir/twemproxy/client-reconfig.sh
sentinel client-reconfig-script redis2 $proxy_install_dir/twemproxy/client-reconfig.sh
sentinel client-reconfig-script redis3 $proxy_install_dir/twemproxy/client-reconfig.sh
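Keep in mind that sentinel.conf is parsed by Redis, not by a shell, so $proxy_install_dir above is only shorthand for the actual path; with proxy_install_dir set to /home/ap/appwb as in the vars file, the literal lines would be:

```
sentinel client-reconfig-script redis1 /home/ap/appwb/twemproxy/client-reconfig.sh
sentinel client-reconfig-script redis2 /home/ap/appwb/twemproxy/client-reconfig.sh
sentinel client-reconfig-script redis3 /home/ap/appwb/twemproxy/client-reconfig.sh
```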
6. Run the installation
#dry run in check mode (use --syntax-check for a pure syntax check)
ansible-playbook -i hosts -C redis_cluster.yml
#install
ansible-playbook -i hosts redis_cluster.yml
Once the installation finishes, the whole cluster comes up with it; the configuration can then be verified on each node, which we will not walk through here.
7. Troubleshooting
If ansible-playbook aborts with an error during installation, inspect the rendered script in the installation directory on the affected node to help pinpoint the problem, e.g.:
cd /home/ap/app/src/redis
vim redis_install.sh
This redis_install.sh is exactly the script that the current node ran.
Summary
This automated deployment builds three Redis master/slave groups fronted by twemproxy. Driving the installation through variables lets it adapt to different environments and cuts the error rate of manual deployment. The same approach extends to other combinations, such as one master + two slaves + haproxy + sentinel, and more generally to automated deployment in other environments.
Finally, the template configuration files for redis and sentinel:
(1)redis.conf
bind 127.0.0.1
protected-mode yes
port
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile
loglevel warning
logfile
databases 16
#save 900 1
#save 300 10
#save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxclients 4096
maxmemory memsize
maxmemory-policy allkeys-lru
(2)sentinel.conf
bind 0.0.0.0
daemonize yes
port sen_port
loglevel notice
pidfile
dir
logfile
sentinel monitor master1_name master1_host master_port quorum
sentinel down-after-milliseconds master1_name 6000
sentinel failover-timeout master1_name 18000
sentinel parallel-syncs master1_name 1
sentinel monitor master2_name master2_host master_port quorum
sentinel down-after-milliseconds master2_name 6000
sentinel failover-timeout master2_name 18000
sentinel parallel-syncs master2_name 1
sentinel monitor master3_name master3_host master_port quorum
sentinel down-after-milliseconds master3_name 6000
sentinel failover-timeout master3_name 18000
sentinel parallel-syncs master3_name 1
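For reference, after the sed substitutions in redis_install.sh (with the vars defined earlier: sen_mastername redis1, masterip 10.20.10.43, master port 6379, quorum 3), the first monitor block renders as:

```
sentinel monitor redis1 10.20.10.43 6379 3
sentinel down-after-milliseconds redis1 6000
sentinel failover-timeout redis1 18000
sentinel parallel-syncs redis1 1
```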