corosync+pacemaker: a highly available MySQL service with shared storage
Note: the VIP is 172.16.22.1
There are three servers in total:
MySQL1 node1.lihuan.com 172.16.22.10 node1
MySQL2 node2.lihuan.com 172.16.22.11 node2
NFS Server:172.16.22.2 nfs
Note: when a service is managed by a high-availability cluster, it must never be enabled at boot and must never be started by hand; the cluster alone controls starting and stopping it.
I. Create the logical volume and install MySQL
Create a logical volume to serve as shared storage for the MySQL data, mount it, and make the mount persist across reboots.
1. On nfs:
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 # change the IP address of nfs
Set IPADDR and NETMASK to:
IPADDR=172.16.22.2
NETMASK=255.255.0.0
# service network restart
# setenforce 0
- # fdisk /dev/sda
- p     # print the current partition table
- n     # create a new partition
- e     # extended
- n     # create a new logical partition inside it
- +20G  # 20 GB in size
- p     # print the table to confirm
- t     # change the partition type
- 5     # of partition 5
- 8e    # to Linux LVM
- w     # write the changes and exit
- # partprobe /dev/sda
- # pvcreate /dev/sda5
- # vgcreate myvg /dev/sda5
- # lvcreate -L 10G -n mydata myvg
- # mke2fs -j -L MYDATA /dev/myvg/mydata
- # mkdir /mydata
- # vim /etc/fstab # append the following line at the end:
- LABEL=MYDATA /mydata ext3 defaults 0 0
- # mount -a
- # mount
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 -s /sbin/nologin mysql
- # id mysql
- # chown -R mysql:mysql /mydata/
- # vim /etc/exports # add the following line:
- /mydata 172.16.0.0/16(rw,no_root_squash)
- # service nfs start
- # rpcinfo -p localhost
- # chkconfig nfs on
- # showmount -e 172.16.22.2
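For reference, the export options chosen above matter for this setup: rw grants read-write access, and no_root_squash stops the NFS server from mapping the clients' root user to an unprivileged user, which the nodes need when initializing and owning the MySQL data files. The line again, annotated:

```
# /etc/exports on nfs (172.16.22.2)
# rw             - clients in 172.16.0.0/16 may read and write
# no_root_squash - root on a client stays root on this export
/mydata 172.16.0.0/16(rw,no_root_squash)
```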
2. Prepare the mysql service
On node1:
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 mysql
- # mkdir /mydata
- # mount -t nfs 172.16.22.2:/mydata /mydata/
- # ls /mydata/
- # cd /mydata
- # touch 1.txt # quick check that the NFS mount is writable
- # tar xvf mysql-5.5.22-linux2.6-i686.tar.gz -C /usr/local # (the tarball is assumed to be in the current directory)
- # cd /usr/local/
- # ln -sv mysql-5.5.22-linux2.6-i686 mysql
- # cd mysql
- # chown -R mysql:mysql .
- # scripts/mysql_install_db --user=mysql --datadir=/mydata/data # initialize mysql
- # chown -R root .
Provide mysql's main configuration file:
# cd /usr/local/mysql
# cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf
Change thread_concurrency = 8 to:
thread_concurrency = 2
and add the following line:
datadir = /mydata/data
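After the edits, the relevant part of /etc/my.cnf looks roughly like this; apart from thread_concurrency and datadir, the values are the my-large.cnf defaults and may differ slightly between MySQL releases:

```ini
[mysqld]
port               = 3306
socket             = /tmp/mysql.sock
# data lives on the NFS-backed logical volume so both nodes see it
datadir            = /mydata/data
thread_concurrency = 2
```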
Provide a SysV service script for mysql:
# cd /usr/local/mysql
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
Note: with this script in place, commands such as service mysqld start become available.
- # chkconfig --add mysqld
- # service mysqld start
- # /usr/local/mysql/bin/mysql
- # service mysqld stop
- # chkconfig mysqld off
- # chkconfig --list mysqld
On node2 (mysql has already been initialized on node1, which mounted the NFS export /mydata as the database storage path):
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 mysql
- # mkdir /mydata
- # mount -t nfs 172.16.22.2:/mydata /mydata/
- # tar xvf mysql-5.5.22-linux2.6-i686.tar.gz -C /usr/local
- # cd /usr/local/
- # ln -sv mysql-5.5.22-linux2.6-i686 mysql # create the symlink
- # cd mysql
- # chown -R root:mysql .
- # cd /usr/local/mysql
Provide mysql's main configuration file:
# cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf
Change thread_concurrency = 8 to:
thread_concurrency = 2
and add the following line:
datadir = /mydata/data
Provide a SysV service script for mysql:
# cd /usr/local/mysql
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
Note: with this script in place, commands such as service mysqld start become available.
- # chkconfig --add mysqld
- # chkconfig mysqld off
- # chkconfig --list mysqld
- # service mysqld start
- # /usr/local/mysql/bin/mysql
mysql> create database ad; # then check whether node1 also sees this database
# service mysqld stop
II. Install the cluster software
1. Prerequisites:
(1). On node1:
Set the system time (make sure the clocks on node1 and node2 are both correct and in sync):
# date
# hwclock -s
# hostname node1.lihuan.com
# vim /etc/sysconfig/network # set the hostname to node1.lihuan.com
HOSTNAME=node1.lihuan.com
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 # set the address to 172.16.22.10, netmask 255.255.0.0
IPADDR=172.16.22.10
NETMASK=255.255.0.0
# service network restart
# vim /etc/hosts # add the following two entries:
172.16.22.10 node1.lihuan.com node1
172.16.22.11 node2.lihuan.com node2
(2). On node2:
Set the system time (make sure the clocks on node1 and node2 are both correct and in sync):
# date
# hwclock -s
# hostname node2.lihuan.com
# vim /etc/sysconfig/network # set the hostname to node2.lihuan.com
HOSTNAME=node2.lihuan.com
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 # set the address to 172.16.22.11, netmask 255.255.0.0
IPADDR=172.16.22.11
NETMASK=255.255.0.0
# service network restart
# vim /etc/hosts # add the following two entries:
172.16.22.10 node1.lihuan.com node1
172.16.22.11 node2.lihuan.com node2
(3). Set up mutual SSH trust between the two nodes:
On node1:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
On node2:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
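ssh-keygen normally prompts for a file name and passphrase; a non-interactive sketch of the same step (the empty passphrase and the /tmp path are illustrative choices, not from the original):

```shell
# -N '' sets an empty passphrase, -f picks the key file, -q keeps it quiet
rm -f /tmp/example_rsa /tmp/example_rsa.pub
ssh-keygen -t rsa -N '' -f /tmp/example_rsa -q
ls /tmp/example_rsa /tmp/example_rsa.pub
```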
2. Install the required software:
The packages needed are:
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
perl-TimeDate-1.16-5.el5.noarch.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
Place the packages in /root (download them on both node1 and node2).
(1). On node1:
- # yum --nogpgcheck localinstall *.rpm -y
- # cd /etc/corosync
- # cp corosync.conf.example corosync.conf
- # vim corosync.conf # add the following:
- service {
- ver: 0
- name: pacemaker
- use_mgmtd: yes
- }
- aisexec {
- user: root
- group: root
- }
Also set bindnetaddr in this file to the network address of the network your NIC sits on; our two nodes are on the 172.16.0.0 network, so set it as follows:
bindnetaddr: 172.16.0.0
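bindnetaddr takes the network address (host bits zeroed out), not a node's own IP. If in doubt, it can be derived from an IP and netmask with plain shell arithmetic, a sketch:

```shell
# AND each octet of the IP with the matching netmask octet
ip=172.16.22.10 mask=255.255.0.0
oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS
echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"   # prints 172.16.0.0
```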
Generate the authentication key used for inter-node communication:
# corosync-keygen
Copy corosync.conf and authkey to node2:
- # scp -p corosync.conf authkey node2:/etc/corosync/
- # cd
- # mkdir /var/log/cluster
- # ssh node2 'mkdir /var/log/cluster'
- # service corosync start
- # ssh node2 '/etc/init.d/corosync start'
- # crm_mon
Configure stonith, quorum, and stickiness:
- # crm
- crm(live)# configure
- crm(live)configure# property stonith-enabled=false
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# property no-quorum-policy=ignore
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# rsc_defaults resource-stickiness=100 # (any stickiness value above 0 means a resource prefers to stay on its current node)
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# show
- crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip='172.16.22.1'
- crm(live)configure# commit
- crm(live)configure# exit
- # ifconfig
(2). On node2:
# umount /mydata
# yum --nogpgcheck localinstall *.rpm -y
(3). On node1:
# umount /mydata
Configure the mynfs resource (the filesystem):
- # crm
- crm(live)# ra
- crm(live)ra# list ocf heartbeat
- crm(live)ra# meta ocf:heartbeat:Filesystem
- crm(live)ra# cd
- crm(live)# configure
- crm(live)configure# primitive mynfs ocf:heartbeat:Filesystem params device="172.16.22.2:/mydata" directory="/mydata" fstype="nfs" op start timeout=60s op stop timeout=60s
- crm(live)configure# commit
Configure the mysqld resource (mysqld must run on the same node as mynfs, and mynfs must start before mysqld):
- crm(live)configure# primitive mysqld lsb:mysqld
- crm(live)configure# show
Define the colocation constraint:
crm(live)configure# colocation mysqld_and_mynfs inf: mysqld mynfs
crm(live)configure# show
Note: inf means mysqld and mynfs must always run together.
Then define the ordering (order) constraints:
- crm(live)configure# order mysqld_after_mynfs mandatory: mynfs mysqld:start
- crm(live)configure# show
- crm(live)configure# order mysqld_after_myip mandatory: myip mysqld:start
- crm(live)configure# commit
- crm(live)configure# show
- node node1.lihuan.com \
- attributes standby="on"
- node node2.lihuan.com \
- attributes standby="off"
- primitive myip ocf:heartbeat:IPaddr \
- params ip="172.16.22.1"
- primitive mynfs ocf:heartbeat:Filesystem \
- params device="172.16.22.2:/mydata" directory="/mydata" fstype="nfs" \
- op start interval="0" timeout="60s" \
- op stop interval="0" timeout="60s"
- primitive mysqld lsb:mysqld
- colocation mysqld_and_mynfs inf: mysqld mynfs myip
- order mysqld_after_myip inf: myip mysqld:start
- order mysqld_after_mynfs inf: mynfs mysqld:start
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false" \
- no-quorum-policy="ignore"
- rsc_defaults $id="rsc-options" \
- resource-stickiness="100"
- crm(live)configure# exit
- # ls /mydata/data
- # /usr/local/mysql/bin/mysql
- mysql> SHOW DATABASES;
- mysql> GRANT ALL ON *.* TO root@'%' IDENTIFIED BY '123456';
- mysql> flush privileges;
- # crm status
- # ls /mydata
- # ls /mydata/data
- # crm node standby
Note: mandatory makes an ordering constraint compulsory.
(4). On node2:
# crm node online
# crm status
(5). Connect from Windows: mysql -uroot -h172.16.22.1 -p123456
mysql> create database a;
mysql> show databases;
MySQL can now be used normally.
Now simulate a failure of node2; on node2:
# crm node standby
# crm status
On node1:
# crm node online
Connect from Windows again: mysql -uroot -h172.16.22.1 -p123456
mysql> show databases;
Database a is still present in MySQL, so the data survived the switch and everything works; this failover behavior is exactly what the high-availability cluster provides.