corosync+pacemaker+mysql+drbd: Implementing MySQL High Availability

====================================

I. Understanding the components and how they relate

II. Preparation before installing the HA cluster

III. Installing corosync + pacemaker

IV. Compiling and installing MySQL

V. Installing DRBD

VI. Combining MySQL with DRBD to mirror MySQL data

VII. Configuring MySQL high availability with crmsh

====================================

Environment:

OS: CentOS 6.x (Red Hat 6.x)

kernel: 2.6.32-358.el6.x86_64

yum repositories (note the yum option is spelled `enabled`, and the base repo must be enabled for the installs below):

[centos]
name=sohu-centos
baseurl=http://mirrors.sohu.com/centos/$releasever/os/$basearch
gpgcheck=1
enabled=1
gpgkey=http://mirrors.sohu.com/centos/RPM-GPG-KEY-CentOS-6
[epel]
name=sohu-epel
baseurl=http://mirrors.sohu.com/fedora-epel/$releasever/$basearch/
enabled=1
gpgcheck=0


Topology:

(topology diagram: 205713233.png)

Some of the software is uploaded as attachments to the original post; the MySQL source tarball is easy to find online.


Work through the following questions while building this corosync+pacemaker+mysql+drbd MySQL HA setup:

1. What are corosync and pacemaker, and how do they relate to each other?

2. How do MySQL and DRBD connect to each other?

3. How do corosync, pacemaker, MySQL, and DRBD fit together?

4. When people hear "HA cluster" they usually ask: how are relationships between resources established, and can nodes fight over resources and end up in split-brain?

Split-brain can be handled automatically by a fence device.


I. Understanding the components and how they relate

corosync: Corosync originated as a sub-project of the OpenAIS project. It transports the HA heartbeat (cluster membership) messages and is one of many pieces of software used to build HA clusters; Heartbeat and Corosync are the two popular Messaging Layer (cluster infrastructure layer) tools. Corosync is the newer of the two. Compared with Heartbeat, which is old and mature, each has its own strengths, and this post will not compare them in detail; suffice it to say that Corosync is currently the more popular choice.

pacemaker: one of many cluster resource managers (CRMs). Its main job is to act on the messages delivered by the messaging layer. Pacemaker is the core of the cluster: it manages the cluster and its state, and state updates are propagated to every node via Corosync.

Common CRMs:

heartbeat v1 --> haresources

heartbeat v2 --> crm

heartbeat v3 --> pacemaker

RHCS (cman) --> rgmanager

The relationship between corosync and pacemaker is shown below:

(diagram: 161511846.png)


mysql: an open-source relational database.

drbd: DRBD (Distributed Replicated Block Device). How it works: when a write request is issued against the designated disk device on host A, the data goes to host A's kernel, where a DRBD kernel module sends a copy of the same data to host B's kernel; host B then writes it to its own designated disk device, keeping the two hosts' data synchronized and making writes highly available. Like RAID 1, it mirrors the data. DRBD normally runs with one primary and one secondary node; all reads, writes, and mounts can only happen on the primary, but the primary and secondary roles can be swapped.

How the components fit together: by themselves, MySQL and DRBD have nothing to do with each other, but combined they become very useful. DRBD mirrors the data, so when the DRBD primary node dies, the secondary still holds a usable copy; however, the primary role will not move to the secondary on its own. This is where the HA cluster comes in: with MySQL and DRBD defined as highly available cluster resources, the cluster can automatically fail over to the secondary node when the primary fails and keep the service running.

II. Preparation before installing the HA cluster

1) The hosts file

     #Change the hostname to jie2.com
[root@jie2 ~]# sed -i s/`grep HOSTNAME /etc/sysconfig/network |awk -F '=' '{print $2}'`/jie2.com/g /etc/sysconfig/network
     #Change the hostname to jie3.com
[root@jie3 ~]# sed -i s/`grep HOSTNAME /etc/sysconfig/network |awk -F '=' '{print $2}'`/jie3.com/g /etc/sysconfig/network
[root@jie2 ~]# cat >>/etc/hosts << EOF
>172.16.22.2 jie2.com  jie2
>172.16.22.3 jie3.com  jie3
>EOF
[root@jie3 ~]# cat >>/etc/hosts << EOF
>172.16.22.2 jie2.com  jie2
>172.16.22.3 jie3.com  jie3
>EOF
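The sed pipeline above works but is hard to read. A simpler equivalent is sketched below as a hypothetical helper (the function name and arguments are mine, not from the post); on the nodes you would call it as `set_hostname /etc/sysconfig/network jie2.com`:

```shell
# Hypothetical helper: rewrite the HOSTNAME= line in a sysconfig-style file
# and apply the new name to the running system.
set_hostname() {
    local file=$1 name=$2
    sed -i "s/^HOSTNAME=.*/HOSTNAME=$name/" "$file"   # persists across reboots
    hostname "$name" 2>/dev/null || true              # applies immediately
}
```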

2) SSH mutual trust

[root@jie2 ~]# ssh-keygen -t rsa -P ''
[root@jie2 ~]# ssh-copy-id -i .ssh/id_rsa.pub jie3
[root@jie3 ~]# ssh-keygen -t rsa -P ''
[root@jie3 ~]# ssh-copy-id -i .ssh/id_rsa.pub jie2

3) Disable NetworkManager

[root@jie2 ~]# chkconfig --del NetworkManager
[root@jie2 ~]# chkconfig NetworkManager off
[root@jie2 ~]# service NetworkManager stop
[root@jie3 ~]# chkconfig --del NetworkManager
[root@jie3 ~]# chkconfig NetworkManager off
[root@jie3 ~]# service NetworkManager stop


4) Time synchronization (I use my own NTP server)

[root@jie2 ~]# ntpdate 172.16.0.1
[root@jie3 ~]# ntpdate 172.16.0.1
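A one-shot ntpdate only corrects the clock at that moment; clocks drift again afterwards. One common approach (a sketch, assuming ntpdate is installed at /usr/sbin/ntpdate) is a cron entry on both nodes:

```
*/10 * * * * /usr/sbin/ntpdate 172.16.0.1 >/dev/null 2>&1
```

Running a proper ntpd against the same server is the cleaner long-term option.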


III. Installing corosync + pacemaker

1) Install corosync and pacemaker

    #Operations on node jie2.com
[root@jie2 ~]# yum -y install corosync pacemaker
[root@jie2 ~]# yum -y --nogpgcheck install crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm   #packages that provide the crmsh command-line interface
    #Operations on node jie3.com
[root@jie3 ~]# yum -y install corosync pacemaker
[root@jie3 ~]# yum -y --nogpgcheck install crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm

2) Edit the configuration file and generate the auth key

The configuration file

      #Operations on node jie2.com
[root@jie2 ~]# cd /etc/corosync/
[root@jie2 corosync]# mv corosync.conf.example corosync.conf
[root@jie2 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {                           #messaging/heartbeat transport layer
    version: 2                    #protocol version
    secauth: on                   #enable message authentication, normally on
    threads: 0                    #number of worker threads
    interface {                   #interface that carries the heartbeat traffic
        ringnumber: 0
        bindnetaddr: 172.16.0.0   #address to bind to; give the network address, not a host address
        mcastaddr: 226.94.1.1     #multicast address
        mcastport: 5405           #multicast port
        ttl: 1                    #multicast TTL
    }
}
logging {                         #logging
    fileline: off
    to_stderr: no                 #whether to print log messages to stderr
    to_logfile: yes               #keep a dedicated log file
    to_syslog: no                 #whether syslog also records the log
    logfile: /var/log/cluster/corosync.log  #path of the log file
    debug: off
    timestamp: on                 #timestamp each log entry
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {
     ver: 0
     name: pacemaker   #run pacemaker as a corosync plugin
}
aisexec {
     user: root
     group: root
}
[root@jie2 corosync]# scp corosync.conf jie3:/etc/corosync/
##Copy the config file from node jie2.com over to jie3.com
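The bindnetaddr value must be a network address, not a host address. If you are unsure what that is for a given interface, it can be computed by AND-ing the IP with the netmask; the sketch below (the function name is illustrative) does this in pure bash:

```shell
# Derive the network address (suitable for bindnetaddr) from an interface IP
# and its netmask by AND-ing the four octets together.
network_address() {
    local -a ip_o mask_o out
    IFS=. read -r -a ip_o   <<< "$1"
    IFS=. read -r -a mask_o <<< "$2"
    local i
    for i in 0 1 2 3; do
        out[i]=$(( ip_o[i] & mask_o[i] ))
    done
    ( IFS=.; echo "${out[*]}" )   # join the octets back with dots
}

network_address 172.16.22.2 255.255.0.0   # -> 172.16.0.0
```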

The auth key

     #Operations on node jie2.com
[root@jie2 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy (bits = 152).
     #This message means the machine is low on random-number entropy; keep typing randomly on the keyboard for a while, or install some packages, to generate more entropy
[root@jie2 corosync]# scp authkey jie3:/etc/corosync/
     #Copy the auth key over to jie3.com as well
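Whether corosync-keygen will block can be predicted by checking how much entropy the kernel pool currently holds; a minimal check (the ~1000-bit threshold is a rough rule of thumb, not an official limit):

```shell
# corosync-keygen reads /dev/random, which blocks when the kernel entropy
# pool runs low; check the pool before starting.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: ${entropy} bits"
```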

3) Start the service and check the cluster's node status

         #Operations on node jie2.com
[root@jie2 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@jie2 ~]# crm status
Last updated: Thu Aug  8 14:43:13 2013
Last change: Sun Sep  1 16:41:18 2013 via crm_attribute on jie3.com
Stack: classic openais (with plugin)
Current DC: jie3.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
Online: [ jie2.com jie3.com ]
         #Operations on node jie3.com
[root@jie3 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@jie3 ~]# crm status
Last updated: Thu Aug  8 14:43:13 2013
Last change: Sun Sep  1 16:41:18 2013 via crm_attribute on jie3.com
Stack: classic openais (with plugin)
Current DC: jie3.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
Online: [ jie2.com jie3.com ]
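When scripting health checks it helps to parse `crm status` instead of eyeballing it. A small sketch, with the sample output inlined so it is self-contained (on a live node you would pipe `crm status` into the function):

```shell
# Count the nodes listed on the "Online:" line of `crm status` output.
count_online() {
    awk '/^Online:/ { gsub(/[][]/, ""); print NF - 1 }'
}

count_online <<'EOF'
Online: [ jie2.com jie3.com ]
EOF
# -> 2
```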


IV. Compiling and installing MySQL (identical on both nodes)


     #Operations on node jie2.com
#1) Unpack, configure, and compile
[root@jie2 ~]# tar xf mysql-5.5.33.tar.gz
[root@jie2 ~]# yum -y groupinstall "Development tools" "Server Platform Development"
[root@jie2 ~]# cd mysql-5.5.33
[root@jie2 mysql-5.5.33]# yum -y install cmake
[root@jie2 mysql-5.5.33]# cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_DATADIR=/mydata/data  -DSYSCONFDIR=/etc \
-DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_BLACKHOLE_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DWITH_SSL=system \
-DWITH_ZLIB=system -DWITH_LIBWRAP=0 -DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
-DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci
[root@jie2 mysql-5.5.33]# make && make install
#2) Set up the config file and the init script
[root@jie2 mysql-5.5.33]# cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf
[root@jie2 mysql-5.5.33]# cp /usr/local/mysql/support-files/mysql.server  /etc/rc.d/init.d/mysqld
[root@jie2 mysql-5.5.33]# cd /usr/local/mysql/
[root@jie2 mysql]# useradd -r -u 306 mysql
[root@jie2 mysql]# chown -R root:mysql ./*
#3) Make the binaries, libraries, and headers known to the system
[root@jie2 mysql]# echo 'export PATH=/usr/local/mysql/bin:$PATH' > /etc/profile.d/mysqld.sh
[root@jie2 mysql]# source /etc/profile.d/mysqld.sh
[root@jie2 mysql]# echo "/usr/local/mysql/lib" > /etc/ld.so.conf.d/mysqld.conf
[root@jie2 mysql]# ldconfig -v | grep mysql
[root@jie2 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql


Do not initialize the database yet. Install DRBD first and mount the DRBD device onto a directory; then initialize the database so that its data is stored on the DRBD-backed directory.

V. Installing DRBD

When installing DRBD from RPMs, you must use a drbd-kmdl package built for the exact running kernel version.

1) Create a partition to back the DRBD mirror (on RHEL 6.x, reboot after creating the new partition so the kernel re-reads the partition table)

       #Operations on node jie2.com
[root@jie2 ~]# fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +5G
Command (m for help): w
         #Operations on node jie3.com
[root@jie3 ~]# fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +5G
Command (m for help): w
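Instead of rebooting after the new partition is written, you can usually ask the RHEL 6 kernel to re-read the table. A hedged sketch, defined but not run here since it assumes the /dev/sda layout above (partx ships with util-linux-ng; partprobe comes from the parted package):

```shell
# Ask the kernel to pick up the new sda3 without a reboot; run on each node
# right after writing the partition table with fdisk.
rescan_partitions() {
    partx -a /dev/sda 2>/dev/null || partprobe /dev/sda
    grep -q 'sda3' /proc/partitions && echo "sda3 is now visible to the kernel"
}
```

If the kernel still refuses (for example, a partition on the disk is in use), fall back to the reboot the post recommends.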


2) Install DRBD and edit its configuration

#1) Install DRBD
     #Operations on node jie2.com
[root@jie2 ~]# rpm -ivh drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
warning: drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ################################# [100%]
[root@jie2 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm
warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ################################## [100%]
     #Operations on node jie3.com
[root@jie3 ~]# rpm -ivh drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
warning: drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ################################# [100%]
[root@jie3 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm
warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ################################## [100%]
#2) Edit the DRBD configuration files
      #Operations on node jie2.com
[root@jie2 ~]# cd /etc/drbd.d/
[root@jie2 drbd.d]# cat global_common.conf  #the global configuration file
global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}
common {
        protocol C;
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }
        disk {
                on-io-error detach;
                #fencing resource-only;
        }
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";
        }
        syncer {
                rate 1000M;
        }
}
[root@jie2 drbd.d]# cat mydata.res #the resource definition file
resource mydata {
  on jie2.com {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   172.16.22.2:7789;
    meta-disk internal;
  }
  on jie3.com {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   172.16.22.3:7789;
    meta-disk internal;
  }
}
    #Copy the config files to node jie3.com
[root@jie2 drbd.d]# scp global_common.conf mydata.res jie3:/etc/drbd.d/


3) Initialize the DRBD resource and start the service

     #Operations on node jie2.com
#Create the DRBD metadata for the resource
[root@jie2 ~]# drbdadm create-md mydata
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.  #the metadata was created successfully
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
#Start the service
[root@jie2 ~]# service drbd start
Starting DRBD resources: [
     create res: mydata
   prepare disk: mydata
    adjust disk: mydata
     adjust net: mydata
]
..........                [ok]
     #Operations on node jie3.com
#Create the DRBD metadata for the resource
[root@jie3 ~]# drbdadm create-md mydata
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.  #the metadata was created successfully
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
#Start the service
[root@jie3 ~]# service drbd start
Starting DRBD resources: [
     create res: mydata
   prepare disk: mydata
    adjust disk: mydata
     adjust net: mydata
]
..........                [ok]

4) Promote one node to primary and synchronize the DRBD data (run on one node only)

   #Make jie2.com the DRBD primary
[root@jie2 ~]# drbdadm primary --force mydata
[root@jie2 ~]# cat /proc/drbd     #check the sync progress
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:1897624 nr:0 dw:0 dr:1901216 al:0 bm:115 lo:0 pe:3 ua:3 ap:0 ep:1 wo:f oos:207988
    [=================>..] sync'ed: 90.3% (207988/2103412)K
    finish: 0:00:07 speed: 26,792 (27,076) K/sec
[root@jie2 ~]# watch -n1 'cat /proc/drbd'  #watch the progress continuously
[root@jie2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:120 nr:354 dw:435 dr:5805 al:6 bm:9 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0    #when both sides show UpToDate, they are fully synchronized
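The UpToDate/UpToDate check can also be scripted. A sketch that greps the ds: (disk state) field, with a sample /proc/drbd line inlined so it runs anywhere (on a node you would read the real file):

```shell
# Report whether both DRBD disks are in sync, based on the ds: field.
drbd_in_sync() {
    grep -q 'ds:UpToDate/UpToDate'
}

if drbd_in_sync <<'EOF'
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
EOF
then
    echo "in sync"
else
    echo "still syncing"
fi
```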

5) Format the DRBD device (on the primary node)

[root@jie2 ~]# mke2fs -t ext4 /dev/drbd0

VI. Combining MySQL with DRBD to mirror MySQL data

1) On the DRBD primary, mount the DRBD device, then initialize the database

[root@jie2 ~]# mkdir /mydata  #directory the DRBD device is mounted on
[root@jie2 ~]# mount /dev/drbd0 /mydata/
[root@jie2 ~]# mkdir /mydata/data
[root@jie2 ~]# chown -R mysql.mysql /mydata  #give the mysql user and group ownership of the files
[root@jie2 ~]# vim /etc/my.cnf   #edit the MySQL configuration
        datadir = /mydata/data
        innodb_file_per_table = 1
[root@jie2 ~]# /usr/local/mysql/scripts/mysql_install_db --user=mysql --datadir=/mydata/data/ --basedir=/usr/local/mysql   #initialize the database
[root@jie2 ~]# service mysqld start
Starting MySQL .......                                [  OK  ]


2) Verify that DRBD mirrors the data

      #Operations on node jie2.com
#1) Create a database on the DRBD primary
[root@jie2 ~]# mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
mysql> create database jie2;
Query OK, 1 row affected (0.01 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| jie2               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
mysql> \q
#2) Stop the mysqld service and unmount the DRBD directory
[root@jie2 ~]# service mysqld stop
[root@jie2 ~]# umount /dev/drbd0 #unmount the DRBD mount point
[root@jie2 ~]# drbdadm secondary mydata  #demote this node to DRBD secondary
     #Operations on node jie3.com
#3) Make jie3.com the DRBD primary
[root@jie3 ~]# drbdadm primary mydata   #promote this node to DRBD primary
[root@jie3 ~]# mkdir /mydata
[root@jie3 ~]# chown -R mysql.mysql /mydata
[root@jie3 ~]# mount /dev/drbd0 /mydata
[root@jie3 ~]# vim /etc/my.cnf
      datadir = /mydata/data
      innodb_file_per_table = 1
[root@jie3 ~]# service mysqld start   #no need to initialize the database on this node; just start the service
Starting MySQL .......                                [  OK  ]
[root@jie3 ~]# mysql
mysql> show databases;    #the jie2 database created on jie2.com is visible
+--------------------+
| Database           |
+--------------------+
| information_schema |
| jie2               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
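The manual switch-over just performed is worth capturing as two small helpers before handing control to the cluster. A sketch (the function names are mine; the functions are only defined here, since actually running them requires the DRBD pair):

```shell
# Steps to demote the current primary; run on that node first.
stop_primary() {
    service mysqld stop        # stop MySQL so the filesystem can be released
    umount /dev/drbd0          # unmount the DRBD device
    drbdadm secondary mydata   # demote this node to secondary
}

# Steps to promote the peer; run on the other node second.
become_primary() {
    drbdadm primary mydata     # promote this node to primary
    mount /dev/drbd0 /mydata   # mount the mirrored data
    service mysqld start       # same datadir, so no re-initialization needed
}
```

This demote-then-promote ordering is exactly what Pacemaker automates in section VII.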


VII. Configuring MySQL high availability with crmsh

MySQL and DRBD will both be defined as cluster resources, and resources managed by the cluster must never be allowed to start on their own at boot.

1) Stop the drbd and mysqld services

[root@jie2 ~]# service mysqld stop
[root@jie2 ~]# service drbd stop
[root@jie3 ~]# service mysqld stop
[root@jie3 ~]# umount /dev/drbd0  #drbd was left mounted on jie3.com earlier
[root@jie3 ~]# service drbd stop

2) Define the cluster resources

Define the DRBD resource (its resource agent is provided by linbit under the OCF class)

[root@jie2 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydata op monitor role=Master interval=10 timeout=20  op monitor  role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify   #checks the syntax

Define the DRBD master/slave resource

crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify

Define the filesystem resource and its constraints

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op monitor interval=40 timeout=40 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# order ms_mysqldrbd_before_mystore mandatory: ms_mysqldrbd:promote mystore:start
crm(live)configure# verify


Define the VIP resource, the mysqld service resource, and their constraints

crm(live)configure# primitive myvip ocf:heartbeat:IPaddr params ip="172.16.22.100" op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# primitive myserver lsb:mysqld op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# colocation myserver_with_mystore inf: myserver mystore
crm(live)configure# order mystore_before_myserver mandatory: mystore:start myserver:start
crm(live)configure# verify
crm(live)configure# colocation myvip_with_myserver inf: myvip myserver
crm(live)configure# order myvip_before_myserver mandatory: myvip myserver
crm(live)configure# verify
crm(live)configure# commit
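The colocation and order constraints tying mystore, myvip, and myserver together could alternatively be collapsed into a resource group; a sketch of the equivalent configuration (not tested here, shown only as an alternative layout — a group starts its members in the listed order and keeps them on the same node, which is what the individual constraints express):

```
crm(live)configure# group myservice mystore myvip myserver
crm(live)configure# colocation myservice_with_ms_mysqldrbd inf: myservice ms_mysqldrbd:Master
crm(live)configure# order ms_mysqldrbd_before_myservice mandatory: ms_mysqldrbd:promote myservice:start
```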


Show all of the defined resources

crm(live)configure# show
node jie2.com \
        attributes standby="off"
node jie3.com \
        attributes standby="off"
primitive myserver lsb:mysqld \
        op monitor interval="20" timeout="20" on-fail="restart"
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource="mydata" \
        op monitor role="Master" interval="10" timeout="20" \
        op monitor role="Slave" interval="20" timeout="20" \
        op start timeout="240" interval="0" \
        op stop timeout="100" interval="0"
primitive mystore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext4" \
        op monitor interval="40" timeout="40" \
        op start timeout="60" interval="0" \
        op stop timeout="60" interval="0"
primitive myvip ocf:heartbeat:IPaddr \
        params ip="172.16.22.100" \
        op monitor interval="20" timeout="20" on-fail="restart" \
        meta target-role="Started"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation myserver_with_mystore inf: myserver mystore
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
colocation myvip_with_myserver inf: myvip myserver
order ms_mysqldrbd_before_mystore inf: ms_mysqldrbd:promote mystore:start
order mystore_before_myserver inf: mystore:start myserver:start
order myvip_before_myserver inf: myvip myserver
property $id="cib-bootstrap-options" \
        dc-version="1.1.8-7.el6-394e906" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

Check the resource status; everything is running on jie3.com

[root@jie2 ~]# crm status
Last updated: Thu Aug  8 17:55:30 2013
Last change: Sun Sep  1 16:41:18 2013 via crm_attribute on jie3.com
Stack: classic openais (with plugin)
Current DC: jie3.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ jie2.com jie3.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ jie3.com ]
     Slaves: [ jie2.com ]
 mystore    (ocf::heartbeat:Filesystem):    Started jie3.com
 myvip  (ocf::heartbeat:IPaddr):    Started jie3.com
 myserver   (lsb:mysqld):   Started jie3.com


Put a node in standby and check that the resources move

[root@jie3 ~]# crm node standby jie3.com   #put this node into standby
[root@jie3 ~]# crm status
Last updated: Mon Sep  2 01:45:07 2013
Last change: Mon Sep  2 01:44:59 2013 via crm_attribute on jie3.com
Stack: classic openais (with plugin)
Current DC: jie3.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Node jie3.com: standby
Online: [ jie2.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ jie2.com ]   #the resources have moved to jie2.com
     Stopped: [ mysqldrbd:1 ]
 mystore    (ocf::heartbeat:Filesystem):    Started jie2.com
 myvip  (ocf::heartbeat:IPaddr):    Started jie2.com
 myserver   (lsb:mysqld):   Started jie2.com

Because of the DRBD resource constraints, the node where the Master instance is running cannot be demoted to DRBD secondary by hand:

[root@jie3 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:426 nr:354 dw:741 dr:6528 al:8 bm:9 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@jie3 ~]# drbdadm secondary mydata
0: State change failed: (-12) Device is held open by someone
Command 'drbdsetup secondary 0' terminated with exit code 11


Manually taking down the myvip resource does not keep it down; the cluster restarts it (because the resource was defined with on-fail=restart):

[root@jie2 ~]# ifconfig | grep eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:1F:74:CF
          inet addr:172.16.22.2  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe1f:74cf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2165062 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4109895 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:167895762 (160.1 MiB)  TX bytes:5731508707 (5.3 GiB)
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:1F:74:CF
          inet addr:172.16.22.100  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
[root@jie2 ~]# ifconfig  eth0:0 down
[root@jie2 ~]# ifconfig | grep eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:1F:74:CF
          inet addr:172.16.22.2  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe1f:74cf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2165242 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4110094 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:167917669 (160.1 MiB)  TX bytes:5731537035 (5.3 GiB)
[root@jie2 ~]# crm status
Last updated: Thu Aug  8 18:29:27 2013
Last change: Mon Sep  2 01:44:59 2013 via crm_attribute on jie3.com
Stack: classic openais (with plugin)
Current DC: jie3.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Node jie3.com: standby
Online: [ jie2.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ jie2.com ]
     Stopped: [ mysqldrbd:1 ]
 mystore    (ocf::heartbeat:Filesystem):    Started jie2.com
 myvip  (ocf::heartbeat:IPaddr):    Started jie2.com
 myserver   (lsb:mysqld):   Started jie2.com
Failed actions:
    myvip_monitor_20000 (node=jie2.com, call=47, rc=7, status=complete): not running
    myserver_monitor_20000 (node=jie3.com, call=209, rc=7, status=complete): not running
[root@jie2 ~]# ifconfig | grep eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:1F:74:CF
          inet addr:172.16.22.2  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe1f:74cf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2165681 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4110535 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:168015864 (160.2 MiB)  TX bytes:5731617112 (5.3 GiB)
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:1F:74:CF
          inet addr:172.16.22.100  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
[root@jie2 ~]#


With that, the MySQL high-availability setup is complete. This post does not explain the individual crm command parameters.
