Building drbd+openfiler+corosync from Scratch (Part 2)


3. Setting up Corosync

3.1 Create the Corosync authkey for authentication between the two nodes

root@filer01~# corosync-keygen   (run this and wait for the key generation to finish)
( Press keys on the machine's real console keyboard; keystrokes in an SSH session do not feed the server's entropy pool. )
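If corosync-keygen appears to hang, it is simply waiting for entropy from /dev/random. A quick way to see how much entropy is currently available (a rough check only; there is no exact threshold you need to hit) is:

root@filer01~# cat /proc/sys/kernel/random/entropy_avail

Generating local disk or console activity in another session usually speeds the key generation up.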
Copy the authkey file to the other node and change the file permissions:
root@filer01~# scp /etc/corosync/authkey root@filer02:/etc/corosync/authkey
root@filer02~# chmod 400 /etc/corosync/authkey
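As an optional sanity check (not part of the original steps), you can confirm that the key is identical on both nodes and readable only by root:

root@filer01~# md5sum /etc/corosync/authkey
root@filer02~# md5sum /etc/corosync/authkey
root@filer02~# ls -l /etc/corosync/authkey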
 

3.2 Create the pcmk service file /etc/corosync/service.d/pcmk

root@filer01~# vi /etc/corosync/service.d/pcmk
service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver:  0
 }
 

3.2.1 Copy it to filer02

root@filer01~# scp /etc/corosync/service.d/pcmk root@filer02:/etc/corosync/service.d/pcmk
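With ver: 0, Corosync loads Pacemaker as a plugin and spawns the Pacemaker daemons itself; this is why they show up as child processes of corosync in section 4.1. To confirm the copy arrived intact, you can simply print the file on filer02 and compare it with the block above:

root@filer02~# cat /etc/corosync/service.d/pcmk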
 

3.3 Create the corosync.conf file and adjust bindnetaddr to match your LAN network

[root@filer01 ~]# cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.11.0   (the network address of the heartbeat subnet)
                mcastaddr: 226.94.8.8   (multicast address; pick one from this range)
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
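Note that bindnetaddr is the network address of the heartbeat subnet (the interface address with the host bits zeroed), not the address of the node itself. As a quick sketch (assuming the heartbeat interface is eth1, which may differ on your hosts), read the interface address and derive the network address from it:

root@filer01~# ip addr show eth1        (or: ifconfig eth1)

For a heartbeat address such as 192.168.11.10/24 (a hypothetical example), zeroing the host part gives 192.168.11.0, which matches the value used above.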
 

3.3.1 Copy it to filer02

root@filer01~# scp /etc/corosync/corosync.conf root@filer02:/etc/corosync/corosync.conf

4. Prepare the Corosync configuration

First, before rebooting the machines, remove the following services from automatic startup, because Corosync will be controlling them from now on:

root@filer01~# chkconfig --level 2345 openfiler off
root@filer01~# chkconfig --level 2345 nfslock off
root@filer01~# chkconfig --level 2345 corosync on
Run the same commands on the second node:
root@filer02~# chkconfig --level 2345 openfiler off
root@filer02~# chkconfig --level 2345 nfslock off
root@filer02~# chkconfig --level 2345 corosync on
Then reboot both machines and wait for them to come back up.
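After the reboot, an optional quick check that the runlevel changes took effect, before looking at the cluster itself:

root@filer01~# chkconfig --list | egrep 'openfiler|nfslock|corosync'

openfiler and nfslock should be off in runlevels 2-5 and corosync should be on.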
 

4.1 Check whether Corosync started properly

root@filer01~# ps auxf
root      3480  0.0  0.8 534456  4112 ?        Ssl  19:15   0:00 corosync
root      3486  0.0  0.5  68172  2776 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/stonith
106       3487  0.0  1.0  67684  4956 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/cib
root      3488  0.0  0.4  70828  2196 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/lrmd
106       3489  0.0  0.6  68536  3096 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/attrd
106       3490  0.0  0.6  69064  3420 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/pengine
106       3491  0.0  0.7  76764  3488 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/crmd
root@filer02~# crm_mon --one-shot -V
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
============
Last updated: Thu Mar 24 19:32:07 2011
Stack: openais
Current DC: filer01 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ filer01 filer02 ]
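Besides crm_mon, Corosync itself can report whether the totem ring on the heartbeat network is healthy; a standard check, run on either node, is:

root@filer01~# corosync-cfgtool -s

The status line for ring 0 should report it as active with no faults.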
 

4.2 Configure Corosync as follows

Before starting the configuration, open a monitor on filer02 so you can watch the cluster status while the resources are created:
root@filer02~# crm_mon
 

4.2.1 How to configure Corosync step by step

root@filer01~# crm configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults $id="rsc-options" resource-stickiness="100" 
crm(live)configure# primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="192.168.10.248" (the virtual IP address) cidr_netmask="24" op monitor interval="30s"
crm(live)configure# primitive MetaFS ocf:heartbeat:Filesystem  params device="/dev/drbd0" directory="/meta" fstype="ext3"
crm(live)configure# primitive lvmdata ocf:heartbeat:LVM  params volgrpname="data"
crm(live)configure# primitive drbd_meta ocf:linbit:drbd params drbd_resource="meta"  op monitor interval="15s"
crm(live)configure# primitive drbd_data ocf:linbit:drbd  params drbd_resource="data"  op monitor interval="15s"
crm(live)configure# primitive openfiler lsb:openfiler
crm(live)configure# primitive iscsi lsb:iscsi-target
crm(live)configure# primitive samba lsb:smb
crm(live)configure# primitive nfs lsb:nfs
crm(live)configure# primitive nfslock lsb:nfslock
crm(live)configure# group g_drbd drbd_meta drbd_data
crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfslock
crm(live)configure# ms ms_g_drbd g_drbd  meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
crm(live)configure# order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start
crm(live)configure# commit

Now watch in the crm_mon session as all of the resources start up.
root@filer01 ~# crm_mon
The commit may print warnings, but these are not errors and can safely be ignored.
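You can also let Pacemaker validate the live configuration at any point; crm_verify only reports problems, so a clean configuration produces no output:

root@filer01~# crm_verify -L -V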

4.2.2 Troubleshooting

If you get errors because you ran commit before the configuration was complete, you need to clean up the affected resource, as in this example:
root@filer01~# crm
crm(live)# resource cleanup MetaFS
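If a resource keeps failing even after a cleanup, the log file configured in corosync.conf is the place to look; for example (the grep pattern is only a starting point):

root@filer01~# grep -i error /var/log/cluster/corosync.log | tail -n 20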
 

4.2.3 Verify the config

Verify your configuration by running:
[root@filer01 ~]# crm configure show
node filer01
node filer02
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="192.168.10.248" cidr_netmask="24" \
        op monitor interval="30s"
primitive MetaFS ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/meta" fstype="ext3"
primitive drbd_data ocf:linbit:drbd \
        params drbd_resource="data" \
        op monitor interval="15s"
primitive drbd_meta ocf:linbit:drbd \
        params drbd_resource="meta" \
        op monitor interval="15s"
primitive iscsi lsb:iscsi-target
primitive lvmdata ocf:heartbeat:LVM \
        params volgrpname="data"
primitive nfs lsb:nfs
primitive nfslock lsb:nfslock
primitive openfiler lsb:openfiler
primitive samba lsb:smb
group g_drbd drbd_meta drbd_data
group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfslock
ms ms_g_drbd g_drbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

Then check with crm_mon whether all of the services started correctly:

============
Last updated: Mon Dec 17 10:40:54 2012
Stack: openais
Current DC: filer01 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
4 Nodes configured, 2 expected votes
2 Resources configured.
============

Online: [ filer01 filer02 ]

 Resource Group: g_services
     MetaFS     (ocf::heartbeat:Filesystem):    Started filer01
     lvmdata    (ocf::heartbeat:LVM):           Started filer01
     openfiler  (lsb:openfiler):                Started filer01
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started filer01
     iscsi      (lsb:iscsi-target):             Started filer01
     samba      (lsb:smb):                      Started filer01
     nfs        (lsb:nfs):                      Started filer01
     nfslock    (lsb:nfslock):                  Started filer01
 Master/Slave Set: ms_g_drbd
     Masters: [ filer01 ]
     Slaves: [ filer02 ]
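As a final, optional smoke test (not part of the original walkthrough), you can force a failover by putting the active node into standby and watching crm_mon on the other node. Because of the resource-stickiness="100" default set earlier, the resources will stay on filer02 after filer01 comes back online:

root@filer01~# crm node standby filer01
root@filer02~# crm_mon          (g_services and the DRBD master should move to filer02)
root@filer01~# crm node online filer01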