corosync+openais+pacemaker+drbd+web

corosync and openais can each provide clustering on their own, but only in a fairly basic form; a full-featured, complex cluster combines the two. Together they supply the heartbeat/membership layer, but they have no resource management capability of their own.

pacemaker supplies that resource management layer; it is a project that was split out of heartbeat v3.

Requirements for a high-availability cluster:

consistent hardware

consistent software (operating system)

synchronized time

hostnames that every node can resolve

Case 1: corosync + openais + pacemaker + web

1. Configure the two nodes according to the topology diagram

Node 1:

IP: 192.168.2.10/24

Set the hostname:

# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1.a.com

# hostname node1.a.com

Make the two nodes able to resolve each other:

# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.2.10 node1.a.com node1

192.168.2.20 node2.a.com node2

Node 2:

IP: 192.168.2.20/24

Set the hostname:

# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node2.a.com

# hostname node2.a.com

Make the two nodes able to resolve each other:

# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.2.10 node1.a.com node1

192.168.2.20 node2.a.com node2

2. On node 1 (node1), configure yum, create a mount point, and mount the installation CD

# vim /etc/yum.repos.d/rhel-debuginfo.repo

[rhel-server]

name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-cluster]

name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

Mount the CD:

# mkdir /mnt/cdrom

# mount /dev/cdrom /mnt/cdrom/

3. On node 2, create the mount point and mount the CD

# mkdir /mnt/cdrom

# mount /dev/cdrom /mnt/cdrom/

4. Bring the clocks of the two nodes into sync by running the following command on both of them

# hwclock -s
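
If the nodes can reach a time server, syncing against NTP is another way to satisfy the time-consistency requirement. A minimal sketch (the NTP server name below is only a placeholder, substitute your own):

# ntpdate pool.ntp.org   sync the system clock once against an NTP server (example server name)

# hwclock -w   write the corrected system time back to the hardware clock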

5. Exchange public keys so the two nodes can communicate without password prompts

Generate node1's key pair:

# ssh-keygen -t rsa   generate an RSA key pair

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):   location where the key is saved

Created directory '/root/.ssh'.

Enter passphrase (empty for no passphrase):   passphrase protecting the private key

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.   private key location

Your public key has been saved in /root/.ssh/id_rsa.pub.   public key location

The key fingerprint is:

be:35:46:8f:72:a8:88:1e:62:44:c0:a1:c2:0d:07:da [email protected]

Generate node2's key pair:

# ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Created directory '/root/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

5e:4a:1e:db:69:21:4c:79:fa:59:08:83:61:6d:2e:4c [email protected]

6. Change into /root/.ssh; the public and private key files are there

# ll ~/.ssh/

-rw------- 1 root root 1675 10-20 10:37 id_rsa

-rw-r--r-- 1 root root 398 10-20 10:37 id_rsa.pub

7. Copy each node's public key to the other node; this step asks for the other node's login password

# ssh-copy-id -i id_rsa.pub node2.a.com

# ssh-copy-id -i /root/.ssh/id_rsa.pub node1.a.com

8. Copy node1's yum repo file to node2; it goes through without asking for a password

# scp /etc/yum.repos.d/rhel-debuginfo.repo node2.a.com:/etc/yum.repos.d/

rhel-debuginfo.repo 100% 317 0.3KB/s 00:00

9. From node 1 you can now query node 2's IP configuration directly

# ssh node2.a.com 'ifconfig'

10. Upload the required packages to node 1 and node 2 and install them on both

cluster-glue-1.0.6-1.6.el5.i386.rpm

cluster-glue-libs-1.0.6-1.6.el5.i386.rpm

corosync-1.2.7-1.1.el5.i386.rpm

corosynclib-1.2.7-1.1.el5.i386.rpm

heartbeat-3.0.3-2.3.el5.i386.rpm

heartbeat-libs-3.0.3-2.3.el5.i386.rpm

libesmtp-1.0.4-5.el5.i386.rpm

openais-1.1.3-1.6.el5.i386.rpm

openaislib-1.1.3-1.6.el5.i386.rpm

pacemaker-1.1.5-1.1.el5.i386.rpm

pacemaker-cts-1.1.5-1.1.el5.i386.rpm

pacemaker-libs-1.1.5-1.1.el5.i386.rpm

perl-TimeDate-1.16-5.el5.noarch.rpm

resource-agents-1.0.4-1.1.el5.i386.rpm

# yum localinstall *.rpm -y --nogpgcheck   install the packages
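
A quick way to confirm the main components landed on each node is a plain rpm query (just a verification sketch):

# rpm -q corosync pacemaker openais heartbeat   each package should report its installed version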

11. On node 1, change into the corosync configuration directory and turn the sample file into the working configuration

# cd /etc/corosync/

# ll

-rw-r--r-- 1 root root 5384 2010-07-28 amf.conf.example   openais configuration sample

-rw-r--r-- 1 root root 436 2010-07-28 corosync.conf.example   corosync configuration sample

drwxr-xr-x 2 root root 4096 2010-07-28 service.d

drwxr-xr-x 2 root root 4096 2010-07-28 uidgid.d

# cp corosync.conf.example corosync.conf   create the main configuration file

12. Edit corosync.conf

# vim corosync.conf

compatibility: whitetank   backward compatibility

totem {   heartbeat settings

version: 2   version number

secauth: off   whether heartbeat traffic is authenticated

threads: 0   number of threads used for heartbeat processing; 0 means no limit

interface {

ringnumber: 0

bindnetaddr: 192.168.2.10   IP address on the heartbeat network to bind to

mcastaddr: 226.94.1.1   multicast address

mcastport: 5405   multicast port

}

}

logging {   logging options

fileline: off

to_stderr: no   whether to send log output to standard error (the screen)

to_logfile: yes   write logs to a log file

to_syslog: yes   also send logs to syslog

logfile: /var/log/cluster/corosync.log   log file path; the directory must be created manually

debug: off

timestamp: on   add timestamps to log entries

logger_subsys {

subsys: AMF

debug: off

}

}

amf {   openais AMF option

mode: disabled

}

In addition to the above, the following stanzas must be added to the file:

service {

ver: 0

name: pacemaker   use pacemaker as the cluster resource manager

}

aisexec {   openais runtime options

user: root

group: root

}

13. Make the same change on node 2: the only difference is that bindnetaddr: 192.168.2.10 in the totem section becomes 192.168.2.20; everything else matches node 1.

You can copy node1's /etc/corosync/corosync.conf straight to node2:

# scp /etc/corosync/corosync.conf node2.a.com:/etc/corosync/

Then edit node2's copy of the file, as sketched below.
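
One way to make that single change without opening an editor is a remote sed substitution; a sketch, assuming bindnetaddr is the only line in the copied file that contains 192.168.2.10:

# ssh node2.a.com "sed -i 's/bindnetaddr: 192.168.2.10/bindnetaddr: 192.168.2.20/' /etc/corosync/corosync.conf"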

14. On both nodes, create the directory /var/log/cluster to hold the corosync logs

# mkdir /var/log/cluster

15. On one of the nodes, change into /etc/corosync/ and generate the authentication key file authkey

# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 936).

Press keys on your keyboard to generate entropy (bits = 1000).

Writing corosync key to /etc/corosync/authkey.

16. Copy the key file to the other node so both nodes use the same authkey

# scp -p /etc/corosync/authkey node2.a.com:/etc/corosync/

17. Start the corosync service on node 1

# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

From node 1, start the corosync service on node 2:

# ssh node2.a.com 'service corosync start'

Starting Corosync Cluster Engine (corosync): [ OK ]
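
Before digging through the logs, the totem ring itself can be sanity-checked with corosync-cfgtool (a quick sketch; run it on each node and look for a ring status of "no faults"):

# corosync-cfgtool -s   print the status of the totem ring on this node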

18. Run a few checks to catch errors

Run the following commands on both nodes:

Check that the engine started correctly:

# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Oct 20 14:01:58 localhost corosync[2069]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

Oct 20 14:01:58 localhost corosync[2069]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check that the heartbeat (totem) came up:

# grep -i totem /var/log/messages

Oct 20 14:01:58 localhost corosync[2069]: [TOTEM ] The network interface [192.168.2.10] is now up.

Check for other errors:

# grep -i error: /var/log/messages   node 1 logs several STONITH-related errors; node 2 shows none

Oct 20 14:03:02 localhost pengine: [2079]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined

Oct 20 14:03:02 localhost pengine: [2079]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option

Oct 20 14:03:02 localhost pengine: [2079]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Oct 20 14:04:37 localhost pengine: [2079]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined

Oct 20 14:04:37 localhost pengine: [2079]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option

Oct 20 14:04:37 localhost pengine: [2079]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Check that pacemaker started:

# grep -i pcmk_startup /var/log/messages

Oct 20 14:01:59 localhost corosync[2069]: [pcmk ] info: pcmk_startup: CRM: Initialized

Oct 20 14:01:59 localhost corosync[2069]: [pcmk ] Logging: Initialized pcmk_startup

19. Check the cluster status

# crm status

============

Last updated: Sat Oct 20 14:24:26 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

0 Resources configured.

============

Online: [ node1.a.com node2.a.com ]   both nodes are shown as online

20. Disable STONITH on node 1

# crm

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# commit

crm(live)configure# show

node node1.a.com

node node2.a.com

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false"

21. Define resources on node 1

There are four resource types:

primitive   a basic resource that runs on only one node at a time

group   a resource group; resources added to a group are kept together on one node (for example an IP address and a service)

clone   a resource that must be active on several nodes at the same time (e.g. OCFS or STONITH; no master/slave distinction)

master   a resource with master/slave roles, such as DRBD

RA classes:

crm(live)ra# classes

heartbeat

lsb

ocf / heartbeat pacemaker   the ocf class has two providers: heartbeat and pacemaker

stonith

Resources:

Each RA class provides a different set of resource agents; "list <class>" shows the agents a class supports.

The syntax is:

resource-type resource-name ra-class:[provider]:agent parameters

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.2.100

crm(live)configure# commit   commit the change
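
To see which agents a class provides and what parameters a particular agent accepts, the crm shell's ra level can be used. A sketch (exact subcommand names may vary slightly between crm shell versions):

crm(live)ra# list ocf heartbeat   agents provided by the ocf class, heartbeat provider

crm(live)ra# info ocf:heartbeat:IPaddr   show the parameters accepted by the IPaddr agent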

22. Check the cluster status on node 1

# crm

crm(live)# status

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node1.a.com node2.a.com ]

webip (ocf::heartbeat:IPaddr): Started node1.a.com   the webip resource is on node 1

Now check the IP addresses:

[root@node1 ~]# ifconfig

eth0:0 inet addr:192.168.2.100   the virtual IP is on node 1

On node 2:

crm(live)# status

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node1.a.com node2.a.com ]

webip (ocf::heartbeat:IPaddr): Started node1.a.com

23. Define the service. Install httpd on both nodes and make sure the service is stopped and will not start automatically at boot (see the commands after the install below).

# yum install httpd -y
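
Because the cluster will start and stop httpd itself, leave the service stopped and remove it from the boot sequence on both nodes, for example:

# service httpd stop   make sure httpd is not running

# chkconfig httpd off   do not start httpd automatically at boot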

Because httpd may run on only one node at a time, its resource type is primitive:

crm(live)configure# primitive webserver lsb:httpd

crm(live)configure# show

node node1.a.com

node node2.a.com

primitive webip ocf:heartbeat:IPaddr \

params ip="192.168.2.100"   the IP address resource

primitive webserver lsb:httpd   the httpd resource

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false"

Commit:

crm(live)configure# commit

24. Check the cluster status again: webip is on node 1 while httpd is on node 2

crm(live)# status

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

2 Resources configured.   two resources are defined

Online: [ node1.a.com node2.a.com ]

webip (ocf::heartbeat:IPaddr): Started node1.a.com   webip is on node 1

webserver (lsb:httpd): Started node2.a.com   httpd is on node 2

25. At this point node1 holds the virtual IP while node2 runs the httpd service. Create a group resource and add webip and webserver to it; resources in the same group are always placed on the same node.

group <group-name> <resource-1> <resource-2>

crm(live)configure# group web webip webserver

crm(live)configure# commit   commit

crm(live)configure# show

node node1.a.com

node node2.a.com

primitive webip ocf:heartbeat:IPaddr \

params ip="192.168.2.100"

primitive webserver lsb:httpd

group web webip webserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false"

26. Check the cluster status again; both resources are now on node 1

crm(live)# status

Last updated: Sat Oct 20 16:39:37 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node1.a.com node2.a.com ]

Resource Group: web

webip (ocf::heartbeat:IPaddr): Started node1.a.com

webserver (lsb:httpd): Started node1.a.com

27. The IP address and the httpd service are now both on node 1

[root@node1 ~]# service httpd status

httpd (pid 2800) is running...

[root@node1 ~]# ifconfig eth0:0

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:37:3F:E6

inet addr:192.168.2.100 Bcast:192.168.2.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:67 Base address:0x2024

28. Create a test page on each node

node1:

# echo "node1" > /var/www/html/index.html

Create node2's page directly from node1:

# ssh node2.a.com 'echo "node2" > /var/www/html/index.html'

29. Open http://192.168.2.100 in a browser to view the page
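
The same check can be done from the command line of any host on the 192.168.2.0/24 network (a sketch, assuming curl is installed):

# curl http://192.168.2.100   should print the page served by the active node, e.g. "node1"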

30. Node1's page is displayed. Now simulate a failure of node1:

[root@node1 ~]# service corosync stop

Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]

Waiting for corosync services to unload:........ [ OK ]

Visiting the IP address again, the page can no longer be reached.

31. Check the cluster status on node 2; it no longer shows webip and webserver running on any node

[root@node2 ~]# crm

crm(live)# status

Last updated: Sat Oct 20 16:55:16 2012

Stack: openais

Current DC: node2.a.com - partition WITHOUT quorum   node2 is now the DC, but its partition has no quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node2.a.com ]

OFFLINE: [ node1.a.com ]

32. The quorum policy can be relaxed at this point; here we choose ignore.

When fewer than half of the votes are present, the available policies are:

ignore   ignore the loss of quorum

freeze   freeze: resources that are already running keep running, but no new resources are started

stop   the default option

suicide   kill all resources

33. Start node1's corosync service again and change the quorum policy

# service corosync start

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# commit

34. Stop node1's corosync service again and check the status on node 2

# service corosync stop   stop the service on node1

Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]

Waiting for corosync services to unload:....... [ OK ]

Cluster status on node2:

[root@node2 ~]# crm status

Stack: openais

Current DC: node2.a.com - partition WITHOUT quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node2.a.com ]

OFFLINE: [ node1.a.com ]

Resource Group: web

webip (ocf::heartbeat:IPaddr): Started node2.a.com

webserver (lsb:httpd): Started node2.a.com

35. Visiting 192.168.2.100 now shows node 2's page

36. If node1's corosync service is now started again

[root@node1 ~]# service corosync start

node 1 does not take the resources back; they remain on node 2 until node 2 fails

[root@node1 ~]# crm status

Last updated: Sat Oct 20 17:17:24 2012

Stack: openais

Current DC: node2.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [ node1.a.com node2.a.com ]

Resource Group: web

webip (ocf::heartbeat:IPaddr): Started node2.a.com

webserver (lsb:httpd): Started node2.a.com

DRBD configuration

37. Partition the disks on both nodes; the partitions on the two nodes must be exactly the same size.

Perform the following on both nodes:

# fdisk /dev/sda

Command (m for help): p   show the current partition table

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1288 10241437+ 83 Linux

/dev/sda3 1289 1415 1020127+ 82 Linux swap / Solaris

Command (m for help): n   add a partition

Command action

e extended

p primary partition (1-4)

e   create an extended partition

Selected partition 4

First cylinder (1416-2610, default 1416):   starting cylinder

Using default value 1416

Last cylinder or +size or +sizeM or +sizeK (1416-2610, default 2610):   ending cylinder

Using default value 2610

Command (m for help): n   add another partition (created as a logical partition by default)

First cylinder (1416-2610, default 1416):   starting cylinder

Using default value 1416

Last cylinder or +size or +sizeM or +sizeK (1416-2610, default 2610): +1G   1 GB in size

Command (m for help): p   show the partition table again

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1288 10241437+ 83 Linux

/dev/sda3 1289 1415 1020127+ 82 Linux swap / Solaris

/dev/sda4 1416 2610 9598837+ 5 Extended

/dev/sda5 1416 1538 987966 83 Linux

Command (m for help): w   write the partition table and exit

The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

38. Make the kernel re-read the partition table (do the same on both nodes)

# partprobe /dev/sda

# cat /proc/partitions

major minor #blocks name

8 0 20971520 sda

8 1 104391 sda1

8 2 10241437 sda2

8 3 1020127 sda3

8 4 0 sda4

8 5 987966 sda5
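
Since DRBD needs the backing partitions to be the same size, it is worth comparing the new partition on both nodes before continuing; for example:

# grep sda5 /proc/partitions   block count of the new partition on node1

# ssh node2.a.com 'grep sda5 /proc/partitions'   should report the same block count on node2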

39. Upload the DRBD userland package and the matching kernel module package. The running kernel is 2.6.18, and the DRBD code was only merged into the mainline kernel in 2.6.33, so here DRBD is loaded as a module. Install both packages:

drbd83-8.3.8-1.el5.centos.i386.rpm   DRBD userland tools

kmod-drbd83-8.3.8-1.el5.centos.i686.rpm   kernel module

# yum localinstall drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm -y --nogpgcheck

40. Run the following commands on both nodes

# modprobe drbd   load the kernel module

# lsmod | grep drbd   confirm the module is loaded

41. On both nodes, edit the DRBD configuration file /etc/drbd.conf

#

# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf"; 包含全局通用配置文件

include "drbd.d/*.res"; 包含資源文件

# please have a a look at the example configuration file in

# /usr/share/doc/drbd83/drbd.conf

42. On both nodes, edit global_common.conf; make a backup before editing

# cd /etc/drbd.d/

# cp -p global_common.conf global_common.conf.bak

# vim global_common.conf

global {

usage-count no;   do not take part in DRBD usage counting (saves the overhead)

# minor-count dialog-refresh disable-ip-verification

}

common {

protocol C;   protocol C: a write completes only after it has reached the peer's disk

handlers {

# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";

# split-brain "/usr/lib/drbd/notify-split-brain.sh root";

# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";

# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;

}

startup {   startup wait timeouts

wfc-timeout 120;

degr-wfc-timeout 120;

}

disk {

on-io-error detach;   detach the disk on I/O errors

fencing resource-only;

}

net {

cram-hmac-alg "sha1";   authenticate the peers using sha1

shared-secret "abc";   pre-shared secret; must match on both nodes

}

syncer {

rate 100M;   resynchronization rate

}

}

43. On both nodes, create the resource file; the file name is arbitrary but must not contain spaces

# vim /etc/drbd.d/web.res

resource web {   resource name

on node1.a.com {   settings for node1.a.com

device /dev/drbd0;   logical device name under /dev/

disk /dev/sda5;   backing device: the disk or partition replicated between the nodes

address 192.168.2.10:7789;   node 1's address and port

meta-disk internal;   metadata is stored on the backing device itself

}

on node2.a.com {   settings for node2.a.com

device /dev/drbd0;

disk /dev/sda5;

address 192.168.2.20:7789;

meta-disk internal;

}

}

44. Initialize the web resource on both nodes

# drbdadm create-md web   create the metadata for resource web

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

45. Start the drbd service on both nodes

# service drbd start

Starting DRBD resources: [

web

Found valid meta data in the expected location, 1011671040 bytes into /dev/sda5.

d(web) n(web) ]...

46. Check which device is currently the active (primary) one

# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:987896

The role field is shown as local role/peer role; both nodes are currently Secondary, so neither has permission to access the disk.

Alternatively, use the drbd-overview command to check the current state:

# drbd-overview

0:web Connected Secondary/Secondary Inconsistent/Inconsistent C r----
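
The same information can also be queried per resource with drbdadm; a sketch using subcommands available in drbd 8.3:

# drbdadm role web   local/peer role, e.g. Secondary/Secondary

# drbdadm cstate web   connection state, e.g. Connected

# drbdadm dstate web   disk state, e.g. Inconsistent/Inconsistent before the first sync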

47. On node 1, run the following command to make this node the primary

# drbdadm -- --overwrite-data-of-peer primary web

# drbd-overview   the local device is now the primary and the initial sync is 3.4% complete

0:web SyncSource Primary/Secondary UpToDate/Inconsistent C r----

[>....................] sync'ed: 3.4% (960376/987896)K delay_probe: 87263

On node 2:

# drbd-overview

0:web SyncTarget Secondary/Primary Inconsistent/UpToDate C r----

[=>..................] sync'ed: 10.0% (630552/692984)K queue_delay: 0.0 ms

48. On node 1, create a filesystem on the primary device

# mkfs -t ext3 -L drbdweb /dev/drbd0

49. On node 1, create a mount point and mount /dev/drbd0 on it

# mkdir /mnt/web

# mount /dev/drbd0 /mnt/web

50. Make node1 the secondary and node2 the primary. On node1, run:

# drbdadm secondary web

0: State change failed: (-12) Device is held open by someone   the device is still in use

Command 'drbdsetup 0 secondary' terminated with exit code 11

Unmount the device first, then try again:

# umount /mnt/web/

# drbdadm secondary web

51. Check node1's state: both nodes are now Secondary

# drbd-overview

0:web Connected Secondary/Secondary UpToDate/UpToDate C r----

52. On node 2, promote this node to primary

# drbdadm primary web

# drbd-overview   the local device is now the primary

0:web Connected Primary/Secondary UpToDate/UpToDate C r----

53. On node 2, format /dev/drbd0

# mkfs -t ext3 -L drbdweb /dev/drbd0

54. On node 2, create the mount point and mount /dev/drbd0

# mkdir /mnt/web

# mount /dev/drbd0 /mnt/web   if node 2 were not the primary, the mount would fail

55. On node 1, set the default resource stickiness

crm(live)configure# rsc_defaults resource-stickiness=100

crm(live)configure# commit

56. On node 1, define the DRBD resource in the cluster

crm(live)configure# primitive webdrbd ocf:heartbeat:drbd params drbd_resource=web op monitor role=Master interval=50s timeout=30s op monitor role=Slave interval=60s timeout=30s

57. Create a master/slave resource and add webdrbd to it

crm(live)configure# master MS_Webdrbd webdrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

58. Create a cluster resource that automatically mounts the filesystem on the Primary node

crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mnt/web" fstype="ext3"

59. Constrain the filesystem to the node where the DRBD master runs, order it to start only after the promotion, then verify and commit:

crm(live)configure# colocation WebFS_on_MS_webdrbd inf: WebFS MS_Webdrbd:Master

crm(live)configure# order WebFS_after_MS_Webdrbd inf: MS_Webdrbd:promote WebFS:start

crm(live)configure# verify

crm(live)configure# commit
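
After the commit, the placement can be checked from either node; crm_mon ships with pacemaker and gives a one-shot status:

# crm_mon -1   the Master role of MS_Webdrbd and the WebFS mount should be on the same node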

60. Make node 1 the primary (drbdadm primary web), then mount /dev/drbd0 on /mnt/web. Change into /mnt/web and create a directory named html.

61. On node1, edit the httpd configuration

# vim /etc/httpd/conf/httpd.conf

DocumentRoot "/mnt/web/html"

# echo "<h1>Node1.a.org</h1>" > /mnt/debd/html/index.html

# crm configure primitive WebSite lsb:httpd //添加httpd爲資源

# crm configure colocation website-with-ip INFINITY: WebSite WebIP //是IP和web服務在同一主機上

# crm configure order httpd-after-ip mandatory: WebIP WebSite //定義資源啓動順序
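
Once everything is committed, failover can also be exercised without stopping corosync by putting the active node into standby. A sketch using standard crm shell commands (substitute whichever node currently holds the resources):

# crm node standby node1.a.com   evacuate all resources from node1

# crm status   the DRBD master, the filesystem and the web resources should move to node2

# crm node online node1.a.com   bring node1 back; with the stickiness set earlier, resources stay on node2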
