RHEL6 Red Hat HA (RHCS): ricci + luci + fence

1. Overall architecture:

223056713.png

223058793.png

223100594.png

2. Lab environment:

luci management host: 192.168.122.1

ricci nodes: 192.168.122.34, 192.168.122.33, 192.168.122.82

yum repository:

[rhel-source]

name=Red Hat Enterprise Linux $releasever - $basearch - Source

baseurl=ftp://192.168.122.1/pub/rhel6.3

gpgcheck=0


[HighAvailability]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/HighAvailability

gpgcheck=0


[LoadBalancer]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/LoadBalancer

gpgcheck=0


[ResilientStorage]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/ResilientStorage

gpgcheck=0


[ScalableFileSystem]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/ScalableFileSystem

gpgcheck=0


The sections highlighted in red in the original (HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem) must be added to the yum repository configuration on every node.
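A quick way to confirm that the new repositories are picked up on each node (standard yum commands; the exact repo IDs depend on your file) is:

# yum clean all

# yum repolist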


Note:

NetworkManager is not supported on cluster nodes. If NetworkManager is installed on a cluster node, you should remove or disable it.
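On RHEL6 this can be done with the stock init tooling, for example:

# service NetworkManager stop

# chkconfig NetworkManager off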


3. Environment configuration:

Perform the following steps on all HA nodes:

Install ricci on every HA node; install luci on the client side (which needs a web browser).

# yum -y install ricci

[root@desk82 ~]# chkconfig ricci on

[root@desk82 ~]# /etc/init.d/ricci start

[root@desk82 ~]# passwd ricci    # this password must be set, otherwise authentication will fail
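The commands above only install ricci; the installation of luci itself on the management side is not shown. Assuming the same HighAvailability repository, it would typically be:

# yum -y install luci

# chkconfig luci on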

Start luci:

[root@wangzi rhel6cluster]# /etc/init.d/luci start

Point your web browser to https://localhost.localdomain:8084 (or equivalent) to access luci

Access it in a web browser:

https://localhost.localdomain:8084

and log in as root.

223205799.png



Create the cluster:

223207426.png

At this point the luci management server is automatically installing the required packages on the ricci HA nodes:

223209403.png

On the HA nodes you can see yum processes running:

223211527.png

After completion:


223534995.png

4. Fence device configuration:

A virtual machine fence device is used. The mapping between virtual machines and hostnames:

hostname    ip                KVM domain name
desk34      192.168.122.34    ha1
desk33      192.168.122.33    ha2
desk82      192.168.122.82    desk82
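The KVM domain names in the table can be confirmed on the physical host with libvirt's CLI, for example:

# virsh list --all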


On the luci host:

[root@wangzi docs]# yum -y install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast

[root@wangzi docs]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on the default network
interface. In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).
Set to 'none' for no interface.

Interface [none]: virbr0    # depends on the host's network setup; it can also be br0

If the VMs and the physical host communicate via NAT, choose virbr0; if they are bridged, choose br0.


The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.


=== Begin Configuration ===
fence_virtd {
        listener = "multicast";
        backend = "libvirt";
        module_path = "/usr/lib64/fence-virt";
}

listeners {
        multicast {
                key_file = "/etc/cluster/fence_xvm.key";
                address = "225.0.0.12";
                family = "ipv4";
                port = "1229";
                interface = "virbr0";
        }
}

backends {
        libvirt {
                uri = "qemu:///system";
        }
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

The fence_virtd configuration file on the luci host is /etc/fence_virt.conf.


[root@wangzi docs]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1


[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk33:/etc/cluster/

[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk34:/etc/cluster/

[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk82:/etc/cluster/


[root@wangzi cluster]# /etc/init.d/fence_virtd start

[root@wangzi cluster]# netstat -anplu | grep fence

udp        0      0 0.0.0.0:1229        0.0.0.0:*        10601/fence_virtd

[root@wangzi cluster]# fence_xvm -H vm4 -o reboot    # check whether fencing can control the virtual machine; if it works, the corresponding VM will reboot
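A less disruptive check than rebooting a guest is to ask fence_virtd for the list of domains it can see over multicast (run from any machine that has the shared key and fence-virt installed):

# fence_xvm -o list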

Add the fence device:

223619500.png

Every operation performed in the web interface is written to /etc/cluster/cluster.conf on each node:

[root@desk82 cluster]# cat cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="wangzi_1">
        <clusternodes>
                <clusternode name="192.168.122.34" nodeid="1"/>
                <clusternode name="192.168.122.33" nodeid="2"/>
                <clusternode name="192.168.122.82" nodeid="3"/>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_xvm" name="vmfence"/>
        </fencedevices>
</cluster>

On each node:

223621332.png

223623768.png
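The screenshots above show the fence method being added to each node. The result in cluster.conf looks roughly like the sketch below; the method name is made up for illustration, and the domain value follows the hostname/KVM-domain table above:

<clusternode name="192.168.122.34" nodeid="1">
        <fence>
                <method name="fence-34">
                        <device domain="ha1" name="vmfence"/>
                </method>
        </fence>
</clusternode>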

Add a failover domain:

223625658.png

"Priority" is the node priority; the lower the number, the higher the priority.

"No Failback" means the service does not fail back (by default, it does fail back).


Add resources:

223627780.png

223630637.png

The IP address is a virtual floating IP used for external access; it will appear on whichever back-end HA node is currently providing the service.

The smaller the number in the last field, the faster the floating IP switches over.

The httpd service must be installed in advance on the HA nodes yourself, but must not be started.
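A minimal preparation on each HA node (a sketch, using the standard httpd package and init script) is:

# yum -y install httpd

# chkconfig httpd off    # rgmanager, not init, will start and stop httpd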


Add a service group:

223632590.png

Under the service group "apsche", add the resources created above: the IP address and httpd.

223958527.png

You can see that the cluster has automatically started httpd on 192.168.122.34.

[root@desk34 ~]# /etc/init.d/httpd status

httpd (pid 14453) is running...
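Once the failover domain, resources and service group are in place, the resource-manager (<rm>) section of cluster.conf on the nodes looks roughly like the sketch below. The failover-domain name, the priority values and the choice of a Script resource for httpd are assumptions made for illustration; the floating IP 192.168.122.122 and the service name apsche come from the outputs elsewhere in this article.

<rm>
        <failoverdomains>
                <!-- name and priorities are hypothetical; lower number = higher priority -->
                <failoverdomain name="wangzi_fod" nofailback="0" ordered="1">
                        <failoverdomainnode name="192.168.122.34" priority="1"/>
                        <failoverdomainnode name="192.168.122.33" priority="2"/>
                        <failoverdomainnode name="192.168.122.82" priority="3"/>
                </failoverdomain>
        </failoverdomains>
        <resources>
                <ip address="192.168.122.122" monitor_link="on"/>
                <script file="/etc/init.d/httpd" name="httpd"/>
        </resources>
        <service domain="wangzi_fod" name="apsche" recovery="relocate">
                <ip ref="192.168.122.122"/>
                <script ref="httpd"/>
        </service>
</rm>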


Testing:

[root@desk34 ~]# clustat
Cluster Status for wangzi_1 @ Sat Sep  7 02:52:18 2013
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 192.168.122.34                        1 Online, Local, rgmanager
 192.168.122.33                        2 Online, rgmanager
 192.168.122.82                        3 Online, rgmanager

 Service Name                       Owner (Last)          State
 ------- ----                       ----- ------          -----
 service:apsche                     192.168.122.34        started
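In addition to the failure tests below, the service can also be relocated by hand with rgmanager's clusvcadm tool, for example (moving apsche to desk33):

# clusvcadm -r apsche -m 192.168.122.33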

1) Stop the httpd service:

[root@desk34 ~]# /etc/init.d/httpd stop


224000141.png

224002551.png

The floating IP 192.168.122.122 will appear on desk33:

[root@desk33 ~]# ip addr show

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d0:fe:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.33/24 brd 192.168.122.255 scope global eth0
    inet 192.168.122.122/24 scope global secondary eth0

If httpd on desk33 is also stopped, the service will switch to desk82.

If httpd on desk34 is brought back up, the floating IP will return to desk34, because desk34 has the highest priority and failback is enabled.

2) Network failure simulation:

[root@desk34 ~]# ifconfig eth0 down

desk34 will be rebooted (fenced) and the service will switch to desk33.

After desk34 has finished rebooting, the service fails back to desk34.

3) Kernel crash:

[root@desk34 ~]# echo c > /proc/sysrq-trigger

224004122.png

224006919.png

The host reboots and the service switches to desk33.


Xi'an Shiyou University

Wang Ziyin (王茲銀)

[email protected]

