High Availability for an LVS Cluster

Platform: Red Hat Enterprise Linux 5.8

IP address plan:

LVS-DR-master HA1: 172.16.66.6

LVS-DR-backup HA2: 172.16.66.7

LVS-DR-vip: 172.16.66.1

LVS-DR-rs1: 172.16.66.4

LVS-DR-rs2: 172.16.66.5

Package download references:

http://www.linuxvirtualserver.org/software/kernel-2.6/

http://www.keepalived.org/software/


Preparation on every machine

Disable SELinux

# getenforce   (check the SELinux status; if it reports enforcing, apply the steps below)

# setenforce 0   

# vim /etc/sysconfig/selinux   (takes permanent effect only after a reboot)

Change SELINUX=enforcing to SELINUX=disabled
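Equivalently, a one-line edit should work (a sketch; on RHEL 5, /etc/sysconfig/selinux is a symlink to /etc/selinux/config, so edit the target file):

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# getenforce   (prints Permissive until reboot; setenforce 0 already covers the running system)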


I. Configuring the real servers (RS)

1. Configuring RS1

1) Configure the local IP (switch the VM's NIC to bridged mode):

setup --> Network configuration --> Edit Devices --> eth0(eth0) - Advanced Micro Devices [AMD] --> set the IP to 172.16.66.4

(or edit /etc/sysconfig/network-scripts/ifcfg-eth0 directly)

# service network restart   (restart networking; do this after every configuration change)
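For reference, a minimal static ifcfg-eth0 for RS1 might look like the following (a sketch using this article's addresses; HWADDR omitted):

DEVICE=eth0
BOOTPROTO=static
IPADDR=172.16.66.4
NETMASK=255.255.0.0
ONBOOT=yes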

2) Create the lvs.sh init script, make it executable, and start it

# vim /etc/init.d/lvs.sh 

#!/bin/bash

#

# Script to start LVS DR real server.

# chkconfig: - 90 10

# description: LVS DR real server

#

. /etc/rc.d/init.d/functions

VIP=172.16.66.1

host=`/bin/hostname`

case "$1" in

start)

# Start LVS-DR real server on this machine.

/sbin/ifconfig lo down

/sbin/ifconfig lo up

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up

# (netmask 255.255.255.255 with broadcast set to the VIP itself confines the address to a one-host segment, so the RS never announces it on the LAN)

/sbin/route add -host $VIP dev lo:0

;;

stop)

# Stop LVS-DR real server loopback device(s).

/sbin/ifconfig lo:0 down

echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce

;;

status)

# Status of LVS-DR real server.

islothere=`/sbin/ifconfig lo:0 | grep $VIP`

isrothere=`netstat -rn | grep "lo:0" | grep $VIP`

if [ ! "$islothere" -o ! "isrothere" ];then

# Either the route or the lo:0 device

# not found.

echo "LVS-DR real server Stopped."

else

echo "LVS-DR real server Running."

fi

;;

*)

# Invalid entry.

echo "$0: Usage: $0 {start|status|stop}"

exit 1

;;

esac

# chmod +x /etc/init.d/lvs.sh   (make it executable)

# cd /etc/init.d/

# ./lvs.sh start   (start the service)
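A few optional sanity checks after starting it (illustrative; exact output varies):

# ifconfig lo:0                                 (should show 172.16.66.1, netmask 255.255.255.255)
# route -n | grep 172.16.66.1                   (should show a host route on lo)
# cat /proc/sys/net/ipv4/conf/all/arp_ignore    (should print 1)

Because the script carries a chkconfig header, it can also be registered to start at boot:

# chkconfig --add lvs.sh
# chkconfig lvs.sh on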

3) Install httpd, provide a test page, and start the service

# yum install httpd -y

# echo "RS1.magedu.com">/var/www/html/index.html

# service httpd start
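A quick local check (illustrative):

# curl http://172.16.66.4   (should return RS1.magedu.com)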


4) Verify the setup

From the physical host, ping 172.16.66.1 and confirm it answers.


Once the ping succeeds, run arp -a to see which machine answered for the VIP.

Verify with ifconfig that the VIP 172.16.66.1 is bound on lo:0.

2. Configuring RS2 (same as RS1)

1) Configure the IP (NIC in bridged mode):

IP: 172.16.66.5

# vim /etc/sysconfig/network-scripts/ifcfg-eth0   (set the IP)

# service network restart   (restart networking)

2) Create the init script, make it executable, and start it

# vim /etc/init.d/lvs.sh   (same script content as on RS1)

# cd /etc/init.d/

# chmod +x lvs.sh   (make it executable)

# ./lvs.sh start   (start the service)

3) Install httpd, provide its test page, and start the service

# yum install httpd -y

# echo "RS2.magedu.com">/var/www/html/index.html

# service httpd start


4) Verify the setup

From the physical host, ping 172.16.66.1 and confirm it answers, then run arp -a to see which machine responded.

# ifconfig   (verify the VIP binding)

II. Configuring nodes HA1 and HA2

Each node will also serve its own local page (used for the web-service HA tests later).

HA1: 172.16.66.6

HA2: 172.16.66.7

VIP: 172.16.66.1 (virtual IP)

The two nodes are referred to as node1 and node2.

There are a few things to get right when building the cluster:

1) Node names: name resolution must not depend on DNS; rely on the local /etc/hosts file, and each node's name must match the output of uname -n.

2) Mutual SSH trust: the nodes must be able to log in to each other's accounts with key-based authentication, without passwords.

3) The nodes' clocks must be synchronized. This is a basic prerequisite, because high-availability nodes constantly monitor each other's heartbeats.

1. Set the hostnames on HA1 and HA2

1) On HA1

# hostname node1.magedu.com

# vim /etc/sysconfig/network   (set HOSTNAME=node1.magedu.com)

2) On HA2

# hostname node2.magedu.com

# vim /etc/sysconfig/network   (set HOSTNAME=node2.magedu.com)


2. Set up mutual SSH trust between HA1 and HA2

1) On HA1

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''   (generate a key pair with an empty passphrase)

# ssh-copy-id -i .ssh/id_rsa.pub [email protected]   (push the public key to HA2)


# ssh 172.16.66.7   (log in to HA2 without a password to confirm the trust)


2) On HA2

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

# ssh-copy-id -i .ssh/id_rsa.pub [email protected]   (push the public key to HA1)

# ssh 172.16.66.6 'ifconfig'   (run a command on HA1 to confirm the trust)


3. Configure host resolution and time synchronization

On HA1:

1) Host resolution

# vim /etc/hosts   (add the following entries)

172.16.66.6 node1.magedu.com node1

172.16.66.7 node2.magedu.com node2


# ping node2   (test that node2 resolves and answers)


# scp /etc/hosts node2:/etc/   (copy the hosts file to HA2 so both sides stay consistent)

# iptables -L   (make sure no iptables rules get in the way)

2) Time synchronization

# date

# ntpdate 172.16.0.1   (sync the clock from a time server, 172.16.0.1 here)


# service ntpd stop   (stop the ntpd daemon)

# chkconfig ntpd off   (keep ntpd from starting at boot)

# crontab -e   (keep the clock in sync from now on; add the line below)

*/5 * * * * /sbin/ntpdate 172.16.0.1 &>/dev/null

# scp /var/spool/cron/root node2:/var/spool/cron/   (copy the crontab to HA2)

On HA2:

# ping node1   (confirm node1 resolves and answers)

# ping node1.magedu.com

# date

# crontab -l   (confirm the crontab was copied over from node1)


III. Making LVS highly available with keepalived

1. Install keepalived and ipvsadm on both HA1 and HA2 (ipvsadm ships in the distribution's repos, so it installs directly)

# yum -y --nogpgcheck localinstall keepalived-1.2.7-5.el5.i386.rpm   (install the keepalived package)

# scp keepalived-1.2.7-5.el5.i386.rpm node2:/root/   (copy the package to node2 and install it there the same way)

# yum -y install ipvsadm   (handy for inspecting the IPVS rules)

2. Configure keepalived for service failover

On the master node HA1:

[root@node1 ~]# cd /etc/keepalived/

[root@node1 keepalived]# ls   (list the configuration files)

keepalived.conf keepalived.conf.haproxy_example notify.sh

[root@node1 keepalived]# cp keepalived.conf keepalived.conf.bak   (back up the main configuration)

[root@node1 keepalived]# vim keepalived.conf   (edit as follows)

! Configuration File for keepalived

global_defs {

notification_email {

root@localhost

}

notification_email_from keepalived@localhost

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id LVS_DEVEL

}

vrrp_instance VI_1 {

state MASTER

interface eth0    # physical interface that carries the VRRP advertisements

virtual_router_id 79

priority 101

advert_int 1

authentication {

auth_type PASS

auth_pass keepalivedpass  # simple password authentication

}

virtual_ipaddress {

172.16.66.1/16 dev eth0 label eth0:0  # the VIP, configured on the NIC as an alias

}

virtual_server 172.16.66.1 80 {

delay_loop 6

lb_algo rr

lb_kind DR

nat_mask 255.255.0.0

# persistence_timeout 50 # persistent connections are not needed here

protocol TCP

real_server 172.16.66.4 80 {

weight 1

HTTP_GET {

url {

path /

status_code 200

}

connect_timeout 2

nb_get_retry 3

delay_before_retry 1

}

}

real_server 172.16.66.5 80 {

weight 1

HTTP_GET {

url {

path /

status_code 200

}

connect_timeout 2

nb_get_retry 3

delay_before_retry 1

}

}

}
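The HTTP_GET blocks poll each real server at / and require a 200 response. You can mimic the probe by hand (an illustrative command, not something keepalived itself runs):

# curl -s -o /dev/null -w '%{http_code}\n' http://172.16.66.4/
200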

(vrrp_instance VI_1 defines the VRRP virtual router. Of the two peers, one starts as MASTER and the other as BACKUP, and the MASTER's priority must be somewhat higher than the BACKUP's. When a monitored service fails, a check script lowers the MASTER's priority, and the amount subtracted must be enough to push it below the BACKUP's priority so that the BACKUP can win the election.)
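With the numbers used throughout this article: the master runs at priority 101 and the backup at 100. A failing check with weight -2 (configured in the later sections) drops the master to 101 - 2 = 99, which is below 100, so the backup wins the next election and takes over the VIP; once the check passes again the master returns to 101 and reclaims it.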

3. Copy the configuration file over to node HA2

[root@node1 keepalived]# scp keepalived.conf node2:/etc/keepalived/

On HA2:

[root@node2 keepalived]# vim keepalived.conf   (modify as follows; only two settings change)

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 79

priority 100   # must be lower than the master's

advert_int 1

authentication {

auth_type PASS

auth_pass keepalivedpass

}

4. Start the service on both nodes

[root@node1 keepalived]# service keepalived start

Starting keepalived: [ OK ]

[root@node2 keepalived]# service keepalived start

Starting keepalived: [ OK ]

5. Check the addresses and ipvsadm rules, then browse from the physical host

ifconfig on the master shows the VIP 172.16.66.1 bound on eth0:0.

View the ipvsadm rules:

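The rules should look roughly like this (a sketch of the expected output):

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.66.1:80 rr
  -> 172.16.66.4:80               Route   1      0          0
  -> 172.16.66.5:80               Route   1      0          0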

Browse to 172.16.66.1 from the physical host; one RS page is returned.

Refresh the page; with round-robin scheduling the other RS page appears.

Check the ipvsadm rules again; the connection counters now show traffic hitting both real servers.

IV. Making the web service highly available with keepalived

Only the two VMs HA1 and HA2 are needed for this part.

1. Configure HA1

[root@node1 ~]# service keepalived stop

[root@node1 ~]# yum -y install httpd

[root@node1 ~]# vim /var/www/html/index.html   (a page that identifies node1)

[root@node1 keepalived]# service httpd start

Starting httpd: [ OK ]

Browsing to 172.16.66.6 from the physical host shows node1's page (or check locally with curl http://172.16.66.6).


2. Configure HA2

[root@node2 ~]# service keepalived stop   (stop the keepalived service)

[root@node2 ~]# yum -y install httpd   (install httpd)

[root@node2 keepalived]# vim /var/www/html/index.html   (a page that identifies node2)

[root@node2 keepalived]# service httpd start

Starting httpd: [ OK ]

Browse to 172.16.66.7 from the physical host (or curl http://172.16.66.7 locally).

3. Edit node1's keepalived configuration, provide the matching notify script, and start the service

[root@node1 ~]# cd /etc/keepalived/

[root@node1 keepalived]# cp keepalived.conf.haproxy_example keepalived.conf

cp: overwrite `keepalived.conf'? yes

Modify the configuration on both nodes, then restart the service.

On HA1:

1) Edit the keepalived configuration

[root@node1 keepalived]# vim keepalived.conf   (content as follows)

! Configuration File for keepalived

global_defs {

notification_email {

[email protected]

[email protected]

}

notification_email_from [email protected]

smtp_connect_timeout 3

smtp_server 127.0.0.1

router_id LVS_DEVEL

}

vrrp_script chk_httpd {

script "killall -0 httpd"

interval 2

# check every 2 seconds

weight -2

# if the check fails, decrease the priority by 2

fall 2

# require 2 consecutive failures to declare failure

rise 1

# require 1 success to declare OK

}

vrrp_script chk_schedown {

script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"

interval 2

weight -2

}
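chk_schedown acts as a manual maintenance switch: creating the flag file makes the script exit non-zero, so keepalived subtracts the weight from this node's priority; removing the file restores it (this is exercised in Part V). For example:

# touch /etc/keepalived/down   (drop this node's priority by 2 and force a failover)
# rm -f /etc/keepalived/down   (restore the priority; the node reclaims the VIP)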

vrrp_instance VI_1 {

interface eth0

# interface for inside_network, bound by vrrp

state MASTER

# Initial state, MASTER|BACKUP

# As soon as the other machine(s) come up,

# an election will be held and the machine

# with the highest "priority" will become MASTER.

# So the entry here doesn't matter a whole lot.

priority 101

# for electing MASTER, highest priority wins.

# to be MASTER, make 50 more than other machines.

virtual_router_id 51

# arbitary unique number 0..255

# used to differentiate multiple instances of vrrpd

# running on the same NIC (and hence same socket).

garp_master_delay 1

authentication {

auth_type PASS

auth_pass password

}

track_interface {

eth0

}

# optional, monitor these as well.

# go to FAULT state if any of these go down.

virtual_ipaddress {

172.16.66.1/16 dev eth0 label eth0:0

}

#addresses add|del on change to MASTER, to BACKUP.

#With the same entries on other machines,

#the opposite transition will be occuring.

#<IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>

track_script {

chk_httpd

chk_schedown

}

notify_master "/etc/keepalived/notify.sh master"

notify_backup "/etc/keepalived/notify.sh backup"

notify_fault "/etc/keepalived/notify.sh fault"

}

#vrrp_instance VI_2 {

# interface eth0

# state MASTER # BACKUP for slave routers

# priority 101 # 100 for BACKUP

# virtual_router_id 79

# garp_master_delay 1

#

# authentication {

# auth_type PASS

# auth_pass password

# }

# track_interface {

# eth0

# }

# virtual_ipaddress {

# 172.16.66.2/16 dev eth0 label eth0:1

# }

# track_script {

# chk_httpd

# chk_schedown

# }

#

# notify_master "/etc/keepalived/notify.sh master eth0:1"

# notify_backup "/etc/keepalived/notify.sh backup eth0:1"

# notify_fault "/etc/keepalived/notify.sh fault eth0:1"

#}
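The notify.sh script referenced above appeared in the earlier ls of /etc/keepalived/ but is not listed in this article. A minimal sketch of what such a script typically does (an assumption, not the original file; it mails state transitions to root):

#!/bin/bash
# /etc/keepalived/notify.sh -- hypothetical minimal version
vip=172.16.66.1
contact='root@localhost'

notify() {
    # $1 is the new VRRP state: master, backup or fault
    local subject="$(hostname) changed to be $1"
    local body="$(date '+%F %T'): vrrp transition, $(hostname) changed to be $1, $vip floating"
    echo "$body" | mail -s "$subject" "$contact"
}

case "$1" in
master|backup|fault)
    notify "$1"
    ;;
*)
    echo "Usage: $(basename "$0") {master|backup|fault}"
    exit 1
    ;;
esac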

2) Copy the configuration file and the script to node2

# scp keepalived.conf notify.sh node2:/etc/keepalived/


[root@node1 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

On HA2:

[root@node2 keepalived]# vim keepalived.conf   (change only the following)

vrrp_instance VI_1 {

interface eth0

# interface for inside_network, bound by vrrp

state BACKUP

# Initial state, MASTER|BACKUP

# As soon as the other machine(s) come up,

# an election will be held and the machine

# with the highest "priority" will become MASTER.

# So the entry here doesn't matter a whole lot.

priority 100

# for electing MASTER, highest priority wins.

# to be MASTER, make 50 more than other machines.

[root@node2 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

4. Simulate a master failure

First stop the web service on the master, then check whether the VIP has floated to the backup (with httpd gone, chk_httpd fails and drops the master's priority below the backup's).

[root@node1 keepalived]# service httpd stop

Stopping httpd: [ OK ]

On the master, ifconfig no longer shows the VIP on eth0:0.

On the backup, ifconfig now shows the VIP 172.16.66.1 on eth0:0.

Test: browse to 172.16.66.1 from the physical host; node2's page is returned.

V. A dual-master model for the web service with keepalived

The dual-master model builds on the master/backup setup above.

1. Edit the configuration files on both nodes

HA1:

[root@node1 keepalived]# vim keepalived.conf   (add a second instance as follows)

vrrp_instance VI_2 {

interface eth0

state BACKUP # BACKUP for slave routers

priority 100 # 100 for BACKUP

virtual_router_id 79

garp_master_delay 1

authentication {

auth_type PASS

auth_pass password

}

track_interface {

eth0

}

virtual_ipaddress {

172.16.66.2/16 dev eth0 label eth0:1

}

track_script {

chk_httpd

chk_schedown

}

notify_master "/etc/keepalived/notify.sh master eth0:1"

notify_backup "/etc/keepalived/notify.sh backup eth0:1"

notify_fault "/etc/keepalived/notify.sh fault eth0:1"

}

HA2:

[root@node2 keepalived]# vim keepalived.conf

vrrp_instance VI_2 {

interface eth0

state MASTER # this side is MASTER for the second instance

priority 101 # 100 on the BACKUP side

virtual_router_id 79 # must match HA1's VI_2

garp_master_delay 1

authentication {

auth_type PASS

auth_pass password

}

track_interface {

eth0

}

virtual_ipaddress {

172.16.66.2/16 dev eth0 label eth0:1

}

track_script {

chk_httpd

chk_schedown

}

notify_master "/etc/keepalived/notify.sh master eth0:1"

notify_backup "/etc/keepalived/notify.sh backup eth0:1"

notify_fault "/etc/keepalived/notify.sh fault eth0:1"

}

2. Restart keepalived on both nodes

[root@node1 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

[root@node2 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

3. Verify: check each node's addresses, then browse from the physical host

On node1, ifconfig shows the VIP 172.16.66.1 on eth0:0.

On node2, ifconfig shows the VIP 172.16.66.2 on eth0:1.

Browse to 172.16.66.1 from the physical host; node1's page is returned.

Browse to 172.16.66.2; node2's page is returned.

4. Simulate node1 going down

[root@node1 keepalived]# touch down   (creates /etc/keepalived/down, which fails chk_schedown)

Check node1's addresses with ifconfig: both VIPs are gone.

Check node2: ifconfig now shows both 172.16.66.1 (eth0:0) and 172.16.66.2 (eth0:1).

Verify from the physical host: requests to both 172.16.66.1 and 172.16.66.2 are now answered by node2.

[root@node1 keepalived]# rm -f down   (remove the maintenance flag file)

With the down file gone, check the addresses again: node1 reclaims its resources, 172.16.66.1 reappears on its eth0:0, and node2 keeps only 172.16.66.2.

That covers part of the high-availability functionality you can build with keepalived; I hope it helps.
