LVS-DR mode + keepalived for high availability

I. Background and a brief introduction to LVS

   Background: servers have to handle large numbers of concurrent requests, so on a heavily loaded server the CPU and I/O quickly become bottlenecks. Because a single server's performance is always limited, simply upgrading the hardware does not really solve the problem. Introducing multiple servers plus load-balancing technology (several servers combined into one virtual server) handles the high concurrency; the appeal of this approach is that it provides a solution whose load capacity is easy to scale at a low cost.

   Components

  1. Scheduler (Director)
  2. Real servers

   How LVS works:

How DR mode forwards a request:

  1. The client sends a request to the load balancer (Director Server); the Director passes the request into kernel space, where it is inspected by the netfilter/IPVS kernel modules.
  2. The PREROUTING chain receives the request first and checks whether the destination address is an IP configured on the Director; if it is, the packet is handed to the INPUT chain.
  3. IPVS hooks into the INPUT chain. When the request arrives there, IPVS compares it with the cluster services that have been defined; if the request matches one of them, IPVS picks a real server according to the scheduling algorithm and, in DR mode, rewrites only the destination MAC address of the frame to that of the chosen real server (the destination IP and port remain the VIP), then hands the packet to the POSTROUTING chain.
  4. POSTROUTING sends the frame out on the local network straight to the selected real server; because the real server also has the VIP configured locally, it accepts the packet and replies to the client directly, without passing back through the Director.
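
Before adding any rules it can help to confirm that the IPVS kernel module is present. A quick check, not part of the original steps, assuming a stock RHEL 6.5 kernel:

[root@server1 ~]# modprobe ip_vs            # load the IPVS module if it is not loaded yet
[root@server1 ~]# lsmod | grep ip_vs        # it should now appear in the module list
[root@server1 ~]# cat /proc/net/ip_vs       # IPVS version and the (still empty) virtual service table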

II. Configuring LVS-DR

Lab environment: RHEL 6.5

Lab hosts:

Host IP           Hostname        Role
172.25.254.1      Server1         Director Server
172.25.254.2      Server2         Real Server
172.25.254.3      Server3         Real Server
172.25.254.61     foundation61    Test host

Preparation

Install the httpd service on server2 and server3, create the test pages, and set up local name resolution so that testing is easier; a minimal sketch of these steps follows.
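
Roughly, on the two real servers (the page contents are just examples matching the curl output later in this article):

[root@server2 ~]# yum install -y httpd
[root@server2 ~]# echo "<h1>www.westos.org-server2</h1>" > /var/www/html/index.html
[root@server3 ~]# yum install -y httpd
[root@server3 ~]# echo "<h1>bbs.westos.org-server3</h1>" > /var/www/html/index.html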

1. Configure the yum repositories on the Director, server1, so that the additional channels (LoadBalancer, HighAvailability) are available:
[root@server1 ~]# vim /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.254.61/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.254.61/rhel6.5/LoadBalancer
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.254.61/rhel6.5/HighAvailability
gpgcheck=0

2. Install ipvsadm on server1; ipvsadm is the command-line tool for managing cluster services and adding LVS rules:

 yum install -y ipvsadm

3. Check the local name resolution:

vim /etc/hosts
172.25.254.1  server1 www.westos.org westos.org bbs.westos.org www.linux.org

Add a virtual IP, 172.25.254.100 (check the interface configuration with ip addr):

[root@server1 ~]# ip addr add 172.25.254.100/24 dev eth0

Add a virtual service: -A adds a virtual service, -t gives the TCP service address (VIP:port), -s selects the scheduling algorithm, here rr (round-robin):

[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr

Add the back-end real servers, here server2 and server3 (-r adds a real server, -g selects gateway/direct-routing mode):

[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.3:80 -g
[root@server1 ~]# ipvsadm -ln    # check that the rules were added successfully
[root@server1 ~]# ipvsadm -lnc
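
Note that rules added with ipvsadm live only in the kernel and are lost on reboot. On RHEL 6 they can be persisted, for example like this (a sketch, assuming the stock ipvsadm package, whose init script reloads /etc/sysconfig/ipvsadm at boot):

[root@server1 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm    # dump the current rules
[root@server1 ~]# chkconfig ipvsadm on                        # reload them automatically at boot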

4. On server2 and server3, start httpd and add the virtual IP:

[root@server2 html]# /etc/init.d/httpd  restart
[root@server2 html]# ip addr add 172.25.254.100/32 dev eth0
[root@server2 html]# yum install -y arptables_jf   # install the ARP filtering tool

Drop all ARP requests from the network for the VIP 172.25.254.100, so that the real server never answers ARP for the VIP:

[root@server2 html]# arptables -A IN -d 172.25.254.100 -j DROP

When server2 itself sends ARP traffic for 172.25.254.100, rewrite the source address so that it appears as server2's real IP:

[root@server2 html]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2

Save the rules that were added:

[root@server2 html]# /etc/init.d/arptables_jf save

Do the same on server3:

[root@server3 ~]# /etc/init.d/httpd start
[root@server3 ~]# ip addr add 172.25.254.100/32 dev eth0
[root@server3 ~]# yum install -y arptables_jf
[root@server3 ~]#  arptables -A IN -d 172.25.254.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.3
[root@server3 ~]# /etc/init.d/arptables_jf save
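
arptables_jf is only one way of keeping the real servers from answering ARP for the VIP. A widely used alternative (not used in this lab, shown only as a sketch) is the kernel's arp_ignore/arp_announce sysctls combined with putting the VIP on the loopback interface:

[root@server2 ~]# sysctl -w net.ipv4.conf.all.arp_ignore=1       # only answer ARP for addresses on the receiving interface
[root@server2 ~]# sysctl -w net.ipv4.conf.all.arp_announce=2     # use the best matching local address in ARP announcements
[root@server2 ~]# sysctl -w net.ipv4.conf.lo.arp_ignore=1
[root@server2 ~]# sysctl -w net.ipv4.conf.lo.arp_announce=2
[root@server2 ~]# ip addr add 172.25.254.100/32 dev lo           # VIP on lo instead of eth0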

Test:
If the test host cannot reach 172.25.254.100 while the real servers can still be reached directly, it is because server2 and server3 do not have the VIP configured (ip addr add ... 100), so they drop the packets forwarded by the Director:

[root@foundation61 rhel6.5]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; No route to host

If the requests hit one real server directly without passing through server1 (because that real server answered the ARP query for the VIP), there is no round-robin; this is exactly what arptables_jf is installed to prevent:

[root@foundation61 ~]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>
[root@foundation61 ~]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>

Clear the stale ARP entry on the client (arp -d) so that new requests resolve the VIP to the Director again and round-robin takes effect, as in the sketch below.
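
For example, on the test host (illustrative commands, not copied from the original run):

[root@foundation61 ~]# arp -d 172.25.254.100     # drop the stale ARP entry for the VIP
[root@foundation61 ~]# curl 172.25.254.100       # the Director now answers and rr alternates between server2 and server3
[root@foundation61 ~]# curl 172.25.254.100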

III. Health checking under DR

  If we stop the httpd service on server2 by hand, the client does not know this and will still be scheduled to server2. LVS therefore needs health checking: a checker monitors the state of the back-end servers, automatically removes a failed server from the cluster so that requests only reach working servers, and if all back-end servers are down it returns an error page instead.


1. Download and install the package ldirectord-3.9.5-3.1.x86_64.rpm, which performs the back-end health checks:

[root@server1 pub]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
[root@server1 pub]# rpm -ql ldirectord
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz

2. Copy the sample configuration file shipped with the package to the proper location:

[root@server1 pub]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d
[root@server1 pub]# cd /etc/ha.d

3. Edit the configuration file:

[root@server1 ha.d]# vim ldirectord.cf
virtual=172.25.254.100:80            # the virtual IP clients access
        real=172.25.254.2:80 gate    # back-end real server
        real=172.25.254.3:80 gate    # back-end real server
        fallback=127.0.0.1:80 gate   # if all real servers are down, the Director itself takes over
        service=http
        scheduler=rr                 # round-robin scheduling
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"        #receive="Test Page"
        #virtualhost=www.x.y.z
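
Because fallback points at 127.0.0.1:80, the Director itself should run a local httpd serving a placeholder page, so that clients still get an answer when every real server is down. A minimal sketch (the page text is only an example):

[root@server1 ~]# yum install -y httpd
[root@server1 ~]# echo "site under maintenance" > /var/www/html/index.html
[root@server1 ~]# /etc/init.d/httpd start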

4. Start the health-check service:

[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success

Clear the previously added back-end rules (e.g. with ipvsadm -C); ldirectord rebuilds them from its configuration file.
Test: the first few requests round-robin correctly between the real servers; after httpd on server2 is stopped by hand, the health check kicks in and every request is served by server3, as sketched below.
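
One way to reproduce the failover (illustrative commands):

[root@server2 ~]# /etc/init.d/httpd stop      # simulate a failure on server2
[root@server1 ~]# ipvsadm -ln                 # ldirectord removes 172.25.254.2 from the virtual service
[root@foundation61 ~]# curl 172.25.254.100    # every request is now answered by server3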

IV. High availability (keepalived)

1. On the Director, unpack the keepalived source and configure the build:

[root@server1 pub]# tar zxf keepalived-2.0.6.tar.gz
[root@server1 pub]# cd keepalived-2.0.6
[root@server1 keepalived-2.0.6]# yum install -y openssl-devel.x86_64  gcc    # install keepalived's build dependencies
[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV

2. Compile and install the source:

[root@server1 keepalived-2.0.6]# make
[root@server1 keepalived-2.0.6]# make install

3. Create symbolic links so that keepalived is easier to manage and configure, and give the init script execute permission:

[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 sysconfig]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 etc]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server1 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 ~]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived
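
If keepalived should also come up automatically at boot, the linked init script can be registered with chkconfig (a sketch; this assumes the SYSV init script installed by --with-init=SYSV carries the usual chkconfig header):

[root@server1 ~]# chkconfig --add keepalived
[root@server1 ~]# chkconfig keepalived on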

4. Copy the compiled keepalived tree from server1 to server4, and create the same symbolic links on server4:

[root@server1 local]# scp -r keepalived/ [email protected]:/usr/local
server4:
[root@server4 keepalived]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server4 keepalived]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server4 keepalived]#  ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server4 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server4 keepalived]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Sep 23 14:24 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server4 keepalived]# ll /usr/local/keepalived/etc/keepalived/
total 8
-rw-r--r-- 1 root root 3550 Sep 23 14:23 keepalived.conf
drwxr-xr-x 2 root root 4096 Sep 23 14:23 samples
[root@server4 keepalived]# ll /usr/local/keepalived/etc/rc.d/init.d/keepalived
-rwxr-xr-x 1 root root 1308 Sep 23 14:23 /usr/local/keepalived/etc/rc.d/init.d/keepalived

Problem: requests time out, and after a virtual machine is rebooted its virtual IP is gone.
[root@foundation61 lvs]# curl 172.25.254.100
First check basic network connectivity with ping, then look at which MAC address the VIP resolves to:
[root@foundation61 lvs]# arp -an |grep 100
? (172.25.254.100) at <incomplete> on br0
? (172.25.254.100) at 52:54:00:3b:b0:e6 [ether] on br0
The ARP entry for the VIP is incomplete or stale.
Solution: re-add the VIP on the real server that lost it:
[root@server2 ~]# ip addr add 172.25.254.100/32 dev eth0
Access it again:
[root@foundation254 lvs]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>
[root@foundation254 lvs]# curl 172.25.254.100
<h> www.westos.org-server2</h>
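
Because ip addr add is not persistent, the VIP disappears every time a real server reboots, which is exactly what happened here. One simple way to make it survive reboots (a sketch; a network-scripts alias file would work just as well):

[root@server2 ~]# echo "ip addr add 172.25.254.100/32 dev eth0" >> /etc/rc.local
[root@server3 ~]# echo "ip addr add 172.25.254.100/32 dev eth0" >> /etc/rc.local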
Master/backup: server1 and server4 act as master and backup for each other. To give the second virtual IP (172.25.254.200) a service to balance, an FTP server is set up on the real servers:

[root@server2 ~]# yum install vsftpd -y
[root@server2 ~]# /etc/init.d/vsftpd start
[root@server2 ftp]# touch server2
[root@server2 ftp]# ip addr add 172.25.254.200/32 dev eth0
[root@server2 ftp]# vim /etc/sysconfig/arptables
[0:0] -A IN -d 172.25.254.100 -j DROP
[0:0] -A IN -d 172.25.254.200 -j DROP
[0:0] -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2
[0:0] -A OUT -s 172.25.254.200 -j mangle --mangle-ip-s 172.25.254.2
[root@server2 ftp]# /etc/init.d/arptables_jf restart
[root@server2 ftp]# arptables -L


Do the same steps on server3.
On server1, edit the keepalived configuration file:
vim /usr/local/keepalived/etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 254
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
        172.25.254.200
    }
}

virtual_server 172.25.254.200 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.25.254.2 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.3 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

server1 and server4 as mutual master and backup:

server1:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 254   # must be unique per VRRP instance
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 154   # VRID must be between 1 and 255 and differ from VI_1
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200
    }
}
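
One way to copy the file over before adjusting the roles on server4 (a sketch, assuming server4 is 172.25.254.4 as in the earlier scp, and that /etc/keepalived is the linked configuration directory on both nodes):

[root@server1 ~]# scp /etc/keepalived/keepalived.conf [email protected]:/etc/keepalived/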


Copy the configuration file to server4 with scp (see the sketch above), then adjust the roles so that on server4 VI_1 is BACKUP and VI_2 is MASTER:

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 254
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 154   # must match VI_2 on server1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200
    }
}
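
The restart is not shown explicitly here: after editing the configuration on both nodes, keepalived has to be started (or restarted) on server1 and server4 for the VRRP instances to bring up the VIPs and generate the LVS rules. A sketch of the start plus a quick check:

[root@server1 ~]# /etc/init.d/keepalived start
[root@server4 ~]# /etc/init.d/keepalived start
[root@server1 ~]# ip addr show eth0 | grep 172.25.254    # the MASTER instance holds its VIP
[root@server1 ~]# ipvsadm -ln                            # the virtual_server blocks appear as LVS rules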

Test:

[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls
Interrupt                                    
lftp 172.25.254.200:~> lftp 172.25.254.200
lftp 172.25.254.200:~> exit
[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls
drwxr-xr-x    2 0        0            4096 Feb 12  2013 pub
-rw-r--r--    1 0        0               0 Sep 23 08:13 server2
lftp 172.25.254.200:/>
lftp 172.25.254.200:/> exit
[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls              
drwxr-xr-x    2 0        0            4096 Feb 12  2013 pub
-rw-r--r--    1 0        0               0 Sep 23 08:12 server3
lftp 172.25.254.200:/>
lftp 172.25.254.200:/> exit

For more detail on the underlying theory, see: http://www.mamicode.com/info-detail-1488579.html
