Contents
1. Introduction to keepalived
2. Installing keepalived and implementing VRRP
3. Summary
1. Introduction to keepalived
keepalived is routing software written in C. It began as an extension project for IPVS, providing high availability (failover) for IPVS, which it implements with the VRRP protocol. It also performs health checks on the real servers in a load-balanced pool: when a real server becomes unavailable, keepalived isolates the failed node on its own, making up for IPVS's inability to health-check real servers. This is keepalived's most common use case, although it is by no means limited to providing high availability for IPVS.
1.1. keepalived software architecture
Image from "http://www.keepalived.org/documentation.html"
The lower half is kernel space, made up of two parts, IPVS and NETLINK: IPVS provides the virtual IP service, while NETLINK provides advanced routing and other related networking functions. The upper half is user space, where the following components do the actual work. Among the core components:
1. WatchDog: monitors the healthchecking and VRRP processes; if a child process dies abnormally, the parent process restarts it.
2. Checkers: performs health checking of the real servers, which is keepalived's most important function. It continuously tests each real server and adds or removes LVS rules depending on whether the server is alive. Checks can be performed at layers 4, 5, and 7 of the OSI model. The healthchecking process runs as a separate process monitored by the parent.
3. VRRP Stack: handles failover between the load balancers; runs as a separate process monitored by the parent.
4. IPVS wrapper: sends the configured rules to the kernel's ipvs code.
5. Netlink Reflector: sets the vrrp VIP address and related settings.
For robustness and stability, keepalived runs as three daemons once started: one parent process and two child processes. The parent monitors the two children; one child handles vrrp and the other handles healthchecking.
2. Installing keepalived and implementing VRRP
VRRP stands for Virtual Router Redundancy Protocol. I won't go into much detail on VRRP here (you can look it up yourself). In short, it lets a group of devices running VRRP serve clients through a single virtual IP address. The virtual IP is bound to the MASTER node of the VRRP group; when that node fails, one of the BACKUP nodes takes over the failed device's role, so service is not interrupted. VRRP normally runs on dedicated routing hardware, such as routers from Huawei, Ruijie, or Cisco; keepalived implements the protocol on Linux hosts, giving them the same high-availability capability. Let's look at how keepalived implements VRRP (VRRP version 2, to be precise).
2.1. Installing keepalived
On current mainstream CentOS systems keepalived is available in the yum repositories and can be installed directly with yum, but the packaged version is not the latest stable release. Here I will compile the current stable release from source: "keepalived-1.2.16.tar.gz".
[root@nod1 software]# tar xf keepalived-1.2.16.tar.gz
[root@nod1 software]# cd keepalived-1.2.16
[root@nod1 keepalived-1.2.16]# ls
AUTHOR  ChangeLog  configure.in  COPYING  genhash  install-sh  keepalived.spec.in  Makefile.in  TODO
bin  configure  CONTRIBUTORS  doc  INSTALL  keepalived  lib  README  VERSION
Compilation may fail because of missing dependencies; the following packages are generally required: libnl-devel openssl-devel
[root@nod1 keepalived-1.2.16]# ./configure --prefix=/usr/local/keepalived
When configuration completes, output like the following appears:
Keepalived configuration
------------------------
Keepalived version       : 1.2.16
Compiler                 : gcc
Compiler flags           : -g -O2 -DFALLBACK_LIBNL1
Extra Lib                : -lssl -lcrypto -lcrypt -lnl
Use IPVS Framework       : Yes
IPVS sync daemon support : Yes
IPVS use libnl           : Yes
fwmark socket support    : Yes
Use VRRP Framework       : Yes
Use VRRP VMAC            : Yes
SNMP support             : No
SHA1 support             : No
Use Debug flags          : No
[root@nod1 keepalived-1.2.16]# make && make install
[root@nod1 keepalived]# pwd
/usr/local/keepalived
[root@nod1 keepalived]# ll
total 16
drwxr-xr-x 2 root root 4096 May 26 08:32 bin
drwxr-xr-x 5 root root 4096 May 26 08:32 etc
drwxr-xr-x 2 root root 4096 May 26 08:32 sbin
drwxr-xr-x 3 root root 4096 May 26 08:32 share
[root@nod1 keepalived-1.2.16]# /usr/local/keepalived/sbin/keepalived -v
Keepalived v1.2.16 (05/26,2015)
Once installed, start keepalived to check that it runs properly:
[root@nod1 keepalived]# /usr/local/keepalived/sbin/keepalived -D
[root@nod1 keepalived]# ps aux | grep keepalived
root  8130  0.0  0.2  44844 1032 ?     Ss 08:43 0:00 /usr/local/keepalived/sbin/keepalived -D
root  8131  0.1  0.4  47072 2384 ?     S  08:43 0:00 /usr/local/keepalived/sbin/keepalived -D
root  8132  2.3  0.3  46948 1560 ?     S  08:43 0:00 /usr/local/keepalived/sbin/keepalived -D
root  8143  0.0  0.1 103236  860 pts/0 S+ 08:43 0:00 grep keepalived
# Three daemons have started; the following command shows how they are related:
[root@nod1 keepalived]# pstree | grep keepalived
 |-keepalived---2*[keepalived]
2.2. Implementing VRRP
[root@nod1 keepalived]# pwd
/usr/local/keepalived
[root@nod1 keepalived]# ls
bin  etc  sbin  share
[root@nod1 keepalived]# tree etc -L 3
etc
├── keepalived
│   ├── keepalived.conf
│   └── samples
│       ├── client.pem
│       ├── dh1024.pem
│       ├── keepalived.conf.fwmark
│       ├── keepalived.conf.HTTP_GET.port
│       ├── keepalived.conf.inhibit
│       ├── keepalived.conf.IPv6
│       ├── keepalived.conf.misc_check
│       ├── keepalived.conf.misc_check_arg
│       ├── keepalived.conf.quorum
│       ├── keepalived.conf.sample
│       ├── keepalived.conf.SMTP_CHECK
│       ├── keepalived.conf.SSL_GET
│       ├── keepalived.conf.status_code
│       ├── keepalived.conf.track_interface
│       ├── keepalived.conf.virtualhost
│       ├── keepalived.conf.virtual_server_group
│       ├── keepalived.conf.vrrp
│       ├── keepalived.conf.vrrp.localcheck
│       ├── keepalived.conf.vrrp.lvs_syncd
│       ├── keepalived.conf.vrrp.routes
│       ├── keepalived.conf.vrrp.scripts
│       ├── keepalived.conf.vrrp.static_ipaddress
│       ├── keepalived.conf.vrrp.sync
│       ├── root.pem
│       └── sample.misccheck.smbcheck.sh
├── rc.d
│   └── init.d
│       └── keepalived
└── sysconfig
    └── keepalived
The output above shows a keepalived.conf configuration file under the etc directory, a set of sample configurations under samples, and a startup script under etc/rc.d/init.d/.
Now let's test keepalived's VRRP implementation. I have two nodes here, one at IP 192.168.0.200 and the other at 192.168.0.201; keepalived has been compiled and installed on both.
A full keepalived.conf is generally made up of three configuration blocks: the global definitions block, the vrrp instance block, and the virtual server block. Each block can contain further sub-blocks, and some blocks are optional. To implement just vrrp, the configuration file (on the 192.168.0.200 node) only needs the following:
global_defs {
    router_id LVS_VRRP_1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 123
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.222
    }
}
Notes:
In the global definitions block, "router_id" is a unique identifier for the host running vrrp. In the vrrp instance block, "state MASTER" means this host starts in the MASTER role.
"virtual_router_id 123" is the virtual router identifier, a value that uniquely identifies the "VI_1" vrrp instance. It must be the same on the master and backup nodes, and unique across the whole VRRP deployment; valid values range from 1 to 255.
"priority 150" is the priority within the vrrp instance; the higher the number, the higher the priority.
The configuration on the backup node (192.168.0.201) is as follows:
[root@nod2 ~]# vim /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_VRRP_2        ! differs from the master node
}
vrrp_instance VI_1 {
    state BACKUP                ! differs from the master node
    interface eth0
    virtual_router_id 123
    priority 140                ! lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.222
    }
}
A backup node's configuration usually differs from the master's in these three places.
With both nodes' configuration files in place, keepalived can be started, like so:
[root@nod2 ~]# /usr/local/keepalived/sbin/keepalived -D -f /usr/local/keepalived/etc/keepalived/keepalived.conf
The "-D" option writes startup details to the log, i.e. /var/log/messages.
The "-f" option specifies the configuration file to read; by default keepalived reads /etc/keepalived/keepalived.conf.
[root@nod1 ~]# /usr/local/keepalived/sbin/keepalived -D -f /usr/local/keepalived/etc/keepalived/keepalived.conf
Next, verify that keepalived is working properly. Since it was started with "-D", we just need to look at /var/log/messages:
Here is the log output on nod1:
May 26 09:06:15 nod1 Keepalived[8229]: Starting Keepalived v1.2.16 (05/26,2015)
May 26 09:06:15 nod1 Keepalived[8230]: Starting Healthcheck child process, pid=8231
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Netlink reflector reports IP 192.168.0.200 added
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Netlink reflector reports IP fe80::20c:29ff:fe92:a73d added
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Netlink reflector reports IP fe80::20c:29ff:fe92:a747 added
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Registering Kernel netlink reflector
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Registering Kernel netlink command channel
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Registering gratuitous ARP shared channel
May 26 09:06:15 nod1 Keepalived[8230]: Starting VRRP child process, pid=8232
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Opening file '/usr/local/keepalived/etc/keepalived/keepalived.conf'.
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Configuration is using : 61889 Bytes
May 26 09:06:15 nod1 Keepalived_vrrp[8232]: Using LinkWatch kernel netlink reflector...
May 26 09:06:16 nod1 Keepalived_vrrp[8232]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 26 09:06:16 nod1 Keepalived_vrrp[8232]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 26 09:06:16 nod1 Keepalived_vrrp[8232]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
May 26 09:06:17 nod1 Keepalived_vrrp[8232]: VRRP_Instance(VI_1) Entering MASTER STATE
.....
May 26 09:06:17 nod1 Keepalived_vrrp[8232]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.0.222
The log shows that nod1 has become MASTER and has configured the virtual address 192.168.0.222 on eth0.
And here is the log output on nod2:
May 26 09:05:32 nod2 Keepalived[2819]: Starting Keepalived v1.2.16 (05/26,2015)
May 26 09:05:32 nod2 Keepalived[2820]: Starting Healthcheck child process, pid=2821
May 26 09:05:32 nod2 Keepalived[2820]: Starting VRRP child process, pid=2822
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Netlink reflector reports IP 192.168.0.201 added
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Netlink reflector reports IP fe80::20c:29ff:fec6:f77a added
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Registering Kernel netlink reflector
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Registering Kernel netlink command channel
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Netlink reflector reports IP 192.168.0.201 added
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Netlink reflector reports IP fe80::20c:29ff:fec6:f77a added
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Registering Kernel netlink reflector
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Registering Kernel netlink command channel
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Registering gratuitous ARP shared channel
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Opening file '/usr/local/keepalived/etc/keepalived/keepalived.conf'.
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Configuration is using : 6120 Bytes
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Opening file '/usr/local/keepalived/etc/keepalived/keepalived.conf'.
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Configuration is using : 61761 Bytes
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: Using LinkWatch kernel netlink reflector...
May 26 09:05:32 nod2 Keepalived_healthcheckers[2821]: Using LinkWatch kernel netlink reflector...
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 26 09:05:32 nod2 Keepalived_vrrp[2822]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 26 09:05:36 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 26 09:05:37 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Entering MASTER STATE
May 26 09:05:37 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) setting protocol VIPs.
May 26 09:05:37 nod2 Keepalived_healthcheckers[2821]: Netlink reflector reports IP 192.168.0.222 added
May 26 09:05:37 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.0.222
May 26 09:05:39 nod2 ntpd[1086]: Listen normally on 7 eth0 192.168.0.222 UDP 123
May 26 09:05:39 nod2 ntpd[1086]: peers refreshed
May 26 09:05:42 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.0.222
May 26 09:06:41 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Received higher prio advert
May 26 09:06:41 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 26 09:06:41 nod2 Keepalived_vrrp[2822]: VRRP_Instance(VI_1) removing protocol VIPs.
Because I started nod2 first and nod1 second, nod2's log shows it first promoting itself to MASTER; then, once nod1 came up, nod2 received an advertisement with a higher priority than its own and demoted itself back to BACKUP.
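This take-back behaviour is VRRP preemption, which is keepalived's default. If you would rather the VIP stay put when the original master returns (avoiding a second brief interruption), keepalived supports a nopreempt option in the vrrp_instance block; note that it requires the initial state to be BACKUP on both nodes, with the first election then decided purely by priority. A sketch:

```
vrrp_instance VI_1 {
    state BACKUP        ! nopreempt requires initial state BACKUP on both nodes
    nopreempt           ! do not take MASTER back from a lower-priority node
    priority 150        ! priorities still differ; they decide the first election
    ! remaining settings as in the configurations shown earlier
}
```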
Now check whether the virtual IP is configured on nod1's interface. Keep in mind that a virtual IP added by keepalived cannot be seen with the ifconfig command:
[root@nod1 keepalived]# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:92:a7:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.200/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.222/32 scope global eth0
    inet6 fe80::20c:29ff:fe92:a73d/64 scope link
       valid_lft forever preferred_lft forever
Finally, verify that VRRP really provides high availability by stopping the keepalived process on nod1:
[root@nod1 keepalived]# killall keepalived
[root@nod1 keepalived]# ip add list
# The virtual IP has been released:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:92:a7:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.200/24 brd 192.168.0.255 scope global eth0
    inet6 fe80::20c:29ff:fe92:a73d/64 scope link
       valid_lft forever preferred_lft forever
[root@nod2 ~]# ip add list
# The virtual IP is now configured on nod2:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:c6:f7:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.201/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.222/32 scope global eth0
    inet6 fe80::20c:29ff:fec6:f77a/64 scope link
       valid_lft forever preferred_lft forever
This test confirms that keepalived provides high availability through VRRP.
2.3. keepalived.conf configuration blocks in detail
As mentioned above, keepalived.conf generally consists of a global configuration block, a vrrp instance block, and a virtual server block; blocks can nest further blocks, and some are optional. The virtual server block exists specifically for LVS: if you are not using keepalived to make LVS highly available, it can be left out. If you want keepalived to email the administrator on every state change (the transitions between MASTER and BACKUP), you can add the mail-notification settings. In short, keepalived.conf is very flexible; the commonly used settings are explained below.
Here is a worked example:
! Global configuration block
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server smtp.163.com    # the smtp server to use
    smtp_connect_timeout 30
    router_id NGINX_NUM1        # unique identifier for this host among those running vrrp
}
! vrrp script-check block, referenced later from the vrrp instance
vrrp_script chk_nginx {
    script "killall -0 nginx"   # checks whether an nginx process exists
    interval 1                  # interval between checks
    weight -5                   # if nginx is not running, subtract 5 from the instance's priority
    fall 2                      # two consecutive failed checks mark nginx as unavailable
    rise 1                      # one successful check marks nginx as available again
}
! vrrp instance block
vrrp_instance VI_1 {
    state MASTER                # MASTER or BACKUP must be upper case
    interface eth0
    virtual_router_id 10
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.0.222
    }
    track_script {
        chk_nginx               # references the chk_nginx block defined above
    }
    notify_master "/bin/sh /etc/keepalived/scripts/notify.sh master"    # action on becoming MASTER; here a script that emails the administrator
    notify_backup "/bin/sh /etc/keepalived/scripts/notify.sh backup"    # action on becoming BACKUP
    notify_fault "/bin/sh /etc/keepalived/scripts/notify.sh fault"
}
! LVS virtual server block
virtual_server 192.168.0.222 80 {
    delay_loop 6
    lb_algo rr                  # scheduling algorithm
    lb_kind DR                  # LVS forwarding model
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 192.168.0.202 80 {  # defines a real server
        weight 1
        HTTP_GET {              # layer-7 health check; use TCP_CHECK instead if the backend is not an http service
            url {
                path /
                status_code 200
            }
            connect_timeout 3       # timeout for each health check
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # delay before retrying after a failed check
        }
    }
    real_server 192.168.0.203 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
This configuration also shows that keepalived can provide high availability not only for LVS but also for services such as nginx. When paired with LVS, keepalived brings its own health-check machinery (which is exactly what the healthcheck child process is for); for anything other than LVS, high availability relies on the vrrp_script block and a suitable script.
3. Summary
That covers keepalived's software architecture, installation, and configuration file structure in brief. As this post shows, keepalived is a natural fit for LVS, because its built-in health checks on the real servers make up for exactly what LVS lacks. But keepalived is not limited to LVS: it works in any setting that needs high availability, and it is best suited to making load-balancing front ends highly available, such as nginx acting as a reverse proxy, or haproxy. Upcoming posts will walk through two examples, using keepalived to provide high availability for nginx and for LVS.