Percona MySQL 5.5.38 Dual-Master Replication & MMM Configuration

I. Overview

This setup removes the single point of failure of a lone MySQL master node and makes database failover easier.

II. How It Works

Dual-master replication is an extension of MySQL's master-slave replication: the two MySQL nodes are each other's master and slave, and both can serve reads and writes.

III. Test Environment

192.168.0.54     db54      CentOS 6.5 x64    Percona MySQL 5.5.38
192.168.0.108    db108     CentOS 6.5 x64    Percona MySQL 5.5.38

IV. Configuration Steps

1.    Install MySQL (the one-click installation script written earlier can be used).

 

2.    Check that binary logging (log_bin) is enabled on both DB servers

mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
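If the value shows OFF, binary logging must be enabled before replication can work. A minimal sketch of the change (assuming the configuration file is /etc/my.cnf):

# vim /etc/my.cnf
[mysqld]
log-bin = mysql-bin

# /etc/init.d/mysql restart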

 

3.    Set a unique server-id on each server and restart MySQL

On db54: server-id = 54

On db108: server-id = 108
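A minimal sketch of the corresponding change (assuming the configuration file is /etc/my.cnf), shown for db54:

# vim /etc/my.cnf
[mysqld]
server-id = 54    # use 108 on db108; the value must be unique per server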

 

Restart:

# /etc/init.d/mysql restart

 

4.    Open TCP port 3306 between the two servers

On db54:

-A INPUT -s 192.168.0.108/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT

On db108:

-A INPUT -s 192.168.0.54/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
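These rules can be appended to /etc/sysconfig/iptables and the firewall reloaded; a sketch assuming the standard CentOS 6 iptables service:

# vim /etc/sysconfig/iptables          # add the -A INPUT rule shown above
# service iptables restart
# iptables -L -n | grep 3306           # verify the rule is in place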

 

5.    db54 as master, db108 as slave

5.1  On the master db54, create the replication user for the slave

mysql> grant replication client,replication slave on *.* to repl@'192.168.0.108' identified by '123456';
mysql> flush privileges;

5.2  Clear the binary logs

mysql> reset master;

5.3  On the slave db108, point it at the master

mysql> change master to master_host='192.168.0.54',master_user='repl',master_password='123456';
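Because the master's binary logs were just reset, replication can start from the beginning of the log and the coordinates may be omitted. If the logs had not been reset, the coordinates reported by show master status on db54 would have to be given explicitly; a sketch with illustrative values:

mysql> change master to master_host='192.168.0.54',master_user='repl',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=107;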

 

5.4  Start the slave threads and check their status

mysql> start slave;
mysql> show slave status\G

If the following lines appear, replication started correctly:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

 

5.5  Test replication from db54 to db108

5.5.1       On the master db54, create the jibuqi database:

mysql> create database jibuqi;
mysql> show databases;

5.5.2       On the slave db108, check the result:

mysql> show databases;

The jibuqi database appears as expected.

 

6.    db108 as master, db54 as slave

6.1  Add the following to the [mysqld] section of db54's configuration file:

auto-increment-increment = 2    # total number of servers in the replication topology
auto-increment-offset = 1       # starting point for auto-increment values; must differ on each DB or primary keys will collide
replicate-do-db = jibuqi        # only this database is replicated; all others are ignored
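With these two settings the masters never generate the same auto-increment value: db54 (offset 1) produces 1, 3, 5, ... and db108 (offset 2) produces 2, 4, 6, .... A quick illustration (the table is hypothetical):

mysql> create table jibuqi.t (id int auto_increment primary key, v int);
mysql> insert into jibuqi.t (v) values (10);    -- id = 1 on db54; the same insert run on db108 would get id = 2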

 

Restart MySQL:

# /etc/init.d/mysql restart

 

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |      107 |              |                  |
+------------------+----------+--------------+------------------+

6.2  On db108, create the replication user:

mysql> grant replication client,replication slave on *.* to repl@'192.168.0.54' identified by '123456';
mysql> flush privileges;

Add the following to the [mysqld] section of db108's configuration file:

log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 2    # must differ from db54's offset
replicate-do-db = jibuqi

Restart MySQL:

# /etc/init.d/mysql restart

 

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000005 |      107 |              |                  |
+------------------+----------+--------------+------------------+

 

6.3  db54 and db108 each point to the other as their master:

On db108:

mysql> stop slave;
mysql> change master to master_host='192.168.0.54',master_user='repl',master_password='123456',master_log_file='mysql-bin.000003',master_log_pos=107;
mysql> start slave;

 

On db54:

mysql> change master to master_host='192.168.0.108',master_user='repl',master_password='123456',master_log_file='mysql-bin.000005',master_log_pos=107;
mysql> start slave;

 

6.4  Test:

On db54, import the table api_pedometeraccount into the jibuqi database and check that the corresponding table appears on db108 (result: OK).

On db108, import the table api_pedometerdevice into the jibuqi database and check that the corresponding table appears on db54 (result: OK).
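A sketch of such an import and check (the dump file name is hypothetical):

# mysql -uroot -p jibuqi < api_pedometeraccount.sql        # run on db54
# mysql -uroot -p -e "show tables in jibuqi;"              # run on db108 to verify the table arrived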

 

At this point, the dual-master replication setup is complete.

 

V. mysql-MMM (Master-Master Replication Manager for MySQL)

5.1 Introduction

MMM (Master-Master Replication Manager for MySQL) is a scalable set of scripts for monitoring, failover and management of MySQL master-master replication configurations (only one node is writable at any time). It can also load-balance reads across any number of slaves in a standard master-slave setup, so it can be used to run virtual IPs over a group of replicating servers, and it includes scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication failover solution; MMM supplies server failover and thus MySQL high availability. Besides floating IPs, MMM will automatically repoint the back-end slaves to the new master when the current master fails, with no manual change to the replication configuration. It is a relatively mature solution. For details see the official site: http://mysql-mmm.org

 

5.2 Architecture

a. Server list

Server    Hostname   IP             Server ID   MySQL version   OS
master1   db1        192.168.1.19   54          mysql 5.5       CentOS 6.5
master2   db2        192.168.1.20   108         mysql 5.5       CentOS 6.5

 

b. Virtual IP list

VIP             Role    Description
192.168.1.190   write   write VIP used by applications
192.168.1.201   read    read VIP used by applications
192.168.1.203   read    read VIP used by applications

 

5.3 Installing MMM (installed on db1 here; a dedicated server could also be used for it)

a. Install the required Perl modules

# yum install -y cpan
# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

The File::Basename and File::stat modules may fail to install unless Perl is upgraded to 5.12.2; MMM works without them. Net::ARP, however, must be installed, otherwise the VIPs will not come up.
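A quick check that Net::ARP is usable (a sketch; the command prints a confirmation if the module loads and dies with an error otherwise):

# perl -MNet::ARP -e 'print "Net::ARP OK\n"'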

 

b. Download and install mysql-mmm

# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz -O mysql-mmm-2.2.1.tar.gz
# tar -zxvf mysql-mmm-2.2.1.tar.gz
# cd mysql-mmm-2.2.1
# make; make install

 

c. mysql-mmm configuration

Edit the configuration files directly.

· Edit db1's configuration files:

mmm_agent.conf

# vim /etc/mysql-mmm/mmm_agent.conf

  include mmm_common.conf

  this db1

 

mmm_common.conf

# vim /etc/mysql-mmm/mmm_common.conf

active_master_role      writer

 

<host default>

   cluster_interface       eth0

   pid_path               /var/run/mysql-mmm/mmm_agentd.pid

   bin_path               /usr/libexec/mysql-mmm/

   replication_user        repl

   replication_password    123456

   agent_user              mmm_agent

   agent_password          mmm_agent

</host>

 

<host db1>

   ip      192.168.1.19

   mode    master

   peer    db2

</host>

 

<host db2>

   ip      192.168.1.20

   mode    master

   peer    db1

</host>

 

<role writer>

   hosts   db1, db2

   ips     192.168.1.190

   mode    exclusive    # exclusive mode: only one host can hold this role at any time

</role>

 

<role reader>

   hosts   db1, db2

   ips     192.168.1.201,192.168.1.203

   mode    balanced     # balanced mode: multiple hosts can hold this role at the same time

</role>

 

mmm_mon.conf

# vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

 

<monitor>

       ip                                             127.0.0.1

       pid_path                               /var/run/mmm_mond.pid

       bin_path                                /usr/lib/mysql-mmm/

       status_path                            /var/lib/misc/mmm_mond.status

       auto_set_online                5    # seconds to wait before a recovered host is automatically set online

       ping_ips                               192.168.1.19, 192.168.1.20

</monitor>

 

<host default>

       monitor_user                   mmm_monitor

       monitor_password                mmm_monitor

</host>

 

debug 0

 

· Edit db2's configuration files:

mmm_agent.conf

# vim /etc/mysql-mmm/mmm_agent.conf
  include mmm_common.conf
  this db2

 

The contents of mmm_common.conf are identical to db1's.
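One simple way to keep the two copies identical is to push db1's file to db2 (a sketch, assuming SSH access between the hosts):

# scp /etc/mysql-mmm/mmm_common.conf db2:/etc/mysql-mmm/mmm_common.conf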

 

d. Start the MMM daemons:

On db1, start the agent and the monitor:

# /etc/init.d/mysql-mmm-agent start

  Daemon bin: '/usr/sbin/mmm_agentd'
  Daemon pid: '/var/run/mmm_agentd.pid'
  Starting MMM Agent daemon... Ok

 

# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Base class package "Class::Singleton" is empty.
   (Perhaps you need to 'use' the module which defines that package first,
   or make that module available in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
 at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2.
Compilation failed in require at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
Compilation failed in require at /usr/sbin/mmm_mond line 28.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 28.
Failed

Starting mysql-mmm-monitor failed because Class::Singleton is not actually available; fix it by installing the module from the CPAN shell:

# perl -MCPAN -e shell

Terminal does not support AddHistory.

cpan shell -- CPAN exploration and modules installation (v1.9402)
Enter 'h' for help.

cpan[1]> Class::Singleton
Catching error: "Can't locate object method \"Singleton\" via package \"Class\" (perhaps you forgot to load \"Class\"?) at /usr/share/perl5/CPAN.pm line 375, <FIN> line 1.\cJ" at /usr/share/perl5/CPAN.pm line 391
         CPAN::shell() called at -e line 1

cpan[2]> Class
Unknown shell command 'Class'. Type ? for help.

 

cpan[3]> install Class::Singleton

CPAN: Storable loaded ok (v2.20)
Going to read '/root/.cpan/Metadata'
  Database was generated on Thu, 27 Nov 2014 08:53:16 GMT
CPAN: LWP::UserAgent loaded ok (v5.833)
CPAN: Time::HiRes loaded ok (v1.9726)
Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp47425'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225
Fetching with LWP:
 http://www.perl.org/CPAN/authors/01mailrc.txt.gz
Going to read '/root/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with LWP:
 http://www.perl.org/CPAN/modules/02packages.details.txt.gz
Going to read '/root/.cpan/sources/modules/02packages.details.txt.gz'
  Database was generated on Fri, 28 Nov 2014 08:29:02 GMT
..............
  New CPAN.pm version (v2.05) available.
  [Currently running version is v1.9402]
  You might want to try
    install CPAN
    reload cpan
  to both upgrade CPAN.pm and run the new version without leaving
  the current session.

..............................................................DONE
Fetching with LWP:
 http://www.perl.org/CPAN/modules/03modlist.data.gz
Going to read '/root/.cpan/sources/modules/03modlist.data.gz'
DONE
Going to write /root/.cpan/Metadata
Running install for module 'Class::Singleton'
CPAN: Data::Dumper loaded ok (v2.124)
'YAML' not installed, falling back to Data::Dumper and Storable to read prefs '/root/.cpan/prefs'
Running make for S/SH/SHAY/Class-Singleton-1.5.tar.gz
CPAN: Digest::SHA loaded ok (v5.47)
Checksum for /root/.cpan/sources/authors/id/S/SH/SHAY/Class-Singleton-1.5.tar.gz ok
Scanning cache /root/.cpan/build for sizes
............................................................................DONE

Class-Singleton-1.5/

Class-Singleton-1.5/Changes

Class-Singleton-1.5/lib/

Class-Singleton-1.5/lib/Class/

Class-Singleton-1.5/lib/Class/Singleton.pm

Class-Singleton-1.5/Makefile.PL

Class-Singleton-1.5/MANIFEST

Class-Singleton-1.5/META.yml

Class-Singleton-1.5/README

Class-Singleton-1.5/t/

Class-Singleton-1.5/t/singleton.t

CPAN: File::Temp loaded ok (v0.22)

 

  CPAN.pm: Going to build S/SH/SHAY/Class-Singleton-1.5.tar.gz

 

Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Class::Singleton
Writing MYMETA.yml and MYMETA.json
Could not read '/root/.cpan/build/Class-Singleton-1.5-42kiLS/MYMETA.yml'. Falling back to other methods to determine prerequisites
cp lib/Class/Singleton.pm blib/lib/Class/Singleton.pm
Manifying 1 pod document
  SHAY/Class-Singleton-1.5.tar.gz
  /usr/bin/make -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make test
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t

t/singleton.t.. ok    

All tests successful.
Files=1, Tests=29,  0 wallclock secs ( 0.01 usr  0.01 sys +  0.01 cusr  0.00 csys =  0.03 CPU)
Result: PASS
  SHAY/Class-Singleton-1.5.tar.gz
  /usr/bin/make test -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make install
Prepending /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/arch /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/lib to PERL5LIB for 'install'
Manifying 1 pod document
Installing /usr/local/share/perl5/Class/Singleton.pm
Installing /usr/local/share/man/man3/Class::Singleton.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
  SHAY/Class-Singleton-1.5.tar.gz
  /usr/bin/make install  -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state

cpan[4]> exit
Terminal does not support GetHistory.
Lockfile removed.

 

# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

 

On db2, start the agent:

# /etc/init.d/mysql-mmm-agent start

  Daemon bin: '/usr/sbin/mmm_agentd'
  Daemon pid: '/var/run/mmm_agentd.pid'
  Starting MMM Agent daemon... Ok

 

e. Adjust the firewalls and open the MMM ports as required (details omitted).

f. On db1 and db2, create the agent and monitor users that MMM uses to check MySQL status.

mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.20' identified by 'mmm_agent';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.19' identified by 'mmm_agent';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'localhost' identified by 'mmm_agent';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'127.0.0.1' identified by 'mmm_agent';
mysql> grant super,replication client,process on *.* to 'mmm_monitor'@'192.168.1.20' identified by 'mmm_monitor';
mysql> grant super,replication client,process on *.* to 'mmm_monitor'@'192.168.1.19' identified by 'mmm_monitor';
mysql> grant super,replication client,process on *.* to 'mmm_monitor'@'localhost' identified by 'mmm_monitor';
mysql> grant super,replication client,process on *.* to 'mmm_monitor'@'127.0.0.1' identified by 'mmm_monitor';
mysql> flush privileges;
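A quick connectivity check from the monitor host (a sketch; both logins should succeed and return 1):

# mysql -h 192.168.1.19 -u mmm_monitor -pmmm_monitor -e "select 1;"
# mysql -h 192.168.1.20 -u mmm_monitor -pmmm_monitor -e "select 1;"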

 

 

g. Check the status (on the machine running the monitor):

# mmm_control ping

OK: Pinged successfully!

 

# mmm_control show

 db1(192.168.1.19) master/ONLINE. Roles: reader(192.168.1.203), writer(192.168.1.190)
 db2(192.168.1.20) master/ONLINE. Roles: reader(192.168.1.201)

 

# mmm_control checks

db2 ping         [last change: 2014/12/01 13:49:47]  OK
db2 mysql        [last change: 2014/12/01 13:49:47]  OK
db2 rep_threads  [last change: 2014/12/01 13:49:47]  OK
db2 rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null
db1 ping         [last change: 2014/12/01 13:49:47]  OK
db1 mysql        [last change: 2014/12/01 13:49:47]  OK
db1 rep_threads  [last change: 2014/12/01 13:52:19]  OK
db1 rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null

 

# mmm_control help

Valid commands are:

   help                             - show this message

   ping                             - ping monitor

   show                             - show status

   checks [<host>|all [<check>|all]] - show checks status

   set_online <host>                - set host <host> online

   set_offline <host>               - set host <host> offline

   mode                             - print current mode.

   set_active                       - switch into active mode.

   set_manual                       - switch into manual mode.

   set_passive                      - switch into passive mode.

   move_role [--force] <role> <host> - move exclusive role <role> to host <host>
                                        (Only use --force if you know what you are doing!)
   set_ip <ip> <host>                - set role with ip <ip> to host <host>

 

 

h. Test failover

On DB1, the current state is:

# mmm_control show

  db1(192.168.1.19) master/ONLINE. Roles:reader(192.168.1.203), writer(192.168.1.190)

  db2(192.168.1.20) master/ONLINE. Roles:reader(192.168.1.201)

 

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.203/32 scope global eth0
    inet 192.168.1.190/32 scope global eth0
    inet6 fe80::20c:29ff:fe93:d250/64 scope link
       valid_lft forever preferred_lft forever

 

On DB2:

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.201/32 scope global eth0
    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link
       valid_lft forever preferred_lft forever

 

Stop MySQL on DB1 and check whether MMM moves all of its VIPs over to DB2.

On DB1:

# /etc/init.d/mysql stop
Shutting down MySQL (Percona Server)..... SUCCESS!

 

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0
    inet6 fe80::20c:29ff:fe93:d250/64 scope link
       valid_lft forever preferred_lft forever

 

 

On DB2:

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.201/32 scope global eth0
    inet 192.168.1.203/32 scope global eth0
    inet 192.168.1.190/32 scope global eth0
    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link
       valid_lft forever preferred_lft forever

 

The VIPs were moved to DB2 successfully; MMM failover works.
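To bring DB1 back into the cluster after the test (a sketch: with auto_set_online configured, the node normally returns on its own once MySQL is up; otherwise it can be set online manually):

# /etc/init.d/mysql start         # on DB1
# mmm_control show                # on the monitor host
# mmm_control set_online db1      # only needed if db1 stays offline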

 

To add read/write splitting for MySQL on top of this setup, mysql-proxy can be used.
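A minimal sketch of such a setup (assuming mysql-proxy 0.8.x with its bundled rw-splitting.lua script; the port and script path are illustrative), sending writes to the write VIP and reads to the read VIPs:

# mysql-proxy --proxy-address=0.0.0.0:4040 \
    --proxy-backend-addresses=192.168.1.190:3306 \
    --proxy-read-only-backend-addresses=192.168.1.201:3306 \
    --proxy-read-only-backend-addresses=192.168.1.203:3306 \
    --proxy-lua-script=/usr/lib/mysql-proxy/lua/rw-splitting.lua

Applications would then connect to the proxy on port 4040 instead of to MySQL directly.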
