First, edit the configuration files and add a few commonly used options.
On the master server:
vim /etc/my.cnf.d/server.cnf
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=1
log-bin=master-log
Then configure the slave server:
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=11
relay_log=relay-log
read_only=ON
Note: the slave's MySQL version may be newer than the master's, but not the other way around.
Now the service can be started on both nodes (systemctl start mariadb).
On the master, grant an account that the slaves will use to pull the binary log:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'192.168.31.%' IDENTIFIED BY 'replpass';
Query OK, 0 rows affected, 1 warning (0.04 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
Then check the current binary log coordinates; the slave will replicate from this position:
mysql> SHOW MASTER STATUS;
Then, on the slave, point replication at the master:
CHANGE MASTER TO MASTER_HOST='192.168.31.203',MASTER_USER='repluser',MASTER_PASSWORD='replpass',
MASTER_PORT=3306,MASTER_LOG_FILE='master-log.000002',MASTER_LOG_POS=320;
Check the slave's status:
SHOW SLAVE STATUS;
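Once replication is running, both Slave_IO_Running and Slave_SQL_Running in that output should read Yes. A minimal Python sketch of a health check (the sample output below is assumed, not captured from this setup):

```python
import re

def slave_healthy(status_text: str) -> bool:
    """Return True when both replication threads report 'Yes'."""
    fields = dict(re.findall(r"(\w+):\s*(\S+)", status_text))
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")

# Assumed sample of `SHOW SLAVE STATUS\G` output:
sample = """
          Slave_IO_State: Waiting for master to send event
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
   Seconds_Behind_Master: 0
"""
print(slave_healthy(sample))  # True
```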
If everything looks good, we can start the replication threads.
The slave runs two threads:
IO_THREAD | SQL_THREAD
One pulls the binary log from the master (IO_THREAD); the other replays events from the relay log into the database (SQL_THREAD). Here we start both with START SLAVE;
But startup reports errors:
[root@lvq-7-4-2 ~]# tail /var/log/mariadb/mariadb.log
181214 16:41:52 [ERROR] Failed to open the relay log './mariadb-relay-bin.000003' (relay_log_pos 408)
181214 16:41:52 [ERROR] Could not find target log during relay log initialization
181214 16:41:52 [ERROR] Failed to initialize the master info structure
181214 16:41:52 [Note] Event Scheduler: Loaded 0 events
181214 16:41:52 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.56-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
181214 16:42:16 [ERROR] Failed to open the relay log './mariadb-relay-bin.000003' (relay_log_pos 408)
181214 16:42:16 [ERROR] Could not find target log during relay log initialization
181214 16:48:39 [ERROR] Failed to open the relay log './mariadb-relay-bin.000003' (relay_log_pos 408)
181214 16:48:39 [ERROR] Could not find target log during relay log initialization
Searching online turned up the cause: the same setup commands had been entered several times on the slave, leaving stale replication files in the data directory that interfere with initialization. Remove master.info, the relay-bin files, relay-log.info and the relay-log index under /var/lib/mysql/:
rm -f ./master.info
rm -f ./*relay*
Next we demonstrate read/write splitting with the ProxySQL middleware: one proxy host, one master, and two slaves. The newly added slave has no data yet, so first back up the master and copy it over (--master-data=2 records the binary log coordinates as a comment in the dump):
mysqldump -uroot --all-databases -R -E --triggers -x --master-data=2 > all.sql
scp all.sql [email protected]:/root
systemctl start mariadb
mysql < all.sql
less all.sql #check which position to start replicating from
CHANGE MASTER TO MASTER_HOST='192.168.31.203',MASTER_USER='repluser',MASTER_PASSWORD='replpass',MASTER_LOG_FILE='master-log.000001',MASTER_LOG_POS=732;
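The coordinates in that CHANGE MASTER statement come from the commented-out line that --master-data=2 writes near the top of the dump. A small Python sketch (the sample line is assumed to match mysqldump's format) that pulls them out:

```python
import re

def master_coords(dump_line: str):
    """Extract (log_file, log_pos) from a --master-data CHANGE MASTER line."""
    m = re.search(r"MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)", dump_line)
    if m is None:
        raise ValueError("no CHANGE MASTER coordinates found")
    return m.group(1), int(m.group(2))

# Assumed comment line as written by mysqldump --master-data=2:
line = "-- CHANGE MASTER TO MASTER_LOG_FILE='master-log.000001', MASTER_LOG_POS=732;"
print(master_coords(line))  # ('master-log.000001', 732)
```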
Next, install ProxySQL on the read/write-splitting host.
First grant ProxySQL an account on the backend databases, since ProxySQL is itself a client of the backend MySQL servers. Grants issued on the master are replicated to the slaves:
GRANT ALL ON *.* TO 'myadmin'@'192.168.31.%' IDENTIFIED BY 'mypass';
FLUSH PRIVILEGES;
Configuration file (/etc/proxysql.cnf):
datadir="/var/lib/proxysql"
admin_variables=
{
admin_credentials="admin:admin"
mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
}
mysql_variables=
{
threads=4
max_connections=2048
default_query_delay=0
default_query_timeout=36000000
have_compress=true
poll_timeout=2000
interfaces="0.0.0.0:3306;/tmp/mysql.sock"
default_schema="information_schema"
stacksize=1048576
server_version="5.5.30"
connect_timeout_server=3000
monitor_history=600000
monitor_connect_interval=60000
monitor_ping_interval=10000
monitor_read_only_interval=1500
monitor_read_only_timeout=500
ping_interval_server=120000
ping_timeout_server=500
commands_stats=true
sessions_sort=true
connect_retries_on_failure=10
}
mysql_servers =
(
{
address = "192.168.31.203" # no default, required. If port is 0, address is interpreted as a Unix domain socket
port = 3306 # no default, required. If port is 0, address is interpreted as a Unix domain socket
hostgroup = 0 # no default, required
weight = 1 # default: 1
compression = 0 # default: 0
},
{
address = "192.168.31.201"
port = 3306
hostgroup = 1
status = "ONLINE" # default: ONLINE
weight = 1 # default: 1
compression = 0 # default: 0
},
{
address = "192.168.31.204"
port = 3306
hostgroup = 1
status = "ONLINE" # default: ONLINE
weight = 1 # default: 1
compression = 0 # default: 0
}
)
mysql_users:
(
{
username = "myadmin"
password = "mypass"
default_hostgroup = 0
max_connections=1000
default_schema="mydb"
active = 1
}
)
mysql_query_rules:
(
)
scheduler=
(
)
mysql_replication_hostgroups=
(
{
writer_hostgroup=0
reader_hostgroup=1
}
)
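The mysql_query_rules section above is left empty, so routing relies entirely on the user's default_hostgroup plus the read_only monitoring declared in mysql_replication_hostgroups. A common sketch for explicit read/write splitting (an assumption, not part of the original setup) sends plain SELECTs to the reader hostgroup while keeping SELECT ... FOR UPDATE on the writer:

```
mysql_query_rules:
(
    {
        rule_id=1
        active=1
        match_pattern="^SELECT .* FOR UPDATE$"
        destination_hostgroup=0
        apply=1
    },
    {
        rule_id=2
        active=1
        match_pattern="^SELECT"
        destination_hostgroup=1
        apply=1
    }
)
```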
Start the service, then connect to the database through the proxy:
mysql -h192.168.31.200 -umyadmin -pmypass
Now test it: create a table; if it appears on both the master and the slaves, the write was routed to the master.
mysql> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| tbl1 |
| tbl2 |
| tbl3 |
| tbl5 |
| tbl7 |
+----------------+
5 rows in set (0.01 sec)
mysql> CREATE TABLE tbl8(id INT)
-> ;
Query OK, 0 rows affected (0.16 sec)
On the master:
MariaDB [mydb]> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| tbl1 |
| tbl2 |
| tbl3 |
| tbl5 |
| tbl7 |
| tbl8 |
+----------------+
On the slave:
MariaDB [mydb]> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| tbl1 |
| tbl2 |
| tbl3 |
| tbl5 |
| tbl7 |
| tbl8 |
+----------------+
6 rows in set (0.00 sec)
Running SHOW PROCESSLIST\G on a backend node shows how many connections the proxy holds there.
ProxySQL also has an admin interface, reachable with mysql -uadmin -padmin -hlocalhost -S /tmp/proxysql_admin.sock
ProxySQL supports dynamic configuration by modifying tables in its admin databases; the four databases all expose the same set of tables:
mysql> SHOW TABLES;
+--------------------------------------+
| tables |
+--------------------------------------+
| global_variables |
| mysql_collations |
| mysql_query_rules |
| mysql_replication_hostgroups |
| mysql_servers |
| mysql_users |
| runtime_global_variables |
| runtime_mysql_query_rules |
| runtime_mysql_replication_hostgroups |
| runtime_mysql_servers |
| runtime_mysql_users |
| runtime_scheduler |
| scheduler |
+--------------------------------------+
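Edits made through the admin interface only touch the in-memory configuration; they take effect after being loaded to runtime, and must be saved to disk to survive a restart. The standard ProxySQL admin command sequence, per table family:

```sql
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```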
But now we have to face a problem: what if the backend master goes down? Even if a slave is promoted to master, its data may be inconsistent with the other slaves. This is where MHA comes in.
MHA's manager runs on a separate server, and every master/slave server runs a node component that the manager drives. When the master goes down, before promoting the most up-to-date slave, MHA pulls together and merges the latest events from all nodes to avoid inconsistency. It also requires passwordless SSH among all four hosts.
Here we simply reuse the ProxySQL host as the MHA manager.
First modify each node's MySQL configuration: since any node may become a master or a slave, all of them must enable both binary logging and relay logging.
Add to the master's configuration:
relay_log=relay-log
Add to the slaves' configuration:
log_bin=master-log
relay_log_purge=0
read_only=1
Next, configure SSH.
On the manager node:
ssh-keygen -t rsa -P ''
Copy the key to the local host first, then to the other hosts, so every host uses the same key pair:
ssh-copy-id -i .ssh/id_rsa.pub [email protected]
scp -p .ssh/authorized_keys .ssh/id_rsa{,.pub} [email protected]:/root/.ssh/
scp -p .ssh/authorized_keys .ssh/id_rsa{,.pub} [email protected]:/root/.ssh/
scp -p .ssh/authorized_keys .ssh/id_rsa{,.pub} [email protected]:/root/.ssh/
Then install on the manager:
mha4mysql-manager-0.56-0.el6.noarch.rpm
mha4mysql-node-0.56-0.el6.noarch.rpm
And on each node:
mha4mysql-node-0.56-0.el6.noarch.rpm
The global configuration file defaults to /etc/masterha_default.cnf, which does not exist by default.
Per-application configuration can also supply the default settings for each server; the path of each application's configuration file is up to you. Here we use /etc/masterha/app1.cnf.
First create a user on the master for the manager to use (the ProxySQL account would also work):
MariaDB [(none)]> GRANT ALL ON *.* TO 'mhaadmin'@'192.168.31.%' IDENTIFIED BY 'mhapass';
MariaDB [(none)]> FLUSH PRIVILEGES;
Configuration file /etc/masterha/app1.cnf:
[server default]
user=mhaadmin
password=mhapass
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1
[server1]
hostname=192.168.31.201
candidate_master=1
[server2]
hostname=192.168.31.203
candidate_master=1
[server3]
hostname=192.168.31.204
candidate_master=1
Next, test that SSH communication works: masterha_check_ssh --conf=/etc/masterha/app1.cnf
Then test the health of the replication setup: masterha_check_repl --conf=/etc/masterha/app1.cnf
It reports an error:
[error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln622] Master 192.168.31.200:3306 from which slave 192.168.31.203(192.168.31.203:3306)
replicates is not defined in the configuration file!
The cause, found online: the new master still carried slave state pointing at the old master, so MHA treated the new master as a slave, and that slave state pointed at an old master that was already down, hence the error. The fix is to clear the slave information on the new master:
STOP SLAVE;
RESET SLAVE ALL;
Another error:
User repluser does not exist or does not have REPLICATION SLAVE privilege!
Other slaves can not start replication from this host.
The likely cause is that repluser was created before replication was set up and so never reached the slaves; create a new account:
GRANT ALL ON *.* TO 'repladmin'@'192.168.31.%' IDENTIFIED BY 'replpass';
FLUSH PRIVILEGES;
vim /etc/masterha/app1.cnf # update repl_user and repl_password to the new account
The check now succeeds:
Sat Dec 15 02:36:48 2018 - [info] Checking replication health on 192.168.31.201..
Sat Dec 15 02:36:48 2018 - [info] ok.
Sat Dec 15 02:36:48 2018 - [info] Checking replication health on 192.168.31.204..
Sat Dec 15 02:36:48 2018 - [info] ok.
Sat Dec 15 02:36:48 2018 - [warning] master_ip_failover_script is not defined.
Sat Dec 15 02:36:48 2018 - [warning] shutdown_script is not defined.
Sat Dec 15 02:36:48 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Now run MHA:
nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /data/masterha/app1/manager.log &
Check its status:
[root@lvq-node1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:21503) is running(0:PING_OK), master:192.168.31.203
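A monitoring script can key off that output; a minimal Python sketch (the sample line is taken from the run above) that extracts the health flag and the current master:

```python
import re

def mha_status(line: str):
    """Parse masterha_check_status output into (healthy, master_ip)."""
    m = re.search(r"is running\((\d+):(\w+)\), master:(\S+)", line)
    if m is None:
        return False, None
    return m.group(2) == "PING_OK", m.group(3)

line = "app1 (pid:21503) is running(0:PING_OK), master:192.168.31.203"
print(mha_status(line))  # (True, '192.168.31.203')
```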
To stop it:
masterha_stop --conf=/etc/masterha/app1.cnf
Once a failover occurs and a slave has been promoted to master, to bring the repaired old master back in:
1. Take a full backup of the current master with mysqldump.
2. Restore the old master from that backup.
3. Configure it as a slave pointing at the new master.