Contents
Introduction to PXC
PXC (Percona XtraDB Cluster) is an open-source high-availability solution for MySQL. An overview:
1. Overview
- Definition: PXC is a MySQL cluster with real-time synchronous replication. It broadcasts write sets and certifies transactions, which lets multiple nodes commit concurrently and rolls back conflicting transactions.
- Foundation: PXC is built on the open-source Galera Cluster and integrates Percona Server and XtraBackup.
- Goal: strong data consistency plus high availability.
2. Features and advantages
- Strong consistency: PXC is strict about data consistency; a transaction either commits on every node or on none.
- No replication lag: unlike traditional master-slave replication, PXC has essentially no replication delay; synchronization is close to real time.
- Multi-master replication: writes are accepted on any node, giving true multi-master operation.
- Failover: because every node accepts writes, failing over after a node outage is straightforward.
- Automatic node provisioning: when a node is added or returns from maintenance, Galera Cluster automatically pulls data from an online node, keeping the cluster consistent.
3. How it works
- A client starts a transaction, which executes locally; at commit time the node broadcasts the resulting replication write set.
- The write set is assigned a global transaction ID and sent to the other nodes.
- Each node certifies the write set against concurrent transactions: if there is no conflict, it is applied and committed; otherwise the transaction is rolled back.
4. Ports
- 3306: the client-facing MySQL service port.
- 4444: SST (State Snapshot Transfer, full data copy) requests.
- 4567: group communication between cluster members.
- 4568: IST (Incremental State Transfer, incremental data copy).
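The environment later in this article simply disables the firewall. If you keep firewalld running instead, a sketch like the following (assuming the default zone) would open the four ports listed above on every node:

```shell
# Open the PXC ports (only needed if firewalld stays enabled)
firewall-cmd --permanent --add-port=3306/tcp   # MySQL client traffic
firewall-cmd --permanent --add-port=4444/tcp   # SST (full state transfer)
firewall-cmd --permanent --add-port=4567/tcp   # Galera group communication
firewall-cmd --permanent --add-port=4568/tcp   # IST (incremental state transfer)
firewall-cmd --reload
```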
5. Limitations
- Storage engine: replication currently supports only InnoDB; changes made through other storage engines are not replicated.
- Write throughput: overall write performance is bounded by the slowest node in the cluster.
6. Use cases
- PXC suits large, write-heavy MySQL workloads and environments that need both strong consistency and high performance.
7. Comparison with other solutions
- Compared with MySQL MHA (Master High Availability), PXC is a more capable and flexible high-availability solution, supporting heavier loads and higher transaction throughput.
8. Node count
Keep the cluster between 3 and 8 nodes.
Three is the minimum because it prevents split-brain: with only two nodes, a network partition leaves neither side with a majority, so neither can safely keep serving. When split-brain occurs, the node rejects everything; any statement you issue returns "unknown command".
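The majority arithmetic behind the three-node minimum can be sketched in shell: after a partition, a side keeps quorum only if it holds a strict majority, which is impossible for either side of a two-node cluster:

```shell
# For n nodes, a partition survives only with floor(n/2)+1 members.
# With n=2 the required majority is 2, so losing the peer always loses quorum.
for n in 2 3 5; do
  echo "$n nodes: majority needed = $(( n / 2 + 1 ))"
done
```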
In short, PXC is a powerful, flexible, and reliable MySQL high-availability cluster solution for environments with strict requirements on database performance and data consistency.
Drawbacks of MHA
- Requires a one-master, multi-slave topology.
- Clients must connect through a VIP, and the VIP must sit on the current master.
- Re-adding a failed server to the cluster is manual: you must restore data consistency yourself, then configure the server as a slave of the current master and rejoin it.
Installation and configuration
Environment: CentOS 7 with the firewall and SELinux disabled, time synchronized, and hostname resolution configured on all nodes.
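A sketch of that preparation, run on every node. The IP-to-hostname mapping for the first two hosts is inferred from the prompts later in this article; the third hostname, k8s-worker02, is assumed:

```shell
# Disable the firewall and SELinux (lab setup; not for hardened production)
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Time synchronization
systemctl enable --now chronyd

# Hostname resolution for all three nodes (k8s-worker02 is a hypothetical name)
cat >> /etc/hosts <<'EOF'
192.168.126.21 k8s-master01
192.168.126.22 k8s-worker01
192.168.126.23 k8s-worker02
EOF
```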
Software versions
- percona-xtrabackup-24-2.4.29
- Percona-XtraDB-Cluster-57
- qpress-11-1
Preliminary preparation
A PXC deployment installs its own MySQL service, so uninstall any existing MySQL (or MariaDB) before proceeding:
yum -y remove mari*
If the data matters, back it up first; then clean out the old data and configuration:
rm -rf /var/lib/mysql/
rm -f /etc/my.cnf*
Download and install the compression tool qpress and the PXC packages. The qpress package has been removed from the CentOS 7 repositories, so it has to be fetched elsewhere (for example, the free qpress-11-1.el7.x86_64.rpm download on the CSDN library).
rpm -ivh qpress-11-1.el7.x86_64.rpm
yum localinstall -y percona-xtrabackup-24-2.4.29-1.el7.x86_64.rpm
yum localinstall -y Percona-XtraDB-Cluster*.rpm
Install the same packages on the other nodes:
scp -r pxc 192.168.126.22:/root/software/
cd /root/software/pxc/
yum localinstall -y ./*
Installed:
Percona-XtraDB-Cluster-57.x86_64 0:5.7.32-31.47.1.el7 Percona-XtraDB-Cluster-57-debuginfo.x86_64 0:5.7.32-31.47.1.el7
Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.32-31.47.1.el7 Percona-XtraDB-Cluster-devel-57.x86_64 0:5.7.32-31.47.1.el7
Percona-XtraDB-Cluster-full-57.x86_64 0:5.7.32-31.47.1.el7 Percona-XtraDB-Cluster-garbd-57.x86_64 0:5.7.32-31.47.1.el7
Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.32-31.47.1.el7 Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.32-31.47.1.el7
Percona-XtraDB-Cluster-shared-compat-57.x86_64 0:5.7.32-31.47.1.el7 Percona-XtraDB-Cluster-test-57.x86_64 0:5.7.32-31.47.1.el7
percona-xtrabackup-24.x86_64 0:2.4.29-1.el7 qpress.x86_64 0:11-1.el7
Installed as dependencies:
keyutils-libs-devel.x86_64 0:1.5.8-3.el7 krb5-devel.x86_64 0:1.15.1-55.el7_9 libcom_err-devel.x86_64 0:1.42.9-19.el7
libev.x86_64 0:4.15-7.el7 libselinux-devel.x86_64 0:2.5-15.el7 libsepol-devel.x86_64 0:2.5-10.el7
libverto-devel.x86_64 0:0.2.5-4.el7 openssl-devel.x86_64 1:1.0.2k-26.el7_9 pcre-devel.x86_64 0:8.32-17.el7
perl-DBD-MySQL.x86_64 0:4.023-6.el7 perl-Digest.noarch 0:1.17-245.el7 perl-Digest-MD5.x86_64 0:2.52-3.el7
perl-Env.noarch 0:1.04-2.el7 perl-Test-Harness.noarch 0:3.28-3.el7 perl-Test-Simple.noarch 0:0.98-243.el7
zlib-devel.x86_64 0:1.2.7-21.el7_9
Complete!
Cluster configuration
[root@k8s-master01 software]# cat /etc/my.cnf
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
# Please make any edits and changes to the appropriate sectional files
# included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/
[root@k8s-master01 software]# ls /etc/percona-xtradb-cluster.conf.d/
mysqld.cnf mysqld_safe.cnf wsrep.cnf
- mysqld.cnf: the general MySQL server configuration options.
- wsrep.cnf: all settings related to PXC replication and synchronization, covering node communication, failure detection, data transfer, and so on.
Give every host a distinct server-id:
vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
server-id=1   # node 1
server-id=2   # node 2
server-id=3   # node 3
# Allow remote access; without this, remote clients cannot connect
bind-address = 0.0.0.0
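Editing by hand works fine; a scripted alternative could use sed, assuming each node knows its own id. The sketch below runs against a copy in /tmp so it is safe to try:

```shell
# Demonstrate the edit on a copy; on a real node the target would be
# /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
cnf=/tmp/mysqld.cnf
printf '[mysqld]\nserver-id=1\nbind-address = 0.0.0.0\n' > "$cnf"

# Set this node's id (node 2 in this example)
sed -i 's/^server-id=.*/server-id=2/' "$cnf"
grep '^server-id' "$cnf"   # prints: server-id=2
```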
On each node, only the node IP and node name need changing:
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.126.21,192.168.126.22,192.168.126.23
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave thread to use
wsrep_slave_threads= 8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address; change this on each node
wsrep_node_address=192.168.126.22
# Cluster name
wsrep_cluster_name=pxc-cluster
#If wsrep_node_name is not specified, then system hostname will be used
# Set the node name to this host's hostname
wsrep_node_name=k8s-worker01
#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method; keep the same credentials on every node (the sstuser account is created on the first node)
wsrep_sst_auth="sstuser:s3cretPass"
Commands for managing mysql@bootstrap.service:
systemctl start mysql@bootstrap.service
systemctl stop mysql@bootstrap.service
systemctl restart mysql@bootstrap.service
systemctl status mysql@bootstrap.service
Start the first node, fetch the temporary root password from the log, and change it.
Start the service:
[root@k8s-master01 software]# systemctl start mysql@bootstrap.service
[root@k8s-master01 software]# cat /var/log/mysqld.log | grep "password"
2024-06-18T05:45:37.062216Z 1 [Note] A temporary password is generated for root@localhost: h8sXpispQ7+8
mysql -uroot -ph8sXpispQ7+8
ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
FLUSH PRIVILEGES;
The other nodes join the cluster simply by starting mysql:
systemctl start mysql
Once started, a node automatically syncs the data from the bootstrap node; the root password is 123456.
Check the status on every node:
mysql -u root -p123456
mysql> show status Like "%wsrep%";
+----------------------------------+-------------------------------------------------------------+
| Variable_name | Value |
+----------------------------------+-------------------------------------------------------------+
| wsrep_local_state_uuid | 01f6d166-2d36-11ef-9781-f3dcdd213fc6 |
| wsrep_protocol_version | 9 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 13 |
| wsrep_received_bytes | 1570 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.076923 |
| wsrep_local_cached_downto | 0 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 173, 173 ] |
| wsrep_flow_control_interval_low | 173 |
| wsrep_flow_control_interval_high | 173 |
| wsrep_flow_control_status | OFF |
| wsrep_flow_control_active | false |
| wsrep_flow_control_requested | false |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 1592 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.126.21:3306,192.168.126.22:3306,192.168.126.23:3306 |
| wsrep_cluster_weight | 3 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | c0f73742-2d3c-11ef-95cd-9b4a1679ce5b |
| wsrep_gmcast_segment | 0 |
| wsrep_cluster_conf_id | 15 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 01f6d166-2d36-11ef-9781-f3dcdd213fc6 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.47(raf7cd63) |
| wsrep_ready | ON |
+----------------------------------+-------------------------------------------------------------+
74 rows in set (0.00 sec)
| Parameter | Description | Value |
|---|---|---|
| wsrep_local_state_comment | State comment for the local node | Synced |
| wsrep_cluster_size | Number of nodes in the cluster | 3 |
| wsrep_cluster_status | Cluster status (Primary means reads and writes are served) | Primary |
| wsrep_connected | Whether the node is connected to the cluster | ON |
| wsrep_ready | Whether the node is ready to serve requests | ON |
| wsrep_local_index | Index of the local node within the cluster | 0 |
| wsrep_cluster_state_uuid | Cluster state UUID, identifying the cluster's current state | 01f6d166-2d36-11ef-9781-f3dcdd213fc6 |
| wsrep_incoming_addresses | Addresses and ports of the other cluster members | 192.168.126.21:3306,192.168.126.22:3306,192.168.126.23:3306 |
| wsrep_gcomm_uuid | Group-communication UUID identifying the cluster's communication layer | c0f73742-2d3c-11ef-95cd-9b4a1679ce5b |
| wsrep_cluster_conf_id | Cluster configuration ID; increments on membership changes (nodes joining or leaving) | 15 |
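The key values above can also be checked from a script. A minimal sketch, using captured sample text in place of a live `mysql -e 'SHOW STATUS LIKE "wsrep%"'` call:

```shell
# Space-separated sample standing in for live mysql output
sample='wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_ready ON'

# Pull out the three health indicators
size=$(printf '%s\n'  "$sample" | awk '$1=="wsrep_cluster_size"{print $2}')
state=$(printf '%s\n' "$sample" | awk '$1=="wsrep_cluster_status"{print $2}')
ready=$(printf '%s\n' "$sample" | awk '$1=="wsrep_ready"{print $2}')

if [ "$size" -ge 3 ] && [ "$state" = "Primary" ] && [ "$ready" = "ON" ]; then
  echo "cluster healthy: $size nodes"
else
  echo "cluster degraded"
fi
```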
Replication test
Create an application account on one node; the grant is replicated, so all nodes accept it:
GRANT all privileges ON *.* TO 'tpxc'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
Databases and tables you create are replicated automatically.
Node start and stop order
Normal shutdown
Stop the mysql service node by node; the last node stopped is the bootstrap node for the next startup:
systemctl start mysql@bootstrap.service
The other nodes then join:
systemctl start mysql
After a hard power-off or another failure that leaves the PXC cluster unable to start
On every node, /var/lib/mysql/grastate.dat shows
safe_to_bootstrap: 0
Pick one node and set the value to 1:
vim /var/lib/mysql/grastate.dat
safe_to_bootstrap: 1
systemctl start mysql@bootstrap.service
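When the nodes crashed at different points, the node with the highest seqno in grastate.dat has the most recent data and should be the one you bootstrap from. A sketch of the comparison, run here against sample files under /tmp standing in for each node's /var/lib/mysql/grastate.dat:

```shell
# Sample grastate.dat files standing in for three crashed nodes
mkdir -p /tmp/pxc-demo
printf 'seqno: 12\nsafe_to_bootstrap: 0\n' > /tmp/pxc-demo/node1.dat
printf 'seqno: 15\nsafe_to_bootstrap: 0\n' > /tmp/pxc-demo/node2.dat
printf 'seqno: 9\nsafe_to_bootstrap: 0\n'  > /tmp/pxc-demo/node3.dat

# The file with the highest seqno names the node to bootstrap from
best=$(for f in /tmp/pxc-demo/*.dat; do
  printf '%s %s\n' "$(awk '/^seqno:/{print $2}' "$f")" "$f"
done | sort -n | tail -1 | awk '{print $2}')
echo "bootstrap from: $best"
```

Note that after an unclean shutdown a node's seqno may be -1; in that case the position must first be recovered (for example with mysqld_safe --wsrep-recover) before comparing.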
Errors and fixes
mysql@bootstrap.service fails to start
[root@k8s-master01 software]# systemctl status mysql@bootstrap.service
● mysql@bootstrap.service - Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap
Loaded: loaded (/usr/lib/systemd/system/mysql@.service; disabled; vendor preset: disabled)
Active: failed (Result: timeout) since 二 2024-06-18 14:51:54 CST; 31min ago
Process: 46869 ExecStopPost=/usr/bin/mysql-systemd stop-post (code=killed, signal=TERM)
Process: 45759 ExecStop=/usr/bin/mysql-systemd stop (code=exited, status=2)
Process: 45293 ExecStartPost=/usr/bin/mysql-systemd start-post $MAINPID (code=exited, status=1/FAILURE)
Process: 45292 ExecStart=/usr/bin/mysqld_safe --basedir=/usr ${EXTRA_ARGS} (code=exited, status=1/FAILURE)
Process: 46834 ExecStartPre=/usr/bin/mysql-systemd start-pre (code=exited, status=1/FAILURE)
Main PID: 45292 (code=exited, status=1/FAILURE)
6月 18 14:36:54 k8s-master01 systemd[1]: Starting Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap...
6月 18 14:36:54 k8s-master01 mysql-systemd[46834]: WARNING: PXC is in bootstrap mode. To switch to normal operation, first stop the mysql@bo... service
6月 18 14:36:54 k8s-master01 systemd[1]: mysql@bootstrap.service: control process exited, code=exited status=1
6月 18 14:51:54 k8s-master01 systemd[1]: mysql@bootstrap.service stop-post timed out. Terminating.
6月 18 14:51:54 k8s-master01 systemd[1]: Failed to start Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap.
6月 18 14:51:54 k8s-master01 systemd[1]: Unit mysql@bootstrap.service entered failed state.
6月 18 14:51:54 k8s-master01 systemd[1]: mysql@bootstrap.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
The cluster nodes are otherwise working normally.
Check cat /var/lib/mysql/grastate.dat:
every node shows safe_to_bootstrap: 0
[root@k8s-worker01 pxc]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 01f6d166-2d36-11ef-9781-f3dcdd213fc6
seqno: 12
safe_to_bootstrap: 0
Stop mysql node by node:
systemctl stop mysql
Check again: on the node stopped last, the value is 1:
[root@k8s-worker01 pxc]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 01f6d166-2d36-11ef-9781-f3dcdd213fc6
seqno: 12
safe_to_bootstrap: 1
Bootstrap from that node:
systemctl start mysql@bootstrap.service
Then start the other nodes:
systemctl start mysql