diff --git a/8.0/release-notes/8.0.36-28.html b/8.0/release-notes/8.0.36-28.html index 4a949671..b3e00a4f 100644 --- a/8.0/release-notes/8.0.36-28.html +++ b/8.0/release-notes/8.0.36-28.html @@ -2652,6 +2652,9 @@
PXC-4277: A three-node Percona XtraDB Cluster cluster was in an inconsistent state with ALTER .. ALGORITHM=INPLACE
.
Install Percona XtraDB Cluster
@@ -2673,7 +2676,7 @@This documentation is for the latest release: Percona XtraDB Cluster 8.0.36-28 (Release Notes).
Percona XtraDB Cluster is a database clustering solution for MySQL. It ensures high availability, prevents downtime and data loss, and provides linear scalability for a growing environment.
"},{"location":"index.html#features-of-percona-xtradb-cluster","title":"Features of Percona XtraDB Cluster","text":"Feature Details Synchronous replication Data is written to all nodes simultaneously, or not written at all in case of a failure even on a single node Multi-source replication Any node can trigger a data update. True parallel replication Multiple threads on replica performing replication on row level Automatic node provisioning You simply add a node and it automatically syncs. Data consistency No more unsynchronized nodes. PXC Strict Mode Avoids the use of tech preview features and unsupported features Configuration script for ProxySQL Percona XtraDB Cluster includes theproxysql-admin
tool that automatically configures Percona XtraDB Cluster nodes using ProxySQL. Automatic configuration of SSL encryption Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic
variable that enables automatic configuration of SSL encryption Optimized Performance Percona XtraDB Cluster performance is optimized to scale with a growing production workload Percona XtraDB Cluster 8.0 is fully compatible with MySQL Server Community Edition 8.0 and Percona Server for MySQL 8.0. The cluster has the following compatibilities:
Data - use the data created by any MySQL variant.
Application - no changes or minimal application changes are required for an application to use the cluster.
See also
Overview of changes in the most recent PXC release
Important changes in Percona XtraDB Cluster 8.0
MySQL Community Edition
Percona Server for MySQL
How We Made Percona XtraDB Cluster Scale
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"add-node.html","title":"Add nodes to cluster","text":"New nodes that are properly configured are provisioned automatically. When you start a node with the address of at least one other running node in the wsrep_cluster_address
variable, this node automatically joins and synchronizes with the cluster.
Note
Any existing data and configuration will be overwritten to match the data and configuration of the DONOR node. Do not join several nodes at the same time to avoid overhead due to large amounts of traffic when a new node joins.
Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer and the wsrep_sst_method
variable is always set to xtrabackup-v2
.
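For reference, the following is a minimal sketch of the relevant settings on a joining node; the IP addresses and cluster name are placeholders and must match your own cluster:
[mysqld]\n# Addresses of already running nodes; the joiner contacts one of them\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\nwsrep_cluster_name=pxc-cluster\n# SST method, always xtrabackup-v2 in Percona XtraDB Cluster 8.0\nwsrep_sst_method=xtrabackup-v2\n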
Start the second node using the following command:
[root@pxc2 ~]# systemctl start mysql\n
After the server starts, it receives SST automatically.
To check the status of the second node, run the following:
mysql@pxc2> show status like 'wsrep%';\n
Expected output +----------------------------------+--------------------------------------------------+\n| Variable_name | Value |\n+----------------------------------+--------------------------------------------------+\n| wsrep_local_state_uuid | a08247c1-5807-11ea-b285-e3a50c8efb41 |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | |\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_provider_capabilities | :MULTI_MASTER:CERTIFICATION: ... |\n| wsrep_provider_name | Galera |\n| wsrep_provider_vendor | Codership Oy <info@codership.com> |\n| wsrep_provider_version | 4.3(r752664d) |\n| wsrep_ready | ON |\n| ... | ... | \n+----------------------------------+--------------------------------------------------+\n75 rows in set (0.00 sec)\n
The output of SHOW STATUS
shows that the new node has been successfully added to the cluster. The cluster size is now 2 nodes, the cluster is the primary component, and the node is fully connected and ready to receive write-set replication.
If the state of the second node is Synced
as in the previous example, then the node received a full SST, is synchronized with the cluster, and you can proceed to add the next node.
Note
If the state of the node is Joiner
, it means that SST hasn\u2019t finished. Do not add new nodes until all others are in Synced
state.
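A quick, targeted check of the joiner state before adding the next node (a minimal sketch; the value you want to see is Synced):
mysql@pxc2> SHOW STATUS LIKE 'wsrep_local_state_comment';\n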
To add the third node, start it as usual:
[root@pxc3 ~]# systemctl start mysql\n
To check the status of the third node, run the following:
mysql@pxc3> show status like 'wsrep%';\n
The output shows that the new node has been successfully added to the cluster. The cluster size is now 3 nodes, the cluster is the primary component, and the node is fully connected and ready to receive write-set replication.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | ... |\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"add-node.html#next-steps","title":"Next steps","text":"When you add all nodes to the cluster, you can verify replication by running queries and manipulating data on nodes to see if these changes are synchronized across the cluster.
"},{"location":"add-node.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"apparmor.html","title":"Enable AppArmor","text":"Percona XtraDB Cluster contains several AppArmor profiles. Multiple profiles allow for easier maintenance because the mysqld
profile is decoupled from the SST script profile. This separation allows the introduction of other SST methods or scripts with their own profiles.
The following profiles are available:
An extended version of the Percona Server profile, which allows the execution of the SST script.
An xtrabackup-v2 SST script profile located in /etc/apparmor.d/usr.bin.wsrep_sst_xtrabackup-v2
The mysqld
profile allows the execution of the SST script in PUx mode with the /{usr/}bin/wsrep_sst_* PUx rule. If the SST script has its own profile, that profile is applied; if it does not, the script runs in unconfined mode. The system administrator can change the execution mode to Pix, which falls back to inherited mode if the SST script profile is absent.
The mysqld
profile and the SST
script profile can be adjusted, such as moving the data directory, in the same way as modifying the mysqld profile in Percona Server.
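For example, if you relocate the data directory, a hedged sketch of the change is to add rules for the new path to the profile and then reload it with apparmor_parser; the /data/mysql path below is an assumption, adjust it to your layout:
# Add rules such as the following to /etc/apparmor.d/usr.sbin.mysqld (example path)\n#   /data/mysql/ r,\n#   /data/mysql/** rwk,\n$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld\n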
pxc_encrypt_cluster_traffic
","text":"By default, the pxc_encrypt_cluster_traffic
is ON
, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory since that location is overwritten during the SST process.
The Set up the certificates section describes the certificate setup.
The following AppArmor profile rule grants access to certificates located in /etc/mysql/certs. You must be root or have sudo
privileges.
# Allow config access\n /etc/mysql/** r,\n
This rule is present in both profiles (usr.sbin.mysqld and usr.bin.wsrep_sst_xtrabackup-v2). The rule allows the administrator to store the certificates anywhere inside the /etc/mysql/ directory. If the certificates are located outside of the specified directory, you must add an additional rule that allows access to the certificates in both profiles. The rule must contain the path to the certificate location, like the following:
# Allow config access\n /path/to/certificates/* r,\n
The server certificates must be accessible to the mysql user and readable only by that user.
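A minimal sketch of setting such ownership and permissions, assuming the certificates are stored in /etc/mysql/certs:
$ sudo chown -R mysql:mysql /etc/mysql/certs\n$ sudo chmod 600 /etc/mysql/certs/*.pem\n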
"},{"location":"apparmor.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"apt.html","title":"Install Percona XtraDB Cluster on Debian or Ubuntu","text":"Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.
The packages are available in the official Percona software repository and on the download page. It is recommended to install Percona XtraDB Cluster from the official repository using APT.
We gather Telemetry data in the Percona packages and Docker images.
"},{"location":"apt.html#prerequisites","title":"Prerequisites","text":"See also
For more information, see Enabling AppArmor.
"},{"location":"apt.html#install-from-repository","title":"Install from Repository","text":"Update the sytem:
sudo apt update\n
Install the necessary packages:
sudo apt install -y wget gnupg2 lsb-release curl\n
Download the repository package:
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n
Install the package with dpkg
:
sudo dpkg -i percona-release_latest.generic_all.deb\n
Refresh the local cache to update the package information:
sudo apt update\n
Enable the release
repository for Percona XtraDB Cluster:
sudo percona-release setup pxc80\n
Install the cluster:
sudo apt install -y percona-xtradb-cluster\n
During the installation, you are prompted to provide a password for the root
user on the database node.
Note
If needed, you could also install the percona-xtradb-cluster-full
meta-package, which includes the following additional packages:
libperconaserverclient21
libperconaserverclient21-dev
percona-xtradb-cluster
percona-xtradb-cluster-client
percona-xtradb-cluster-common
percona-xtradb-cluster-dbg
percona-xtradb-cluster-full
percona-xtradb-cluster-garbd
percona-xtradb-cluster-garbd-debug
percona-xtradb-cluster-server
percona-xtradb-cluster-server-debug
percona-xtradb-cluster-source
percona-xtradb-cluster-test
After you install Percona XtraDB Cluster and stop the mysql
service, configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.
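For example, to stop the service before editing the configuration (the mysql service name is assumed here):
$ sudo systemctl stop mysql\n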
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"bootstrap.html","title":"Bootstrap the first node","text":"After you configure all PXC nodes, initialize the cluster by bootstrapping the first node. The initial node must contain all the data that you want to be replicated to other nodes.
Bootstrapping implies starting the first node without any known cluster addresses: if the wsrep_cluster_address
variable is empty, Percona XtraDB Cluster assumes that this is the first node and initializes the cluster.
Instead of changing the configuration, start the first node using the following command:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
When you start the node using the previous command, it runs in bootstrap mode with wsrep_cluster_address=gcomm://
. This tells the node to initialize the cluster with wsrep_cluster_conf_id
variable set to 1
. After you add other nodes to the cluster, you can then restart this node as normal, and it will use standard configuration again.
Note
A service started with mysql@bootstrap
must be stopped using the same command. For example, the systemctl stop mysql
command does not stop an instance started with the mysql@bootstrap
command.
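For example, to stop a node that was started in bootstrap mode:
[root@pxc1 ~]# systemctl stop mysql@bootstrap.service\n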
To make sure that the cluster has been initialized, run the following:
mysql@pxc1> show status like 'wsrep%';\n
The output shows that the cluster size is 1 node, it is the primary component, the node is in the Synced
state, it is fully connected and ready for write-set replication.
+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | ... |\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"bootstrap.html#next-steps","title":"Next steps","text":"After initializing the cluster, you can add other nodes.
"},{"location":"bootstrap.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"certification.html","title":"Certification in Percona XtraDB Cluster","text":"Percona XtraDB Cluster replicates actions executed on one node to all other nodes in the cluster, and makes it fast enough to appear as if it is synchronous (virtually synchronous).
The following types of actions exist:
DDL actions are executed using Total Order Isolation (TOI). We can ignore Rolling Schema Upgrades (RSU).
DML actions are executed using the normal Galera replication protocol.
Note
This manual page assumes the reader is familiar with TOI and the MySQL replication protocol.
DML (INSERT
, UPDATE
, and DELETE
) operations effectively change the state of the database, and all such operations are recorded in XtraDB by registering a unique object identifier (key) for each change (an update or a new addition).
append_key
operation. An append_key
operation registers the key of the data object that has undergone change by the transaction. The key for rows can be represented in three parts as db_name
, table_name
, and pk_columns_for_table
(if pk
is absent, a hash of the complete row is calculated). This ensures that there is quick and compact meta information about the rows that this transaction has touched or modified. This information is passed on as part of the write-set for certification to all the nodes in the cluster while the transaction is in the commit phase.
For a transaction to commit, it has to pass XtraDB/Galera certification, ensuring that transactions don\u2019t conflict with any other changes posted on the cluster group/channel. Certification will add the keys modified by a given transaction to its own central certification vector (CCV), represented by cert_index_ng
. If the said key is already part of the vector, then conflict resolution checks are triggered.
Conflict resolution traces the reference transaction (the one that last modified this item in the cluster group). If this reference transaction is from some other node, that suggests the same data was modified by the other node, and that node's changes have already been certified by the local node that is executing the check. In such cases, the transaction that arrived later fails to certify.
Changes made to database objects are bin-logged. This is similar to how MySQL does it for replication with its Source-Replica ecosystem, except that a packet of changes from a given transaction is created and named as a write-set.
Once the client/user issues a COMMIT
, Percona XtraDB Cluster will run a commit hook. Commit hooks ensure the following:
Flush the binary logs.
Check if the transaction needs replication (not needed for read-only transactions like SELECT
).
If a transaction needs replication, then it invokes a pre-commit hook in the Galera ecosystem. During this pre-commit hook, a write-set is written in the group channel by a replicate operation. All nodes (including the one that executed the transaction) subscribe to this group-channel and read the write-set.
gcs_recv_thread
is the first to receive the packet, which is then processed through different action handlers.
Each packet read from the group-channel is assigned an id
, which is a counter maintained locally by each node, in sync with the group. When any new node joins the group/cluster, a seed-id for it is initialized to the current active id of the group/cluster.
There is an inherent assumption (enforced by the protocol) that all nodes read packets from the channel in the same order; that way, even though each packet doesn't carry id
information, it is inherently established using the locally maintained id
value.
The following example shows what happens in a common situation. act_id
is incremented and assigned only for totally ordered actions, and only in primary state (skip messages while in state exchange).
$ rcvd->id = ++group->act_id_;\n
Note
This is an amazing way to solve the problem of id coordination in multi-source systems. Otherwise, a node would have to first get an id from a central system or through a separate agreed protocol, and then use it for the packet, thereby doubling the round-trip time.
"},{"location":"certification.html#conflicts","title":"Conflicts","text":"The following happens if two nodes get ready with their packet at same time:
Both nodes will be allowed to put the packet on the channel. That means the channel will see packets from different nodes queued one behind another.
The following example shows what happens if two nodes modify the same set of rows. Nodes are in sync until this point:
$ create -> insert (1,2,3,4)\n
Node 1: update i = i + 10;
Node 2: update i = i + 100;
Let\u2019s associate transaction ID (trx-id
) for an update transaction that is executed on Node 1 and Node 2 in parallel. Although the real algorithm is more involved (with uuid
+ seqno
), it is conceptually the same, so we are using trx_id
.
Node 1: update action: trx-id=n1x
Node 2: update action: trx-id=n2x
Both node packets are added to the channel, but the transactions are conflicting. The protocol says: FIRST WRITE WINS.
So in this case, whoever is first to write to the channel will get certified. Let\u2019s say Node 2 is first to write the packet, and then Node 1 makes changes immediately after it.
Note
Each node subscribes to all packets, including its own packet.
Node 2 will see its own packet and will process it. Then it will see the packet from Node 1, try to certify it, and fail.
Node 1 will see the packet from Node 2 and will process it.
Note
InnoDB allows isolation, so Node 1 can process packets from Node 2 independent of Node 1 transaction changes
Then Node 1 will see its own packet, try to certify it, and fail.
Note
Even though the packet originated from Node 1, it will undergo certification to catch cases like these.
The certification protocol can be described using the previous example. The central certification vector (CCV) is updated to reflect the reference transaction.
n2x
. Node 2 then gets the packet from Node 1 for certification. The packet key is already present in the CCV, with the reference transaction set to n2x
, whereas write-set proposes setting it to n1x
. This causes a conflict, which in turn causes the transaction from Node 1 to fail the certification test.
n2x
. Using the same case as explained above, Node 1 certification also rejects the packet from Node 1.
This suggests that the node doesn\u2019t need to wait for certification to complete, but just needs to ensure that the packet is written to the channel. The applier transaction will always win and the local conflicting transaction will be rolled back.
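From the client's point of view, the losing local transaction is rolled back and is typically reported as a deadlock. A hedged sketch of what this may look like (the exact error text varies by version):
mysql> COMMIT;\nERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction\n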
The following example shows what happens if one of the nodes has local changes that are not synced with the group:
mysql> create (id primary key) -> insert (1), (2), (3), (4);\n
Expected output node-1: wsrep_on=0; insert (5); wsrep_on=1\nnode-2: insert(5).\n
The insert(5)
statement will generate a write-set that will then be replicated to Node 1. Node 1 will try to apply it but will fail with duplicate-key-error
, because 5 already exists.
XtraDB will flag this as an error, which would eventually cause Node 1 to shut down.
"},{"location":"certification.html#increment-gtid","title":"Increment GTID","text":"GTID is incremented only when the transaction passes certification, and is ready for commit. That way errant packets don\u2019t cause GTID to increment.
Also, group packet id
is not confused with GTID. Without errant packets, it may seem that these two counters are the same, but they are not related.
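To observe this, you can compare the executed GTID set on the nodes; it advances only for transactions that passed certification and committed. A minimal sketch using standard MySQL 8.0 syntax:
mysql> SELECT @@global.gtid_executed;\n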
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"compile.html","title":"Compile and install from Source Code","text":"If you want to compile Percona XtraDB Cluster, you can find the source code on GitHub. Before you begin, make sure that the following packages are installed:
apt yum Gitgit
git
SCons scons
scons
GCC gcc
gcc
g++ g++
gcc-c++
OpenSSL openssl
openssl
Check check
check
CMake cmake
cmake
Bison bison
bison
Boost libboost-all-dev
boost-devel
Asio libasio-dev
asio-devel
Async I/O libaio-dev
libaio-devel
ncurses libncurses5-dev
ncurses-devel
Readline libreadline-dev
readline-devel
PAM libpam-dev
pam-devel
socat socat
socat
curl libcurl-dev
libcurl-devel
You will likely have all or most of the packages already installed. If you are not sure, run one of the following commands to install any missing dependencies:
For Debian or Ubuntu:
$ sudo apt install -y git scons gcc g++ openssl check cmake bison \\\nlibboost-all-dev libasio-dev libaio-dev libncurses5-dev libreadline-dev \\\nlibpam-dev socat libcurl-dev\n
For Red Hat Enterprise Linux or CentOS:
$ sudo yum install -y git scons gcc gcc-c++ openssl check cmake bison \\\nboost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel \\\nsocat libcurl-devel\n
To compile Percona XtraDB Cluster from source code:
Clone the Percona XtraDB Cluster repository:
$ git clone https://github.com/percona/percona-xtradb-cluster.git\n
Important
Clone the latest repository or update it to the latest state. An old codebase may not be compatible with the build script.
Check out the 8.0
branch and initialize submodules:
$ cd percona-xtradb-cluster\n$ git checkout 8.0\n$ git submodule update --init --recursive\n
Download the matching Percona XtraBackup 8.0 tarball (*.tar.gz) for your operating system from Percona Downloads.
The following example extracts the Percona XtraBackup 8.0.32-25 tar.gz file to the target directory ./pxc-build
:
$ tar -xvf percona-xtrabackup-8.0.32-25-Linux-x86_64.glibc2.17.tar.gz -C ./pxc-build\n
Run the build script ./build-ps/build-binary.sh
. By default, it attempts building into the current directory. Specify the target output directory, such as ./pxc-build
:
$ mkdir ./pxc-build\n$ ./build-ps/build-binary.sh ./pxc-build\n
When the compilation completes, pxc-build
contains a tarball, such as Percona-XtraDB-Cluster-8.0.x86_64.tar.gz
, that you can deploy on your system.
Note
The exact version and release numbers may differ.
"},{"location":"compile.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-cluster-rhel.html","title":"Configure a cluster on Red Hat-based distributions","text":"This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Red Hat or CentOS 7 servers, using the packages from Percona repositories.
Node 1
Host name: percona1
IP address: 192.168.70.71
Node 2
Host name: percona2
IP address: 192.168.70.72
Node 3
Host name: percona3
IP address: 192.168.70.73
The procedure described in this tutorial requires the following:
All three nodes have Red Hat or CentOS 7 installed.
The firewall on all nodes is configured to allow connecting to ports 3306, 4444, 4567 and 4568.
SELinux on all nodes is disabled.
Different from previous versions
The variable wsrep_sst_auth
has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session
. During SST, the user mysql.pxc.sst.user
and the role mysql.pxc.sst.role
are created on the donor node.
Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS.
"},{"location":"configure-cluster-rhel.html#step-2-configuring-the-first-node","title":"Step 2. Configuring the first node","text":"Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.
Make sure that the configuration file /etc/my.cnf
on the first node (percona1
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended.\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 1 address\nwsrep_node_address=192.168.70.71\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n
Start the first node with the following command:
[root@percona1 ~] # systemctl start mysql@bootstrap.service\n
The previous command will start the cluster with initial wsrep_cluster_address
variable set to gcomm://
. If the node or MySQL are restarted later, there will be no need to change the configuration file.
After the first node has been started, cluster status can be checked with the following command:
mysql> show status like 'wsrep%';\n
This output shows that the cluster has been successfully bootstrapped.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n
Copy the automatically generated temporary password for the superuser account:
$ sudo grep 'temporary password' /var/log/mysqld.log\n
Use this password to log in as root:
$ mysql -u root -p\n
Change the password for the superuser account and log out. For example:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'r00tP@$$';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
Make sure that the configuration file /etc/my.cnf
on the second node (percona2
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 2 address\nwsrep_node_address=192.168.70.72\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the second node with the following command:
[root@percona2 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on both nodes. The following is an example of status from the second node (percona2
):
mysql> show status like 'wsrep%';\n
The output shows that the new node has been successfully added to the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
Make sure that the MySQL configuration file /etc/my.cnf
on the third node (percona3
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.73\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the third node with the following command:
[root@percona3 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on all three nodes. The following is an example of status from the third node (percona3
):
mysql> show status like 'wsrep%';\n
The output confirms that the third node has joined the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
To test replication, let's create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.
Create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@percona3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-cluster-ubuntu.html","title":"Configure a cluster on Debian or Ubuntu","text":"This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Ubuntu 14 LTS servers, using the packages from Percona repositories.
Node 1
Host name: pxc1
IP address: 192.168.70.61
Node 2
Host name: pxc2
IP address: 192.168.70.62
Node 3
Host name: pxc3
IP address: 192.168.70.63
The procedure described in this tutorial requires the following:
All three nodes have Ubuntu 14 LTS installed.
Firewall on all nodes is configured to allow connecting to ports 3306, 4444, 4567 and 4568.
AppArmor profile for MySQL is disabled.
Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Debian or Ubuntu.
Note
Debian/Ubuntu installation prompts for root password. For this tutorial, set it to Passw0rd
. After the packages have been installed, mysqld
will start automatically. Stop mysqld
on all three nodes using sudo systemctl stop mysql
.
Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.
Make sure that the configuration file /etc/mysql/my.cnf
for the first node (pxc1
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #1 address\nwsrep_node_address=192.168.70.61\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n
Start the first node with the following command:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
This command will start the first node and bootstrap the cluster.
After the first node has been started, cluster status can be checked with the following command:
mysql> show status like 'wsrep%';\n
The following output shows the cluster status:
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n
This output shows that the cluster has been successfully bootstrapped.
To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:
mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';\nmysql@pxc1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';\nmysql@pxc1> FLUSH PRIVILEGES;\n
Note
MySQL root account can also be used for performing SST, but it is more secure to use a different (non-root) user for this.
"},{"location":"configure-cluster-ubuntu.html#step-3-configure-the-second-node","title":"Step 3. Configure the second node","text":"Make sure that the configuration file /etc/mysql/my.cnf
on the second node (pxc2
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #2 address\nwsrep_node_address=192.168.70.62\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the second node with the following command:
[root@pxc2 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of status from the second node (pxc2
):
mysql> show status like 'wsrep%';\n
The following output shows that the new node has been successfully added to the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
Make sure that the MySQL configuration file /etc/mysql/my.cnf
on the third node (pxc3
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.63\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the third node with the following command:
[root@pxc3 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of status from the third node (pxc3
):
mysql> show status like 'wsrep%';\n
The following output confirms that the third node has joined the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"configure-cluster-ubuntu.html#test-replication","title":"Test replication","text":"To test replication, lets create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.
Create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@percona3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-nodes.html","title":"Configure nodes for write-set replication","text":"After installing Percona XtraDB Cluster on each node, you need to configure the cluster. In this section, we will demonstrate how to configure a three node cluster:
Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63 Stop the Percona XtraDB Cluster server. The server is not started automatically after the installation completes; you need this step only if you have started the server manually.
$ sudo service mysql stop\n
Edit the configuration file of the first node to provide the cluster settings.
If you use Debian or Ubuntu, edit /etc/mysql/mysql.conf.d/mysqld.cnf
:
wsrep_provider=/usr/lib/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
If you use Red Hat or CentOS, edit /etc/my.cnf
. Note that on these systems you set the wsrep_provider option to a different value:
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
Configure node 1.
wsrep_node_name=pxc1\nwsrep_node_address=192.168.70.61\npxc_strict_mode=ENFORCING\n
Set up node 2 and node 3 in the same way: Stop the server and update the configuration file applicable to your system. All settings are the same except for wsrep_node_name
and wsrep_node_address
.
For node 2
wsrep_node_name=pxc2\nwsrep_node_address=192.168.70.62\n
For node 3
wsrep_node_name=pxc3\nwsrep_node_address=192.168.70.63\n
Set up the traffic encryption settings. Each node of the cluster must use the same SSL certificates.
[mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n
Important
In Percona XtraDB Cluster 8.0, the Encrypting Replication Traffic is enabled by default (via the pxc-encrypt-cluster-traffic
variable).
The replication traffic encryption cannot be enabled on a running cluster. If it was disabled before the cluster was bootstrapped, the cluster must to stopped. Then set up the encryption, and bootstrap (see Bootstrapping the First Node
) again.
See also
More information about the security settings in Percona XtraDB Cluster * Security Basics
* Encrypting PXC Traffic
* SSL Automatic Configuration
Here is an example of a full configuration file installed on CentOS to /etc/my.cnf
.
# Template my.cnf for PXC\n# Edit to your requirements.\n[client]\nsocket=/var/lib/mysql/mysql.sock\n[mysqld]\nserver-id=1\ndatadir=/var/lib/mysql\nsocket=/var/lib/mysql/mysql.sock\nlog-error=/var/log/mysqld.log\npid-file=/var/run/mysqld/mysqld.pid\n# Binary log expiration period is 604800 seconds, which equals 7 days\nbinlog_expire_logs_seconds=604800\n######## wsrep ###############\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n# Cluster connection URL contains IPs of nodes\n#If no IP is found, this implies that a new cluster needs to be created,\n#in order to do that you need to bootstrap this node\nwsrep_cluster_address=gcomm://\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n# Slave thread to use\nwsrep_slave_threads=8\nwsrep_log_conflicts\n# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n# Node IP address\n#wsrep_node_address=192.168.70.63\n# Cluster name\nwsrep_cluster_name=pxc-cluster\n#If wsrep_node_name is not specified, then system hostname will be used\nwsrep_node_name=pxc-cluster-node-1\n#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER\npxc_strict_mode=ENFORCING\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
"},{"location":"configure-nodes.html#next-steps-bootstrap-the-first-node","title":"Next Steps: Bootstrap the first node","text":"After you configure all your nodes, initialize Percona XtraDB Cluster by bootstrapping the first node according to the procedure described in Bootstrapping the First Node.
"},{"location":"configure-nodes.html#essential-configuration-variables","title":"Essential configuration variables","text":"wsrep_provider
Specify the path to the Galera library. The location depends on the distribution:
Debian and Ubuntu: /usr/lib/galera4/libgalera_smm.so
Red Hat and CentOS: /usr/lib64/galera4/libgalera_smm.so
wsrep_cluster_name
Specify the logical name for your cluster. It must be the same for all nodes in your cluster.
wsrep_cluster_address
Specify the IP addresses of nodes in your cluster. At least one is required for a node to join the cluster, but it is recommended to list addresses of all nodes. This way if the first node in the list is not available, the joining node can use other addresses.
Note
No addresses are required for the initial node in the cluster. However, it is recommended to specify them and properly bootstrap the first node. This will ensure that the node is able to rejoin the cluster if it goes down in the future.
wsrep_node_name
Specify the logical name for each individual node. If this variable is not specified, the host name will be used.
wsrep_node_address
Specify the IP address of this particular node.
wsrep_sst_method
By default, Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer. xtrabackup-v2
is the only supported option for this variable. This method requires a user for SST to be set up on the initial node.
pxc_strict_mode
PXC Strict Mode is enabled by default and set to ENFORCING
, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.
binlog_format
Galera supports only row-level replication, so set binlog_format=ROW
.
default_storage_engine
Galera fully supports only the InnoDB storage engine. It will not work correctly with MyISAM or any other non-transactional storage engines. Set this variable to default_storage_engine=InnoDB
.
innodb_autoinc_lock_mode
Galera supports only interleaved (2
) lock mode for InnoDB. Setting the traditional (0
) or consecutive (1
) lock mode can cause replication to fail due to unresolved deadlocks. Set this variable to innodb_autoinc_lock_mode=2
.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"copyright-and-licensing-information.html","title":"Copyright and licensing information","text":""},{"location":"copyright-and-licensing-information.html#documentation-licensing","title":"Documentation licensing","text":"Percona XtraDB Cluster documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.
"},{"location":"copyright-and-licensing-information.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"crash-recovery.html","title":"Crash recovery","text":"Unlike the standard MySQL replication, a PXC cluster acts like one logical entity, which controls the status and consistency of each node as well as the status of the whole cluster. This allows maintaining the data integrity more efficiently than with traditional asynchronous replication without losing safe writes on multiple nodes at the same time.
However, there are scenarios where the database service can stop with no node being able to serve requests.
"},{"location":"crash-recovery.html#scenario-1-node-a-is-gracefully-stopped","title":"Scenario 1: Node A is gracefully stopped","text":"In a three node cluster (node A, Node B, node C), one node (node A, for example) is gracefully stopped: for the purpose of maintenance, configuration change, etc.
In this case, the other nodes receive a \u201cgood bye\u201d message from the stopped node and the cluster size is reduced; some properties like quorum calculation or auto increment are automatically changed. As soon as node A is started again, it joins the cluster based on its wsrep_cluster_address
variable in my.cnf
.
If the writeset cache (gcache.size
) on nodes B and/or C still has all the transactions executed while node A was down, joining is possible via IST. If IST is impossible due to missing transactions in donor\u2019s gcache, the fallback decision is made by the donor and SST is started automatically.
Similar to Scenario 1: Node A is gracefully stopped, the cluster size is reduced to 1 \u2014 even the single remaining node C forms the primary component and is able to serve client requests. To get the nodes back into the cluster, you just need to start them.
However, when a new node joins the cluster, node C will be switched to the \u201cDonor/Desynced\u201d state as it has to provide the state transfer at least to the first joining node. It is still possible to read/write to it during that process, but it may be much slower, which depends on how large amount of data should be sent during the state transfer. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So, it is best to avoid the situation when only one node is up.
If you restart node A and then node B, you may want to make sure note B does not use node A as the state transfer donor: node A may not have all the needed writesets in its gcache. Specify node C node as the donor in your configuration file and start the mysql service:
$ systemctl start mysql\n
See also
Galera Documentation: wsrep_sst_donor option
"},{"location":"crash-recovery.html#scenario-3-all-three-nodes-are-gracefully-stopped","title":"Scenario 3: All three nodes are gracefully stopped","text":"The cluster is completely stopped and the problem is to initialize it again. It is important that a PXC node writes its last executed position to the grastate.dat
file.
By comparing the seqno number in this file, you can see which is the most advanced node (most likely the last stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform the full SST to join the cluster initialized from the less advanced one. As a result, some transactions will be lost). To bootstrap the first node, invoke the startup script like this:
$ systemctl start mysql@bootstrap.service\n
Note
Even though you bootstrap from the most advanced node, the other nodes have a lower sequence number. They will still have to join via the full SST because the Galera Cache is not retained on restart.
For this reason, it is recommended to stop writes to the cluster before its full shutdown, so that all nodes can stop at the same position. See also pc.recovery
.
This is the case when one node becomes unavailable due to power outage, hardware failure, kernel panic, mysqld crash, kill -9 on mysqld pid, etc.
Two remaining nodes notice the connection to node A is down and start trying to re-connect to it. After several timeouts, node A is removed from the cluster. The quorum is saved (2 out of 3 nodes are up), so no service disruption happens. After it is restarted, node A joins automatically (as described in Scenario 1: Node A is gracefully stopped).
"},{"location":"crash-recovery.html#scenario-5-two-nodes-disappear-from-the-cluster","title":"Scenario 5: Two nodes disappear from the cluster","text":"Two nodes are not available and the remaining node (node C) is not able to form the quorum alone. The cluster has to switch to a non-primary mode, where MySQL refuses to serve any SQL queries. In this state, the mysqld process on node C is still running and can be connected to but any statement related to data fails with an error
> SELECT * FROM test.sbtest1;\n
The error message ERROR 1047 (08S01): WSREP has not yet prepared node for application use\n
Reads are possible until node C decides that it cannot access node A and node B. New writes are forbidden.
As soon as the other nodes become available, the cluster is formed again automatically. If node B and node C were just network-severed from node A, but they can still reach each other, they will keep functioning as they still form the quorum.
If node A and node B crashed, you need to enable the primary component on node C manually, before you can bring up node A and node B. The command to do this is:
> SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
This approach only works if the other nodes are down before doing that! Otherwise, you end up with two clusters having different data.
See also
Adding Nodes to Cluster
"},{"location":"crash-recovery.html#scenario-6-all-nodes-went-down-without-a-proper-shutdown-procedure","title":"Scenario 6: All nodes went down without a proper shutdown procedure","text":"This scenario is possible in case of a datacenter power failure or when hitting a MySQL or Galera bug. Also, it may happen as a result of data consistency being compromised where the cluster detects that each node has different data. The grastate.dat
file is not updated and does not contain a valid sequence number (seqno). It may look like this:
$ cat /var/lib/mysql/grastate.dat\n# GALERA saved state\nversion: 2.1\nuuid: 220dcdcb-1629-11e4-add3-aec059ad3734\nseqno: -1\nsafe_to_bootstrap: 0\n
In this case, you cannot be sure that all nodes are consistent with each other. We cannot use safe_to_bootstrap variable to determine the node that has the last transaction committed as it is set to 0 for each node. An attempt to bootstrap from such a node will fail unless you start mysqld
with the --wsrep-recover
parameter:
$ mysqld --wsrep-recover\n
Search the output for the line that reports the recovered position after the node UUID (1122 in this case):
Expected output...\n... [Note] WSREP: Recovered position: 220dcdcb-1629-11e4-add3-aec059ad3734:1122\n...\n
The node where the recovered position is marked by the greatest number is the best bootstrap candidate. In its grastate.dat
file, set the safe_to_bootstrap variable to 1. Then, bootstrap from this node.
Note
After a shutdown, you can boostrap from the node which is marked as safe in the grastate.dat
file.
...\nsafe_to_bootstrap: 1\n...\n
See also
Galera Documentation Introducing the Safe-To-Bootstrap feature in Galera Cluster
In recent Galera versions, the option pc.recovery
(enabled by default) saves the cluster state into a file named gvwstate.dat
on each member node. As the name of this option suggests (pc \u2013 primary component), it saves only a cluster being in the PRIMARY state. An example content of the file may look like this:
cat /var/lib/mysql/gvwstate.dat\nmy_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9\n#vwbeg\nview_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3\nbootstrap: 0\nmember: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0\nmember: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0\nmember: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0\n#vwend\n
We can see a three node cluster with all members being up. Thanks to this new feature, the nodes will try to restore the primary component once all the members start to see each other. This makes the PXC cluster automatically recover from being powered down without any manual intervention! In the logs we will see:
"},{"location":"crash-recovery.html#scenario-7-the-cluster-loses-its-primary-state-due-to-split-brain","title":"Scenario 7: The cluster loses its primary state due to split brain","text":"For the purpose of this example, let\u2019s assume we have a cluster that consists of an even number of nodes: six, for example. Three of them are in one location while the other three are in another location and they lose network connectivity. It is best practice to avoid such topology: if you cannot have an odd number of real nodes, you can use an additional arbitrator (garbd) node or set a higher pc.weight to some nodes. But when the split brain happens any way, none of the separated groups can maintain the quorum: all nodes must stop serving requests and both parts of the cluster will be continuously trying to re-connect.
If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as described in Scenario 5: Two nodes disappear from the cluster
> SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
After this, you are able to work on the manually restored part of the cluster, and the other half should be able to automatically re-join using IST as soon as the network link is restored.
Warning
If you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in this case will not make them rejoin until the nodes are restarted and the members specified in the configuration file are connected again.
The Galera replication model strictly enforces data consistency: once an inconsistency is detected, a node that cannot apply a row change because its data differs performs an emergency shutdown, and the only way to bring that node back into the cluster is a full SST.
Based on material from Percona Database Performance Blog
This article is based on the blog post Galera replication - how to recover a PXC cluster by Przemysław Malkowski: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
"},{"location":"crash-recovery.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"data-at-rest-encryption.html","title":"Data at Rest Encryption","text":""},{"location":"data-at-rest-encryption.html#introduction","title":"Introduction","text":"Data at rest encryption refers to encrypting data stored on a disk on a server. If an unauthorized user accesses the data files from the file system, encryption ensures the user cannot read the file contents. Percona Server allows you to enable, disable, and apply encryptions to the following objects:
File-per-tablespace table
Schema
General tablespace
System tablespace
Temporary table
Binary log files
Redo log files
Undo tablespaces
Doublewrite buffer files
Data in transit is data that is transmitted to another node or to a client. Data in transit is encrypted using an SSL connection.
Percona XtraDB Cluster 8.0 supports all data at rest generally-available encryption features available from Percona Server for MySQL 8.0.
"},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_file-plugin","title":"Configure PXC to use keyring_file plugin","text":""},{"location":"data-at-rest-encryption.html#configuration","title":"Configuration","text":"Percona XtraDB Cluster inherits the Percona Server for MySQL behavior to configure the keyring_file
plugin. The following example illustrates how to use the plugin. Review Use the keyring component or keyring plugin for the latest information on the keyring component and plugin.
Note
The keyring_file plugin should not be used for regulatory compliance.
Install the plugin and add the following options in the configuration file:
[mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=<PATH>/keyring\n
The SHOW PLUGINS
statement checks if the plugin has been successfully loaded.
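Alternatively, you can filter the plugin list; this is a sketch using INFORMATION_SCHEMA:
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';\n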
Note
PXC recommends the same configuration on all cluster nodes, and all nodes should have the keyring configured. A mismatch in the keyring configuration does not allow the JOINER node to join the cluster.
If the user has a bootstrapped node with keyring enabled, then upcoming cluster nodes inherit the keyring (the encrypted key) from the DONOR node.
"},{"location":"data-at-rest-encryption.html#usage","title":"Usage","text":"XtraBackup re-encrypts the data using a transition-key and the JOINER node re-encrypts it using a newly generated master-key.
Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible: a higher-version JOINER can join a lower-version DONOR, but not vice versa.
Percona XtraDB Cluster does not allow the combination of nodes with encryption and nodes without encryption to maintain data consistency. For example, the user creates node-1 with encryption (keyring) enabled and node-2 with encryption (keyring) disabled. If the user attempts to create a table with encryption on node-1, the creation fails on node-2, causing data inconsistency. A node fails to start if it fails to load the keyring plugin.
Note
If the user does not specify the keyring parameters, the node does not know that it must load the keyring. The JOINER node may start, but it eventually shuts down when a DML-level inconsistency involving an encrypted tablespace is detected.
If a node does not have an encrypted tablespace, the keyring is not generated, and the keyring file is empty. Creating an encrypted table on the node generates the keyring.
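For example, creating any table with encryption enabled triggers keyring generation; the table name and columns here are only illustrative:
mysql> CREATE TABLE t1 (id INT PRIMARY KEY) ENCRYPTION='Y';\n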
You can rotate the master key as needed; this operation is local to the node. The ALTER INSTANCE ROTATE INNODB MASTER KEY
statement is not replicated across the cluster.
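For example, run the rotation on the node whose key you want to change; the other nodes are not affected:
mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;\n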
The JOINER node generates its keyring.
"},{"location":"data-at-rest-encryption.html#compatibility","title":"Compatibility","text":"Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible. A higher version JOINER can join from lower version DONOR, but not vice-versa.
"},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_vault-plugin","title":"Configure PXC to use keyring_vault plugin","text":""},{"location":"data-at-rest-encryption.html#keyring_vault","title":"keyring_vault","text":"The keyring_vault
plugin allows storing the master key on a Vault server (instead of in a local file, as with keyring_file
).
Warning
The rsync tool does not support the keyring_vault
. Any rsync-based SST on a joiner is aborted if the keyring_vault
is configured.
Configuration options are the same as upstream. The my.cnf
configuration file should contain the following options:
[mysqld]\nearly-plugin-load=\"keyring_vault=keyring_vault.so\"\nkeyring_vault_config=\"<PATH>/keyring_vault_n1.conf\"\n
Also, keyring_vault_n1.conf
file should contain the following:
vault_url = http://127.0.0.1:8200\nsecret_mount_point = secret1\ntoken = e0345eb4-35dd-3ddd-3b1e-e42bb9f2525d\nvault_ca = /data/keyring_vault_confs/vault_ca.crt\n
The detailed description of these options can be found in the upstream documentation.
Vault-server is an external server, so make sure the PXC node can reach the server.
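A quick reachability check from the node, assuming the vault_url from the example configuration above, could look like this (Vault exposes a health endpoint over HTTP):
$ curl -s http://127.0.0.1:8200/v1/sys/health\n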
Note
Percona XtraDB Cluster recommends using the same keyring_plugin type on all cluster nodes. Mixing the keyring plugin types is recommended only while transitioning from keyring_file
-> keyring_vault
or vice-versa.
The nodes do not all need to refer to the same Vault server, but whichever Vault server a node uses must be accessible from that node. The nodes also do not need to use the same mount point.
If the node cannot reach or connect to the Vault server, an error is reported during server boot, and the node refuses to start:
The warning message2018-05-29T03:54:33.859613Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:54:33.977145Z 0 [ERROR] Plugin keyring_vault reported:\n'CURL returned this error code: 7 with error message : Failed to connect\nto 127.0.0.1 port 8200: Connection refused'\n
If some nodes of the cluster are unable to connect to vault-server, this relates only to these specific nodes: e.g., if node-1 can connect, and node-2 cannot connect, only node-2 refuses to start. Also, if the server has a pre-existing encrypted object and on reboot, the server fails to connect to the vault-server, the object is not accessible.
If the Vault server is accessible but the authentication credentials are incorrect, the consequences are the same, and the corresponding error looks like the following:
The warning message2018-05-29T03:58:54.461911Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:58:54.577477Z 0 [ERROR] Plugin keyring_vault reported:\n'Could not retrieve list of keys from Vault. Vault has returned the\nfollowing error(s): [\"permission denied\"]'\n
If the Vault server is accessible but the wrong mount point is configured, there is no error during server boot, but the node still refuses to start:
mysql> CREATE TABLE t1 (c1 INT, PRIMARY KEY pk(c1)) ENCRYPTION='Y';\n
Expected output ERROR 3185 (HY000): Can't find master key from keyring, please check keyring\nplugin is loaded.\n\n... [ERROR] Plugin keyring_vault reported: 'Could not write key to Vault. ...\n... [ERROR] Plugin keyring_vault reported: 'Could not flush keys to keyring'\n
"},{"location":"data-at-rest-encryption.html#mix-keyring-plugin-types","title":"Mix keyring plugin types","text":"With XtraBackup introducing transition-key logic, it is now possible to mix and match keyring plugins. For example, the user has node-1 configured to use the keyring_file
plugin and node-2 configured to use keyring_vault
.
Note
Percona recommends the same configuration for all the nodes of the cluster. Mixing keyring plugin types is recommended only during the transition from one keyring type to another.
"},{"location":"data-at-rest-encryption.html#temporary-file-encryption","title":"Temporary file encryption","text":""},{"location":"data-at-rest-encryption.html#migrate-keys-between-keyring-keystores","title":"Migrate keys between keyring keystores","text":"Percona XtraDB Cluster supports key migration between keystores. The migration can be performed offline or online.
"},{"location":"data-at-rest-encryption.html#offline-migration","title":"Offline migration","text":"In offline migration, the node to migrate is shut down, and the migration server takes care of migrating keys for the said server to a new keystore.
For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file
. To migrate the n2 node to use keyring_vault
, use the following procedure:
Shut down the n2 node.
Start the Migration Server (mysqld
with a special option).
The Migration Server copies the keys from the n2 keyring file and adds them to the vault server.
Start the n2 node with the vault parameter, and the keys are available.
Here is what the migration server output should look like:
Expected output/dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node2/keyring \\\n--keyring-migration-destination=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/vault/keyring_vault.cnf &\n\n... [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use\n --explicit_defaults_for_timestamp server option (see documentation for more details).\n... [Note] --secure-file-priv is set to NULL. Operations related to importing and\n exporting data are disabled\n... [Warning] WSREP: Node is not a cluster node. Disabling pxc_strict_mode\n... [Note] /dev/shm/pxc80/bin/mysqld (mysqld 8.0-debug) starting as process 5710 ...\n... [Note] Keyring migration successful.\n
On a successful migration, the destination keystore receives additional migrated keys (pre-existing keys in the destination keystore are not touched or removed). The source keystore retains the keys as the migration performs a copy operation and not a move operation.
If the migration fails, the destination keystore is unchanged.
"},{"location":"data-at-rest-encryption.html#online-migration","title":"Online migration","text":"In online migration, the node to migrate is kept running, and the migration server takes care of migrating keys for the said server to a new keystore by connecting to the node.
For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file
. Migrate the n3 node to use keyring_vault
using the following procedure:
Start the Migration Server (mysqld
with a special option).
The Migration Server copies the keys from the n3 keyring file and adds them to the vault server.
Restart the n3 node with the vault parameter, and the keys are available.
/dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/keyring_vault3.cnf \\\n--keyring-migration-destination=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node3/keyring \\\n--keyring-migration-host=localhost \\\n--keyring-migration-user=root \\\n--keyring-migration-port=16300 \\\n--keyring-migration-password='' &\n
On a successful migration, the destination keystore receives the additional migrated keys. Any pre-existing keys in the destination keystore are unchanged. The source keystore retains the keys as the migration performs a copy operation and not a move operation.
If the migration fails, the destination keystore is not changed.
"},{"location":"data-at-rest-encryption.html#migration-server-options","title":"Migration server options","text":"--keyring-migration-source
: The source keyring plugin that manages the keys to be migrated.
--keyring-migration-destination
: The destination keyring plugin to which the migrated keys are to be copied.
Note
For offline migration, no additional key migration options are needed.
--keyring-migration-host
: The host where the running server is located. This host is always the local host.
--keyring-migration-user
, --keyring-migration-password
: The username and password for the account used to connect to the running server.
--keyring-migration-port
: Used for TCP/IP connections, the running server's port number used to connect.
--keyring-migration-socket
: Used for Unix socket file or Windows named pipe connections, the running server socket or named pipe used to connect.
Prerequisite for migration:
Make sure to pass required keyring options and other configuration parameters for the two keyring plugins. For example, if keyring_file
is one of the plugins, you must explicitly configure the keyring_file_data
system variable in the my.cnf file.
Other non-keyring options may be required as well. One way to specify these options is by using --defaults-file
to name an option file that contains the required options.
[mysqld]\nbasedir=/dev/shm/pxc80\ndatadir=/dev/shm/pxc80/copy_mig\nlog-error=/dev/shm/pxc80/logs/copy_mig.err\nsocket=/tmp/copy_mig.sock\nport=16400\n
See also
Encrypt traffic documentation
Percona Server for MySQL Documentation: Data-at-Rest Encryption https://www.percona.com/doc/percona-server/8.0/security/data-at-rest-encryption.html#data-at-rest-encryption
"},{"location":"data-at-rest-encryption.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"docker.html","title":"Running Percona XtraDB Cluster in a Docker Container","text":"Docker images of Percona XtraDB Cluster are hosted publicly on Docker Hub at https://hub.docker.com/r/percona/percona-xtradb-cluster/.
For more information about using Docker, see the Docker Docs. Make sure that you are using the latest version of Docker. The ones provided via apt
and yum
may be outdated and cause errors.
We gather Telemetry data in the Percona packages and Docker images.
Note
By default, Docker pulls the image from Docker Hub if the image is not available locally.
The image contains only the most essential binaries for Percona XtraDB Cluster to run. Some utilities included in a Percona Server for MySQL or MySQL installation might be missing from the Percona XtraDB Cluster Docker image.
The following procedure describes how to set up a simple 3-node cluster for evaluation and testing purposes. Do not use these instructions in a production environment because the MySQL certificates generated in this procedure are self-signed. For a production environment, you should generate and store the certificates to be used by Docker.
In this procedure, all of the nodes run Percona XtraDB Cluster 8.0 in separate containers on one host:
Create a ~/pxc-docker-test/config directory.
Create a custom.cnf file with the following contents, and place the file in the new directory:
[mysqld]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n\n[client]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/client-cert.pem\nssl-key = /cert/client-key.pem\n\n[sst]\nencrypt = 4\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n
Create a cert directory and generate self-signed SSL certificates on the host node:
$ mkdir -m 777 -p ~/pxc-docker-test/cert\n$ docker run --name pxc-cert --rm -v ~/pxc-docker-test/cert:/cert \\\npercona/percona-xtradb-cluster:8.0 mysql_ssl_rsa_setup -d /cert\n
Create a Docker network:
$ docker network create pxc-network\n
Bootstrap the cluster (create the first node):
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n --name=pxc-node1 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
Join the second node:
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n -e CLUSTER_JOIN=pxc-node1 \\\n --name=pxc-node2 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
Join the third node:
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n -e CLUSTER_JOIN=pxc-node1 \\\n --name=pxc-node3 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
To verify the cluster is available, do the following:
Access the MySQL client. For example, on the first node:
$ sudo docker exec -it pxc-node1 /usr/bin/mysql -uroot -ptest1234#\n
Expected output mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 12\n...\nYou are enforcing ssl connection via unix socket. Please consider\nswitching ssl off as it does not make connection via unix socket\nany more secure\n\nmysql>\n
View the wsrep status variables:
mysql> show status like 'wsrep%';\n
Expected output +------------------------------+-------------------------------------------------+\n| Variable_name | Value |\n+------------------------------+-------------------------------------------------+\n| wsrep_local_state_uuid | 625318e2-9e1c-11e7-9d07-aee70d98d8ac |\n...\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_incoming_addresses | 172.18.0.2:3306,172.18.0.3:3306,172.18.0.4:3306 |\n...\n| wsrep_cluster_conf_id | 3 |\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_state_uuid | 625318e2-9e1c-11e7-9d07-aee70d98d8ac |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+------------------------------+-------------------------------------------------+\n59 rows in set (0.02 sec)\n
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"encrypt-traffic.html","title":"Encrypt PXC traffic","text":"There are two kinds of traffic in Percona XtraDB Cluster:
Client-server traffic (the one between client applications and cluster nodes),
Replication traffic, that includes SST, IST, write-set replication, and various service messages.
Percona XtraDB Cluster supports encryption for all types of traffic. Replication traffic encryption can be configured either automatically or manually.
"},{"location":"encrypt-traffic.html#encrypt-client-server-communication","title":"Encrypt client-server communication","text":"Percona XtraDB Cluster uses the underlying MySQL encryption mechanism to secure communication between client applications and cluster nodes.
MySQL generates default key and certificate files and places them in the data directory. You can override auto-generated files with manually created ones, as described in the section Generate keys and certificates manually.
The auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes.
Specify the following settings in the my.cnf
configuration file for each node:
[mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n\n[client]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/client-cert.pem\nssl-key=/etc/mysql/certs/client-key.pem\n
After it is restarted, the node uses these files to encrypt communication with clients. MySQL clients require only the second part of the configuration to communicate with cluster nodes.
MySQL generates the default key and certificate files and places them in the data directory. You can either use them or generate new certificates. To generate new certificates, refer to the Generate keys and certificates manually section.
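After the restart, you can confirm from a client session that the connection is actually encrypted; this is a quick sketch, and the exact cipher value varies:
mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';\n
A non-empty value means the client connection is using SSL.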
"},{"location":"encrypt-traffic.html#encrypt-replication-traffic","title":"Encrypt replication traffic","text":"Replication traffic refers to the inter-node traffic which includes the SST traffic, IST traffic, and replication traffic.
The traffic of each type is transferred via a different channel, and so it is important to configure secure channels for all 3 variants to completely secure the replication traffic.
Percona XtraDB Cluster supports a single configuration option which helps to secure the complete replication traffic, and is often referred to as SSL automatic configuration. You can also configure the security of each channel by specifying independent parameters.
"},{"location":"encrypt-traffic.html#ssl-automatic-configuration","title":"SSL automatic configuration","text":"The automatic configuration of the SSL encryption needs a key and certificate files. MySQL generates a default key and certificate files and places them in the data directory.
Important
It is important that your cluster use the same SSL certificates on all nodes.
"},{"location":"encrypt-traffic.html#enable-pxc-encrypt-cluster-traffic","title":"Enablepxc-encrypt-cluster-traffic
","text":"Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic
variable that enables automatic configuration of SSL encryption, thereby encrypting SST, IST, and replication traffic.
By default, pxc-encrypt-cluster-traffic
is enabled, so a secured channel is used for replication. This variable is not dynamic, so it cannot be changed at runtime.
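You can confirm the current setting on a running node; note that the system variable name uses underscores:
mysql> SHOW VARIABLES LIKE 'pxc_encrypt_cluster_traffic';\n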
When enabled, pxc-encrypt-cluster-traffic
has the effect of applying the following settings: encrypt, ssl_key, ssl-ca, ssl-cert.
Setting pxc-encrypt-cluster-traffic=ON
has the effect of applying the following settings in the my.cnf
configuration file:
[mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n
For wsrep_provider_options
, only the mentioned options are affected (socket.ssl_key
, socket.ssl_cert
, and socket.ssl_ca
), the rest is not modified.
Important
Disabling pxc-encrypt-cluster-traffic
The default value of pxc-encrypt-cluster-traffic
helps improve the security of your system.
When pxc-encrypt-cluster-traffic
is not enabled, anyone with the access to your network can connect to any PXC node either as a client or as another node joining the cluster. This potentially lets them query your data or get a complete copy of it.
If you must disable pxc-encrypt-cluster-traffic
, you need to stop the cluster, set pxc-encrypt-cluster-traffic=OFF
in the [mysqld]
section of the configuration file on each node, and then restart the cluster.
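The relevant change on each node is a single line in the configuration file (shown here in isolation):
[mysqld]\npxc-encrypt-cluster-traffic=OFF\n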
The automatic configuration of the SSL encryption needs key and certificate files. MySQL generates default key and certificate files and places them in data directory. These auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes. Also you can override auto-generated files with manually created ones, as covered in Generate keys and certificates manually.
The necessary key and certificate files are first searched at the ssl-ca
, ssl-cert
, and ssl-key
options under [mysqld]
. If these options are not set, the data directory is searched for ca.pem
, server-cert.pem
, and server-key.pem
files.
Note
The [sst]
section is not searched.
If all three files are found, they are used to configure encryption. If any of the files is missing, a fatal error is generated.
"},{"location":"encrypt-traffic.html#ssl-manual-configuration","title":"SSL manual configuration","text":"If user wants to enable encryption for specific channel only or use different certificates or other mix-match, then user can opt for manual configuration. This helps to provide more flexibility to end-users.
To enable encryption manually, the location of the required key and certificate files should be specified in the Percona XtraDB Cluster configuration. If you do not have the necessary files, see Generate keys and certificates manually.
Note
Encryption settings are not dynamic. To enable it on a running cluster, you need to restart the entire cluster.
There are three aspects of Percona XtraDB Cluster operation, where you can enable encryption:
Encrypt SST traffic
This refers to SST traffic during full data copy from one cluster node (donor) to the joining node (joiner).
Encrypt replication traffic
Encrypt IST traffic
This refers to all internal Percona XtraDB Cluster communication, such as write-set replication, IST, and various service messages.
This refers to full data transfer that usually occurs when a new node (JOINER) joins the cluster and receives data from an existing node (DONOR).
For more information, see State snapshot transfer.
Note
If keyring_file
plugin is used, then SST encryption is mandatory: when copying encrypted data via SST, the keyring must be sent over with the files for decryption. In this case following options are to be set in my.cnf
on all nodes:
early-plugin-load=keyring_file.so\nkeyring-file-data=/path/to/keyring/file\n
The cluster will not work if keyring configuration across nodes is different.
The only available SST method is xtrabackup-v2
which uses Percona XtraBackup.
This is the only available SST method (the wsrep_sst_method
is always set to xtrabackup-v2
), which uses Percona XtraBackup to perform non-blocking transfer of files. For more information, see Percona XtraBackup SST Configuration.
Encryption mode for this method is selected using the encrypt
option:
encrypt=0
is the default value, meaning that encryption is disabled.
encrypt=4
enables encryption based on key and certificate files generated with OpenSSL. For more information, see Generating Keys and Certificates Manually.
To enable encryption for SST using XtraBackup, specify the location of the keys and certificate files in each node's configuration under [sst]
:
[sst]\nencrypt=4\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n
Note
SSL clients require DH parameters to be at least 1024 bits, due to the logjam vulnerability. However, versions of socat
earlier than 1.7.3 use 512-bit parameters. If a dhparams.pem
file of required length is not found during SST in the data directory, it is generated with 2048 bits, which can take several minutes. To avoid this delay, create the dhparams.pem
file manually and place it in the data directory before joining the node to the cluster:
$ openssl dhparam -out /path/to/datadir/dhparams.pem 2048\n
For more information, see this blog post.
"},{"location":"encrypt-traffic.html#encrypt-replicationist-traffic","title":"Encrypt replication/IST traffic","text":"Replication traffic refers to the following:
Write-set replication which is the main workload of Percona XtraDB Cluster (replicating transactions that execute on one node to all other nodes).
Incremental State Transfer (IST) which is copying only missing transactions from DONOR to JOINER node.
Service messages which ensure that all nodes are synchronized.
All this traffic is transferred via the same underlying communication channel (gcomm
). Securing this channel will ensure that IST traffic, write-set replication, and service messages are encrypted. (For IST, a separate channel is configured using the same configuration parameters, so 2 sections are described together).
To enable encryption for all these processes, define the paths to the key, certificate and certificate authority files using the following wsrep provider options:
socket.ssl_ca
socket.ssl_cert
socket.ssl_key
To set these options, use the wsrep_provider_options
variable in the configuration file:
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/ca.pem;socket.ssl_cert=/etc/mysql/certs/server-cert.pem;socket.ssl_key=/etc/mysql/certs/server-key.pem\"\n
Note
You must use the same key and certificate files on all nodes, preferably those used for Encrypt client-server communication.
See the Upgrade certificates section for how to upgrade existing certificates.
"},{"location":"encrypt-traffic.html#generate-keys-and-certificates-manually","title":"Generate keys and certificates manually","text":"As mentioned above, MySQL generates default key and certificate files and places them in the data directory. If you want to override these certificates, the following new sets of files can be generated:
Certificate Authority (CA) key and certificate to sign the server and client certificates.
Server key and certificate to secure database server activity and write-set replication traffic.
Client key and certificate to secure client communication traffic.
These files should be generated using OpenSSL.
Note
The Common Name
value used for the server and client keys and certificates must differ from that value used for the CA certificate.
The Certificate Authority is used to verify the signature on certificates.
Generate the CA key file:
$ openssl genrsa 2048 > ca-key.pem\n
Generate the CA certificate file:
$ openssl req -new -x509 -nodes -days 3600 \\\n -key ca-key.pem -out ca.pem\n
Generate the server key file:
$ openssl req -newkey rsa:2048 -days 3600 \\\n -nodes -keyout server-key.pem -out server-req.pem\n
Remove the passphrase:
$ openssl rsa -in server-key.pem -out server-key.pem\n
Generate the server certificate file:
$ openssl x509 -req -in server-req.pem -days 3600 \\\n -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n -out server-cert.pem\n
Generate the client key file:
$ openssl req -newkey rsa:2048 -days 3600 \\\n -nodes -keyout client-key.pem -out client-req.pem\n
Remove the passphrase:
$ openssl rsa -in client-key.pem -out client-key.pem\n
Generate the client certificate file:
$ openssl x509 -req -in client-req.pem -days 3600 \\\n -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n -out client-cert.pem\n
To verify that the server and client certificates are correctly signed by the CA certificate, run the following command:
$ openssl verify -CAfile ca.pem server-cert.pem client-cert.pem\n
If the verification is successful, you should see the following output:
server-cert.pem: OK\nclient-cert.pem: OK\n
"},{"location":"encrypt-traffic.html#failed-validation-caused-by-matching-cn","title":"Failed validation caused by matching CN","text":"Sometimes, an SSL configuration may fail if the certificate and the CA files contain the same .
To check if this is the case run openssl
command as follows and verify that the CN field differs for the Subject and Issuer lines.
$ openssl x509 -in server-cert.pem -text -noout\n
Incorrect values
Certificate:\nData:\nVersion: 1 (0x0)\nSerial Number: 1 (0x1)\nSignature Algorithm: sha256WithRSAEncryption\nIssuer: CN=www.percona.com, O=Database Performance., C=US\n...\nSubject: CN=www.percona.com, O=Database Performance., C=AU\n...\n
To obtain a more compact output run openssl
specifying -subject and -issuer parameters:
$ openssl x509 -in server-cert.pem -subject -issuer -noout\n
Expected output subject= /CN=www.percona.com/O=Database Performance./C=AU\nissuer= /CN=www.percona.com/O=Database Performance./C=US\n
"},{"location":"encrypt-traffic.html#deploy-keys-and-certificates","title":"Deploy keys and certificates","text":"Use a secure method (for example, scp
or sftp
) to send the key and certificate files to each node. Place them under the /etc/mysql/certs/
directory or similar location where you can find them later.
Note
Make sure that this directory is protected with proper permissions. Most likely, you only want to give read permissions to the user running mysqld
.
The following files are required:
ca.pem
)This file is used to verify signatures.
server-key.pem
and server-cert.pem
)These files are used to secure database server activity and write-set replication traffic.
client-key.pem
and client-cert.pem
)These files are required only if the node should act as a MySQL client. For example, if you are planning to perform SST using mysqldump
.
Note
Upgrade certificates subsection covers the details on upgrading certificates, if necessary.
"},{"location":"encrypt-traffic.html#upgrade-certificates","title":"Upgrade certificates","text":"The following procedure shows how to upgrade certificates used for securing replication traffic when there are two nodes in the cluster.
Restart the first node with the socket.ssl_ca
option set to a combination of the the old and new certificates in a single file.
For example, you can merge contents of old-ca.pem
and new-ca.pem
into upgrade-ca.pem
as follows:
$ cat old-ca.pem > upgrade-ca.pem && \\\ncat new-ca.pem >> upgrade-ca.pem\n
Set the wsrep_provider_options
variable as follows:
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/upgrade-ca.pem;socket.ssl_cert=/etc/mysql/certs/old-cert.pem;socket.ssl_key=/etc/mysql/certs/old-key.pem\"\n
Restart the second node with the socket.ssl_ca
, socket.ssl_cert
, and socket.ssl_key
options set to the corresponding new certificate files.
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/new-ca.pem;socket.ssl_cert=/etc/mysql/certs/new-cert.pem;socket.ssl_key=/etc/mysql/certs/new-key.pem\"\n
Restart the first node with the new certificate files, as in the previous step.
You can remove the old certificate files.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"failover.html","title":"Cluster failover","text":"Cluster membership is determined simply by which nodes are connected to the rest of the cluster; there is no configuration setting explicitly defining the list of all possible cluster nodes. Therefore, every time a node joins the cluster, the total size of the cluster is increased and when a node leaves (gracefully) the size is decreased.
The size of the cluster is used to determine the required votes to achieve quorum. A quorum vote is done when a node or nodes are suspected to no longer be part of the cluster (they do not respond). This no response timeout is the evs.suspect_timeout
setting in the wsrep_provider_options
(default 5 sec), and when a node goes down ungracefully, write operations will be blocked on the cluster for slightly longer than that timeout.
Once a node (or nodes) is determined to be disconnected, then the remaining nodes cast a quorum vote, and if the majority of nodes from before the disconnect are still still connected, then that partition remains up. In the case of a network partition, some nodes will be alive and active on each side of the network disconnect. In this case, only the quorum will continue. The partition(s) without quorum will change to non-primary state.
As a consequence, it\u2019s not possible to have safe automatic failover in a 2 node cluster, because failure of one node will cause the remaining node to become non-primary. Moreover, any cluster with an even number of nodes (say two nodes in two different switches) have some possibility of a split brain situation, when neither partition is able to retain quorum if connection between them is lost, and so they both become non-primary.
Therefore, for automatic failover, the rule of 3s is recommended. It applies at various levels of your infrastructure, depending on how far the cluster is spread out to avoid single points of failure. For example:
A cluster on a single switch should have 3 nodes
A cluster spanning switches should be spread evenly across at least 3 switches
A cluster spanning networks should span at least 3 networks
A cluster spanning data centers should span at least 3 data centers
These rules will prevent split brain situations and ensure automatic failover works correctly.
"},{"location":"failover.html#use-an-arbitrator","title":"Use an arbitrator","text":"If it is too expensive to add a third node, switch, network, or datacenter, you should use an arbitrator. An arbitrator is a voting member of the cluster that can receive and relay replication, but it does not persist any data, and runs its own daemon instead of mysqld
. Placing even a single arbitrator in a 3rd location can add split brain protection to a cluster that is spread across only two nodes/locations.
It is important to note that the rule of 3s applies only to automatic failover. In the event of a 2-node cluster (or in the event of some other outage that leaves a minority of nodes active), the failure of one node will cause the other to become non-primary and refuse operations. However, you can recover the node from non-primary state using the following command:
SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
This will tell the node (and all nodes still connected to its partition) that it can become a primary cluster. However, this is only safe to do when you are sure there is no other partition operating in primary as well, or else Percona XtraDB Cluster will allow those two partitions to diverge (and you will end up with two databases that are impossible to re-merge automatically).
For example, assume there are two data centers, where one is primary and one is for disaster recovery, with an even number of nodes in each. When an extra arbitrator node is run only in the primary data center, the following high availability features will be available:
Auto-failover of any single node or nodes within the primary or secondary data center
Failure of the secondary data center would not cause the primary to go down (because of the arbitrator)
Failure of the primary data center would leave the secondary in a non-primary state.
If a disaster-recovery failover has been executed, you can tell the secondary data center to bootstrap itself with a single command, but disaster-recovery failover remains in your control.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"faq.html","title":"Frequently asked questions","text":""},{"location":"faq.html#how-do-i-report-bugs","title":"How do I report bugs?","text":"All bugs can be reported on JIRA. Please submit error.log
files from all the nodes.
For auto-increment, Percona XtraDB Cluster changes auto_increment_offset
for each new node. In a single-node workload, locking is handled in the same way as InnoDB. In case of write load on several nodes, Percona XtraDB Cluster uses optimistic locking and the application may receive lock error in response to COMMIT
query.
When a node crashes, after restarting, it will copy the whole dataset from another node (if there were changes to data since the crash).
"},{"location":"faq.html#how-can-i-check-the-galera-node-health","title":"How can I check the Galera node health?","text":"To check the health of a Galera node, use the following query:
mysql> SELECT 1 FROM dual;\n
The following results of the previous query are possible:
You get the row with id=1
(node is healthy)
Unknown error (node is online, but Galera is not connected/synced with the cluster)
Connection error (node is not online)
You can also check a node\u2019s health with the clustercheck
script. First set up the clustercheck
user:
mysql> CREATE USER 'clustercheck'@'localhost' IDENTIFIED WITH mysql_native_password\nAS '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
mysql> GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';\n
You can then check a node\u2019s health by running the clustercheck
script:
$ /usr/bin/clustercheck clustercheck password 0\n
If the node is running, you should get the following status:
HTTP/1.1 200 OK\nContent-Type: text/plain\nConnection: close\nContent-Length: 40\n\nPercona XtraDB Cluster Node is synced.\n
If the node is not synced or is offline, the status will look like this:
HTTP/1.1 503 Service Unavailable\nContent-Type: text/plain\nConnection: close\nContent-Length: 44\n\nPercona XtraDB Cluster Node is not synced.\n
Note
The clustercheck
script has the following syntax:
<user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
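For example, if you expose the script through xinetd (an assumption; any service wrapper works), the server_args line above goes into a service definition similar to this sketch, with the port and user adjusted to your environment:
service mysqlchk\n{\n    disable         = no\n    flags           = REUSE\n    socket_type     = stream\n    port            = 9200\n    wait            = no\n    user            = nobody\n    server          = /usr/bin/clustercheck\n    server_args     = user pass 1 /var/log/log-file 0 /etc/my.cnf.local\n    log_on_failure  += USERID\n    only_from       = 0.0.0.0/0\n    per_source      = UNLIMITED\n}\n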
Percona XtraDB Cluster populates write set in memory before replication, and this sets the limit for the size of transactions that make sense. There are wsrep variables for maximum row count and maximum size of write set to make sure that the server does not run out of memory.
"},{"location":"faq.html#is-it-possible-to-have-different-table-structures-on-the-nodes","title":"Is it possible to have different table structures on the nodes?","text":"For example, if there are four nodes, with four tables: sessions_a
, sessions_b
, sessions_c
, and sessions_d
, and you want each table in a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.
The quorum mechanism in\u00a0Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.
The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the sum weight of the nodes in a new component strictly exceeds half that of the preceding Primary Component, minus the nodes which left gracefully.
The mechanism is described in detail in Galera documentation.
"},{"location":"faq.html#how-would-the-quorum-mechanism-handle-split-brain","title":"How would the quorum mechanism handle split brain?","text":"The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve a split brain. The minimal recommendation is to have 3 nodes. However, it is possibile to allow a node to handle traffic with the following option:
wsrep_provider_options=\"pc.ignore_sb = yes\"\n
"},{"location":"faq.html#why-a-node-stops-accepting-commands-if-the-other-one-fails-in-a-2-node-setup","title":"Why a node stops accepting commands if the other one fails in a 2-node setup?","text":"This is expected behavior to prevent split brain. For more information, see previous question or Galera documentation.
"},{"location":"faq.html#is-it-possible-to-set-up-a-cluster-without-state-transfer","title":"Is it possible to set up a cluster without state transfer?","text":"It is possible in two ways:
By default, Galera reads starting position from a text file <datadir>/grastate.dat
. Make this file identical on all nodes, and there will be no state transfer after starting a node.
Use the wsrep_start_position
variable to start the nodes with the same UUID:seqno
value.
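For example, in the [mysqld] section of the configuration file; the UUID and seqno shown are illustrative, reusing the values from the grastate.dat example earlier in this guide:
wsrep_start_position=220dcdcb-1629-11e4-add3-aec059ad3734:1122\n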
You may need to open up to four ports if you are using a firewall:
Regular MySQL port (default is 3306).
Port for group communication (default is 4567). It can be changed using the following option:
wsrep_provider_options =\"gmcast.listen_addr=tcp://0.0.0.0:4010; \"\n
Port for State Snaphot Transfer (default is 4444). It can be changed using the following option:
wsrep_sst_receive_address=10.11.12.205:5555\n
Port for Incremental State Transfer (default is port for group communication + 1 or 4568). It can be changed using the following option:
wsrep_provider_options = \"ist.recv_addr=10.11.12.206:7777; \"\n
Percona XtraDB Cluster does not support "async" mode; all commits are synchronous on all nodes. To be precise, the commits are "virtually" synchronous, which means that a transaction must pass certification on all nodes, not physically commit on them. Certification means a guarantee that the transaction does not conflict with other transactions on the corresponding node.
"},{"location":"faq.html#does-it-work-with-regular-mysql-replication","title":"Does it work with regular MySQL replication?","text":"Yes. On the node you are going to use as source, you should enable log-bin
and log-slave-updates
options.
Try to disable SELinux with the following command:
$ echo 0 > /selinux/enforce\n
"},{"location":"faq.html#what-does-nc-invalid-option-d-in-the-ssterr-log-file-mean","title":"What does \u201cnc: invalid option \u2013 \u2018d\u2019\u201d in the sst.err log file mean?","text":"This error is specific to Debian and Ubuntu. Percona XtraDB Cluster uses netcat-openbsd
package. This dependency has been fixed. Future releases of Percona XtraDB Cluster will be compatible with any netcat
(see bug PXC-941).
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"garbd-howto.html","title":"Set up Galera arbitrator","text":"The size of a cluster increases when a node joins the cluster and decreases when a node leaves. A cluster reacts to replication problems with inconsistency voting. The size of the cluster determines the required votes to achieve a quorum. If a node no longer responds and is disconnected from the cluster the remaining nodes vote. The majority of the nodes that vote are considered to be in the cluster.
The arbitrator is important if you have an even number of nodes remaining in the cluster. The arbitrator keeps the number of nodes as an odd number, which avoids the split-brain situation.
A Galera Arbitrator is a lightweight member of a Percona XtraDB Cluster. This member can vote but does not do any replication and is not included in flow control calculations. The Galera Arbitrator is a separate daemon called garbd
. You can start this daemon separately from the cluster and run this daemon either as a service or from the shell. You cannot configure this daemon using the my.cnf
file.
Note
For more information on how to set up a cluster you can read in the Configuring Percona XtraDB Cluster on Ubuntu or Configuring Percona XtraDB Cluster on CentOS manuals.
"},{"location":"garbd-howto.html#installation","title":"Installation","text":"Galera Arbitrator does not need a dedicated server and can be installed on a machine running other applications. The server must have good network connectivity.
Galera Arbitrator can be installed from Percona\u2019s repository on Debian/Ubuntu distributions with the following command:
root@ubuntu:~# apt install percona-xtradb-cluster-garbd\n
Galera Arbitrator can be installed from Percona\u2019s repository on RedHat or derivative distributions with the following command:
[root@centos ~]# yum install percona-xtradb-cluster-garbd\n
"},{"location":"garbd-howto.html#start-garbd-and-configuration","title":"Start garbd
and configuration","text":"Note
On Percona XtraDB Cluster 8.0, SSL is enabled by default. To run the Galera Arbitrator, you must copy the SSL certificates and configure garbd
to use the certificates.
It is necessary to specify the cipher. In this example, it is AES128-SHA256
. If you do not specify the cipher, an error occurs with a \u201cTerminate called after throwing an instance of \u2018gnu::NotSet\u2019\u201d message.
For more information, see socket.ssl_cipher
When starting from the shell, you can set the parameters from the command line or edit the configuration file. This is an example of starting from the command line:
$ garbd --group=my_ubuntu_cluster \\\n--address=\"gcomm://192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\" \\\n--option=\"socket.ssl=YES; socket.ssl_key=/etc/ssl/mysql/server-key.pem; \\\nsocket.ssl_cert=/etc/ssl/mysql/server-cert.pem; \\\nsocket.ssl_ca=/etc/ssl/mysql/ca.pem; \\\nsocket.ssl_cipher=AES128-SHA256\"\n
To avoid entering the options each time you start garbd
, edit the options in the configuration file. To configure Galera Arbitrator on Ubuntu/Debian, edit the /etc/default/garb
file. On RedHat or derivative distributions, the configuration can be found in /etc/sysconfig/garb
file.
The configuration file should look like this after the installation and before you have added your parameters:
# Copyright (C) 2013-2015 Codership Oy\n# This config file is to be sourced by garb service script.\n\n# REMOVE THIS AFTER CONFIGURATION\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\n# GALERA_NODES=\"\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\n# GALERA_GROUP=\"\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"\"\n
Add the parameter information about the cluster. For this document, we use the cluster information from Configuring Percona XtraDB Cluster on Ubuntu.
Note
Please note that you need to remove the # REMOVE THIS AFTER CONFIGURATION
line before you can start the service.
# This config file is to be sourced by garb service script.\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\nGALERA_NODES=\"192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\nGALERA_GROUP=\"my_ubuntu_cluster\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"socket.ssl_cert=/etc/ssl/mysql/server-key.pem;socket./etc/ssl/mysql/server-key.pem\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"/var/log/garbd.log\"\n
You can now start the Galera Arbitrator daemon (garbd
) by running:
root@server:~# service garbd start\n
Expected output [ ok ] Starting /usr/bin/garbd: :.\n
Note
On systems that run systemd
as the default system and service manager, use systemctl
instead of service
to invoke the command. Currently, both are supported.
root@server:~# systemctl start garb\n
root@server:~# service garb start\n
Expected output [ ok ] Starting /usr/bin/garbd: :.\n
Additionally, you can check the arbitrator
status by running:
root@server:~# service garbd status\n
Expected output [ ok ] garb is running.\n
root@server:~# service garb status\n
Expected output [ ok ] garb is running.\n
"},{"location":"garbd-howto.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"gcache-record-set-cache-difference.html","title":"Understand GCache and Record-Set cache","text":"In Percona XtraDB Cluster, there is a concept of GCache and Record-Set cache (which can also be called transaction write-set cache). The use of these two caches is often confusing if you are running long transactions, because both of them result in the creation of disk-level files. This manual describes what their main differences are.
"},{"location":"gcache-record-set-cache-difference.html#record-set-cache","title":"Record-Set cache","text":"When you run a long-running transaction on any particular node, it will try to append a key for each row that it tries to modify (the key is a unique identifier for the row {db,table,pk.columns}
). This information is cached in out-write-set, which is then sent to the group for certification.
Keys are cached in HeapStore (which has page-size=64K
and total-size=4MB
). If the transaction data-size outgrows this limit, then the storage is switched from Heap to Page (which has page-size=64MB
and total-limit=free-space-on-disk
).
All these limits are non-configurable, but having a memory-page size greater than 4MB per transaction can cause things to stall due to memory pressure, so this limit is reasonable. This is another limitation to address when Galera supports large transaction.
The same long-running transaction will also generate binlog data that also appends to out-write-set on commit (HeapStore->FileStore
). This data can be significant, as it is a binlog image of rows inserted/updated/deleted by the transaction. The wsrep_max_ws_size
variable controls the size of this part of the write-set. The threshold doesn\u2019t consider size allocated for caching-keys and the header.
If FileStore
is used, it creates a file on the disk (with names like xxxx_keys
and xxxx_data
) to store the cache data. These files are kept until a transaction is committed, so the lifetime of the transaction is linked.
When the node is done with the transaction and is about to commit, it will generate the final-write-set using the two files (if the data size grew enough to use FileStore
) plus HEADER
, and will publish it for certification to cluster.
The native node executing the transaction will also act as subscription node, and will receive its own write-set through the cluster publish mechanism. This time, the native node will try to cache write-set into its GCache. How much data GCache retains is controlled by the GCache configuration.
"},{"location":"gcache-record-set-cache-difference.html#gcache","title":"GCache","text":"GCache holds the write-set published on the cluster for replication. The lifetime of write-set in GCache is not transaction-linked.
When a JOINER
node needs an IST, it will be serviced through this GCache (if possible).
GCache will also create the files to disk. You can read more about it here.
At any given point in time, the native node has two copies of the write-set: one in GCache and another in Record-Set Cache.
For example, lets say you INSERT/UPDATE
2 million rows in a table with the following schema.
(int, char(100), char(100) with pk (int, char(100))\n
It will create write-set key/data files in the background similar to the following:
-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000000\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000001\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000002\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_keys.000000\n
"},{"location":"gcache-record-set-cache-difference.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"gcache-write-set-cache-encryption.html","title":"GCache encryption and Write-Set cache encryption","text":"These features are tech preview. Before using these features in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.
"},{"location":"gcache-write-set-cache-encryption.html#gcache-and-write-set-cache-encryption","title":"GCache and Write-Set cache encryption","text":"Enabling this feature encrypts the Galera GCache and Write-Set cache files with a File Key.
GCache has a RingBuffer on-disk file to manage write-sets. The keyring only stores the Master Key which is used to encrypt the File Key used by the RingBuffer file. The encrypted File Key is stored in the RingBuffer\u2019s preamble. The RingBuffer file of GCache is non-volatile, which means this file survives a restart. The File Key is not stored for GCache off-pages and Write-Set cache files.
See also
For more information, see Understanding GCache and Record-set Cache, and the Percona Database Performance Blog: All you need to know about GCache
Sample preamble key-value pairsVersion: 2\nGID: 3afaa71d-6665-11ed-98de-2aba4aabc65e\nsynced: 0\nenc_version: 1\nenc_encrypted: 1\nenc_mk_id: 3\nenc_mk_const_id: 3ad045a2-6665-11ed-a49d-cb7b9d88753f\nenc_mk_uuid: 3ad04c8e-6665-11ed-a947-c7e346da147f\nenc_fk_id: S4hRiibUje4v5GSQ7a+uuS6NBBX9+230nsPHeAXH43k=\nenc_crc: 279433530\n
"},{"location":"gcache-write-set-cache-encryption.html#key-descriptions","title":"Key descriptions","text":"The following table describes the encryption keys defined in the preamble. All other keys in the preamble are not related to encryption.
Key Descriptionenc_version
The encryption version enc_encrypted
If the GCache is encrypted or not enc_mk_id
A part of the Master Key ID. Rotating the Master Key increments the sequence number. enc_mk_const_id
A part of the Master Key ID, a constant Universally unique identifier (UUID). This option remains constant for the duration of the galera.gcache
file and simplifies matching the Master Key inside the keyring to the instance that generated the keys. Deleting the galera.gcache
changes the value of this key. enc_mk_uuid
The UUID generated for the first Master Key, or generated again when Galera detects an inconsistent preamble, which causes a full GCache reset and requires a new Master Key. enc_fk_id
The File Key ID encrypted with the Master Key. enc_crc
The cyclic redundancy check (CRC) calculated from all encryption-related keys."},{"location":"gcache-write-set-cache-encryption.html#controlling-encryption","title":"Controlling encryption","text":"Encryption is controlled using the wsrep_provider_options.
Variable name Default value Allowed valuesgcache.encryption
off on/off gcache.encryption_cache_page_size
32KB 2-512 gcache.encryption_cache_size
16MB 2 - 512 allocator.disk_pages_encryption
off on/off allocator.encryption_cache_page_size
32KB allocator.encryption_cache_size
16MB"},{"location":"gcache-write-set-cache-encryption.html#rotate-the-gcache-master-key","title":"Rotate the GCache Master Key","text":"GCache and Write-Set cache encryption uses either a keyring plugin or a keyring component. This plugin or component must be loaded.
Store the keyring file outside the data directory when using a keyring plugin or a keyring component.
mysql> ALTER INSTANCE ROTATE GCACHE MASTER KEY;\n
"},{"location":"gcache-write-set-cache-encryption.html#variable-descriptions","title":"Variable descriptions","text":""},{"location":"gcache-write-set-cache-encryption.html#gcache-encryption","title":"GCache encryption","text":"The following sections describe the variables related to GCache encryption. All variables are read-only.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption","title":"gcache.encryption","text":"Enable or disable GCache cache encryption.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_page_size","title":"gcache.encryption_cache_page_size","text":"The size of the GCache encryption page. The value must be multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_size","title":"gcache.encryption_cache_size","text":"Every encrypted file has an encryption.cache, which consists of pages. Use gcache.encryption_cache_size
to configure the encryption.cache size.
Configure the page size in the cache with gcache.encryption_cache_page_size
.
The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x gcache.encryption_cache_page_size.
The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.
"},{"location":"gcache-write-set-cache-encryption.html#write-set-cache-encryption","title":"Write-Set cache encryption","text":"The following sections describe the variables related to Write-Set cache encryption. All variables are read-only.
"},{"location":"gcache-write-set-cache-encryption.html#allocatordisk_pages_encryption","title":"allocator.disk_pages_encryption","text":"Enable or disable the Write-Set cache encryption.
"},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_page_size","title":"allocator.encryption_cache_page_size","text":"The size of the encryption cache for Write-Set pages. The value must be multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
"},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_size","title":"allocator.encryption_cache_size","text":"Every Write-Set encrypted file has an encryption.cache, which consists of pages. Use allocator.encryption_cache_size
to configure the size of the encryption.cache
.
Configure the page size in the cache with allocator.encryption_cache_page_size
.
The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x allocator.encryption_cache_page_size.
The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.
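All of the options above are Galera provider options, so they are passed together as a semicolon-separated string in wsrep_provider_options. A minimal my.cnf sketch using the default sizes listed above (it assumes a keyring plugin or component is already loaded; adjust the values for your workload):
[mysqld]\n# Sketch only: enables GCache and Write-Set cache encryption with the documented defaults\nwsrep_provider_options="gcache.encryption=on; gcache.encryption_cache_size=16M; gcache.encryption_cache_page_size=32K; allocator.disk_pages_encryption=on; allocator.encryption_cache_size=16M; allocator.encryption_cache_page_size=32K"\n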
"},{"location":"gcache-write-set-cache-encryption.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"get-started-cluster.html","title":"Get started with Percona XtraDB Cluster","text":"This guide describes the procedure for setting up Percona XtraDB Cluster.
Examples provided in this guide assume there are three Percona XtraDB Cluster nodes, as a common choice for trying out and testing:
Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63Note
Avoid creating a cluster with two or any even number of nodes, because this can lead to split brain.
The following procedure provides an overview with links to details for every step:
It is recommended to install from official Percona repositories:
On Red Hat and CentOS, install using YUM.
On Debian and Ubuntu, install using APT.
Configure all nodes with relevant settings required for write-set replication.
This includes path to the Galera library, location of other nodes, etc.
This must be the node with your main database, which will be used as the data source for the cluster.
Data on new nodes joining the cluster is overwritten in order to synchronize it with the cluster.
Although cluster initialization and node provisioning is performed automatically, it is a good idea to ensure that changes on one node actually replicate to other nodes.
To complete the deployment of the cluster, a high-availability proxy is required. We recommend installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries.
"},{"location":"get-started-cluster.html#percona-monitoring-and-management","title":"Percona Monitoring and Management","text":"Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.
"},{"location":"get-started-cluster.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#frm","title":".frm","text":"For each table, the server will create a file with the .frm
extension containing the table definition (for all storage engines).
An acronym for Atomicity
, Consistency
, Isolation
, Durability
.
Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.
"},{"location":"glossary.html#atomicity","title":"Atomicity","text":"This property guarantees that all updates of a transaction occur in the database or no updates occur. This guarantee also applies with a server exit. If a transaction fails, the entire operation rolls back.
"},{"location":"glossary.html#cluster-replication","title":"Cluster replication","text":"Normal replication path for cluster members.\u00a0Can be encrypted (not by default) and unicast or multicast (unicast by default). Runs on tcp port 4567 by default.
"},{"location":"glossary.html#consistency","title":"Consistency","text":"This property guarantees that each transaction that modifies the database takes it from one consistent state to another. Consistency is implied with Isolation.
"},{"location":"glossary.html#datadir","title":"datadir","text":"The directory in which the database server stores its databases. Most Linux distribution use /var/lib/mysql
by default.
The node elected to provide a state transfer (SST or IST).
"},{"location":"glossary.html#durability","title":"Durability","text":"Once a transaction is committed, it will remain so and is resistant to a server exit.
"},{"location":"glossary.html#foreign-key","title":"Foreign Key","text":"A referential constraint between two tables. Example: A purchase order in the purchase_orders table must have been made by a customer that exists in the customers table.
"},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.
"},{"location":"glossary.html#gtid","title":"GTID","text":"Global Transaction ID, in Percona XtraDB Cluster it consists of UUID
and an ordinal sequence number which denotes the position of the change in the sequence.
HAProxy
is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with todays hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the net.
Default prefix for tablespace files, e.g., ibdata1
is a 10MB auto-extendable file that MySQL creates for the shared tablespace by default.
The Isolation guarantee means that no transaction can interfere with another. When transactions access data in a session, they also lock that data to prevent other operations on that data by other transaction.
"},{"location":"glossary.html#ist","title":"IST","text":"Incremental State Transfer. Functionality which instead of whole state snapshot can catch up with the group by receiving the missing writesets, but only if the writeset is still in the donor\u2019s writeset cache.
"},{"location":"glossary.html#innodb","title":"InnoDB","text":"Storage Engine
for MySQL and derivatives (Percona Server
, MariaDB
) originally written by Innobase Oy, since acquired by Oracle. It provides ACID
compliant storage engine with foreign key
support. InnoDB is the default storage engine on all platforms.
Jenkins is a continuous integration system that we use to help ensure the continued quality of the software we produce. It helps us achieve the aims of: * no failed tests in trunk on any platform * aid developers in ensuring merge requests build and test on all platforms * no known performance regressions (without a damn good explanation)
"},{"location":"glossary.html#joiner-node","title":"joiner node","text":"The node joining the cluster, usually a state transfer target.
"},{"location":"glossary.html#lsn","title":"LSN","text":"Log Serial Number. A term used in relation to the InnoDB
or XtraDB
storage engines. There are System-level LSNs and Page-level LSNs. The System LSN represents the most recent LSN value assigned to page changes. Each InnoDB page contains a Page LSN which is the max LSN for that page for changes that reside on the disk. This LSN is updated when the page is flushed to disk.
A fork of MySQL
that is maintained primarily by Monty Program AB. It aims to add features, fix bugs while maintaining 100% backwards compatibility with MySQL.
This file refers to the database server\u2019s main configuration file. Most Linux distributions place it as /etc/mysql/my.cnf
or /etc/my.cnf
, but the location and name depends on the particular installation. Note that this is not the only way of configuring the server, some systems does not have one even and rely on the command options to start the server and its defaults values.
A MySQL
Storage Engine
that was the default until MySQL 5.5. It doesn\u2019t fully support transactions but in some scenarios may be faster than InnoDB
. Each table is stored on disk in 3 files: .frm
,i .MYD
, .MYI
.
An open source database that has spawned several distributions and forks. MySQL AB was the primary maintainer and distributor until bought by Sun Microsystems, which was then acquired by Oracle. As Oracle owns the MySQL trademark, the term MySQL is often used for the Oracle distribution of MySQL as distinct from the drop-in replacements such as MariaDB
and Percona Server
.
This user is used by the SST process to run the SQL commands needed for SST
, such as creating the mysql.pxc.sst.user
and assigning it the role mysql.pxc.sst.role
.
This role has all the privileges needed to run xtrabackup to create a backup on the donor node.
"},{"location":"glossary.html#mysqlpxcsstuser","title":"mysql.pxc.sst.user","text":"This user (set up on the donor node) is assigned the mysql.pxc.sst.role
and runs the XtraBackup to make backups. The password for this is randomly generated for each SST. The password is generated automatically for each SST
.
A cluster node \u2013 a single mysql instance that is in the cluster.
"},{"location":"glossary.html#numa","title":"NUMA","text":"Non-Uniform Memory Access (NUMA
) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The whole system may still operate as one unit, and all memory is basically accessible from everywhere, but at a potentially higher latency and lower performance.
Percona\u2019s branch of MySQL
with performance and management improvements.
Percona XtraDB Cluster (PXC) is a high availability solution for MySQL.
"},{"location":"glossary.html#primary-cluster","title":"primary cluster","text":"A cluster with quorum.\u00a0A non-primary cluster will not allow any operations and will give Unknown command
errors on any clients attempting to read or write from the database.
A majority (> 50%) of nodes.\u00a0In the event of a network partition, only the cluster partition that retains a quorum (if any) will remain Primary by default.
"},{"location":"glossary.html#split-brain","title":"split brain","text":"Split brain occurs when two parts of a computer cluster are disconnected, each part believing that the other is no longer running. This problem can lead to data inconsistency.
"},{"location":"glossary.html#sst","title":"SST","text":"State Snapshot Transfer is the full copy of data from one node to another. It\u2019s used when a new node joins the cluster, it has to transfer data from an existing node. Percona XtraDB Cluster: uses the xtrabackup
program for this purpose. xtrabackup
does not require READ LOCK
for the entire syncing process - only for syncing the MySQL system tables and writing the information about the binlog, galera and replica information (same as the regular Percona XtraBackup backup).
The SST method is configured with the wsrep_sst_method
variable.
In PXC 8.0, the mysql-upgrade command is now run automatically as part of SST
. You do not have to run it manually when upgrading your system from an older version.
A Storage Engine
is a piece of software that implements the details of data storage and retrieval for a database system. This term is primarily used within the MySQL
ecosystem due to it being the first widely used relational database to have an abstraction layer around storage. It is analogous to a Virtual File System layer in an Operating System. A VFS layer allows an operating system to read and write multiple file systems (for example, FAT, NTFS, XFS, ext3) and a Storage Engine layer allows a database server to access tables stored in different engines (e.g. MyISAM
, InnoDB).
A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not included in support by SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability(GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.
"},{"location":"glossary.html#uuid","title":"UUID","text":"Universally Unique IDentifier which uniquely identifies the state and the sequence of changes node undergoes. 128-bit UUID is a classic DCE UUID Version 1 (based on current time and MAC address). Although in theory this UUID could be generated based on the real MAC-address, in the Galera it is always (without exception) based on the generated pseudo-random addresses (\u201clocally administered\u201d bit in the node address (in the UUID structure) is always equal to unity).
"},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"haproxy-config.html","title":"HAProxy configuration file","text":""},{"location":"haproxy-config.html#example-of-haproxy-v1-configuration-file","title":"Example of HAProxy v1 configuration file","text":"HAProxy v1 configuration fileglobal\n log 127.0.0.1 local0\n log 127.0.0.1 local1 notice\n maxconn 4096\n uid 99\n gid 99\n daemon\n #debug\n #quiet\n\ndefaults\n log global\n mode http\n option tcplog\n option dontlognull\n retries 3\n redispatch\n maxconn 2000\n contimeout 5000\n clitimeout 50000\n srvtimeout 50000\n timeout connect 160000\n timeout client 240000\n timeout server 240000\n\nlisten mysql-cluster 0.0.0.0:3306\n mode tcp\n balance roundrobin\n option mysql-check user root\n\n server db01 10.4.29.100:3306 check\n server db02 10.4.29.99:3306 check\n server db03 10.4.29.98:3306 check\n
Options set in the configuration file
"},{"location":"haproxy-config.html#differences-between-version-1-configuration-file-and-version-2-configuration-file","title":"Differences between version 1 configuration file and version 2 configuration file","text":""},{"location":"haproxy-config.html#version-declaration","title":"Version Declaration:","text":"v1: The configuration file typically omits an explicit version declaration.
v2: You must explicitly declare the version using the version keyword followed by the specific version number (e.g., version = 2.0).
"},{"location":"haproxy-config.html#global-parameters","title":"Global Parameters:","text":"v1 and v2: Both versions utilize a global section to define global parameters, but certain parameters might have different names or functionalities across versions. Refer to the official documentation for specific changes.
"},{"location":"haproxy-config.html#configuration-blocks","title":"Configuration Blocks:","text":"v1 and v2: Both versions use a similar indentation-based structure to define configuration blocks like frontend and backend. However, v2 introduces new blocks and keywords not present in v1 (e.g., process, http-errors).
"},{"location":"haproxy-config.html#directives","title":"Directives:","text":"v1 and v2: While many directives remain consistent, some might have renamed keywords, altered syntax, or entirely new functionalities in v2. Consult the official documentation for a comprehensive comparison of directives and their usage between versions.
"},{"location":"haproxy-config.html#comments","title":"Comments:","text":"v1 and v2: Both versions support comments using the # symbol. However, v2 introduces multi-line comments using / \u2026 / syntax, which v1 does not support.
"},{"location":"haproxy-config.html#version-2-configuration-file","title":"Version 2 configuration file","text":"This simplified example is for load balancing. HAProxy offers numerous features for advanced configurations and fine-tuning.
This example demonstrates a basic HAProxy v2 configuration file for load-balancing HTTP traffic across two backend servers.
"},{"location":"haproxy-config.html#global-section","title":"Global Section","text":"The following settings are defined in the Global section:
The maximum number of concurrent connections allowed by HAProxy.
The user and group under which HAProxy should run.
A UNIX socket for accessing HAProxy statistics.
In the defaults
block, we set the operating mode to TCP and define option tcpka
global\n maxconn 4000 # Maximum concurrent connections (adjust as needed)\n user haproxy # User to run HAProxy process\n group haproxy # Group to run HAProxy process\n stats socket /var/run/haproxy.sock mode 666 level admin\n\ndefaults\n mode tcp # Set operating mode to TCP\n #option tcpka\n
"},{"location":"haproxy-config.html#frontend-section","title":"Frontend Section","text":"The following settings are defined in this section:
Create a frontend named \u201cwebserver\u201d that listens on port 80 for incoming HTTP requests.
Enable the httpclose
option to terminate idle client connections efficiently.
Specify the default backend for this frontend.
frontend gr-prod-rw\n bind 0.0.0.0:3307 \n mode tcp\n option contstats\n option dontlognull\n option clitcpka\n default_backend gr-prod-rw\n
You should add the following options:
option Descriptioncontstats
Provides continuous updates to the statistics of your connections. This option ensures that your traffic counters are updated in real-time, rather than only after a connection closes, giving you a more accurate and immediate view of your traffic patterns. dontlognull
Does not log requests that don\u2019t transfer any data, like health check pings. clitcpka
Configures TCP keepalive settings for client connections. This option allows the operating system to detect and terminate inactive connections, even if HAProxy isn\u2019t actively checking them."},{"location":"haproxy-config.html#backend-section","title":"Backend Section","text":"In this section, you specify the backend servers that will handle requests forwarded by the frontend. List each server with their respective IP addresses, ports, and weights.
You set up a health check with check inter 10000
. This option means that HAProxy performs a health check on each server every 10,000 milliseconds or 10 seconds. If a server fails a health check, it is temporarily removed from the pool until it passes subsequent checks, ensuring smooth and reliable client service. This proactive monitoring is crucial for maintaining an efficient and uninterrupted backend service.
Set the number of retries to put the service down and up. For example, you set the rise
parameter to 1
, which means the server only needs to pass one health check before the server is considered healthy again. The fall
parameter is set to 2
, requiring two consecutive failed health checks before the server is marked as unhealthy.
The weight 50 backup
setting is crucial for load balancing; this setting determines that this server only receives traffic if the primary servers are down. The weight of 50 indicates the relative amount of traffic the server will handle compared to other servers in the backup role. This method ensures the server can handle a significant load even in backup mode, but not as much as a primary server.
The following example lists these options. Replace the server details (IP addresses, ports) with your backend server information. Adjust weights and other options according to your specific needs and server capabilities.
backend servers\n server server1 10.0.68.39:3307 check inter 10000 rise 1 fall 2 weight 50\n server server1 10.0.68.74:3307 check inter 10000 rise 1 fall 2 weight 50 backup\n server server1 10.0.68.20:3307 check inter 10000 rise 1 fall 2 weight 1 backup\n
More information about how to configure HAProxy
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"haproxy.html","title":"Load balancing with HAProxy","text":"The free and open source software, HAProxy, provides a high-availability load balancer and reverse proxy for TCP and HTTP-based applications. HAProxy can distribute requests across multiple servers, ensuring optimal performance and security.
Here are the benefits of using HAProxy:
HAProxy supports layer 4 (TCP) and layer 7 (HTTP) load balancing, which means it can handle different network traffic and protocols. HAProxy requires patched backends to tunnel IP traffic in layer 4 load-balancing tunnel mode. This mode also disables some layer 7 advanced features.
HAProxy has rich features, such as URL rewriting, SSL/TLS termination, gzip compression, caching, observability, health checks, retries, circuit breakers, WebSocket, HTTP/2 and HTTP/3 support, and more.
HAProxy has a reputation for being fast and efficient in terms of processor and memory usage. The software is written in C and has an event-driven and multithreaded architecture.
HAProxy has a user-friendly status page that shows detailed information about the load balancer and the backends. The software also integrates well with third-party monitoring tools and services.
HAProxy supports session retention and cookie guidance, which can help with sticky sessions and affinity.
Access the server as a user with administrative privileges, either root
or use sudo.
Create a Dedicated HAProxy user account for HAProxy to interact with your MySQL instance. This account enhances security.
Make the following changes to the example CREATE USER
command to replace the placeholders:
Replace haproxy_user with your preferred username.
Substitute haproxy_server_ip
with the actual IP address of your HAProxy server.
Choose a robust password for the \u2018strong_password\u2019.
Execute the following command:
mysql> CREATE USER 'haproxy_user'@'haproxy_server_ip' IDENTIFIED BY 'strong_password';\n
Grant the minimal set of privileges necessary for HAProxy to perform its health checks and monitoring.
Execute the following:
GRANT SELECT ON `mysql`.* TO 'haproxy_user'@'haproxy_server_ip';\nFLUSH PRIVILEGES;\n
"},{"location":"haproxy.html#important-considerations","title":"Important Considerations","text":"If your MySQL servers are part of a replication cluster, create the user and grant privileges on each node to ensure consistency.
For enhanced security, consider restricting the haproxy_user
to specific databases or tables to monitor rather than granting permissions to the entire mysql
database schema.
Add the HAProxy Enterprise repository to your system by following the instructions for your operating system.
Install HAProxy on the node you intend to use for load balancing. You can install it using the package manager.
On a Debian-derived distributionOn a Red Hat-derived distribution$ sudo apt update\n$ sudo apt install haproxy\n
$ sudo yum update\n$ sudo yum install haproxy\n
To start HAProxy, use the haproxy
command. You may pass any number of configuration parameters on the command line. To use a configuration file, add the -f
option.
$ # Passing one configuration file\n$ sudo haproxy -f haproxy-1.cfg\n\n$ # Passing multiple configuration files\n$ sudo haproxy -f haproxy-1.cfg haproxy-2.cfg\n\n$ # Passing a directory\n$ sudo haproxy -f conf-dir\n
You can pass the name of an existing configuration file or a directory. HAProxy includes all files with the .cfg extension in the supplied directory. Another way to pass multiple files is to use -f
multiple times.
For more information, see HAProxy Management Guide
For information, see HAProxy configuration file
Important
In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password
. HAProxy does not support this authentication plugin. Create a mysql user using the mysql_native_password
authentication plugin.
mysql> CREATE USER 'haproxy_user'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\n
See also
MySQL Documentation: CREATE USER statement
"},{"location":"haproxy.html#uninstall","title":"Uninstall","text":"To uninstall haproxy version 2 from a Linux system, follow the latest instructions.
"},{"location":"haproxy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"high-availability.html","title":"High availability","text":"In a basic setup with three nodes, if you take any of the nodes down, Percona XtraDB Cluster continues to function. At any point, you can shut down any node to perform maintenance or configuration changes.
Even in unplanned situations (like a node crashing or if it becomes unavailable over the network), you can run queries on working nodes. If a node is down and the data has changed, there are two methods that the node may use when it joins the cluster again:
Method What happens Description SST The joiner node receives a full copy of the database state from the donor node. You initiate a Solid State Transfer (SST) when adding a new node to a Galera cluster or when a node has fallen too far out of sync IST Only incremental changes are copied from one node to another. This operation can be used when a node is down for a short period."},{"location":"high-availability.html#sst","title":"SST","text":"The primary benefit of SST is that it ensures data consistency across the cluster by providing a complete snapshot of the database at a point in time. However, SST can be resource-intensive and time-consuming if the operation transfers significant data. The donor node is locked during this transfer, impacting cluster performance.
You initiate a state snapshot transfer (SST) when a node joins a cluster without the complete data set. This process involves transferring a full data copy from one node to another, ensuring that the joining node has an exact replica of the cluster\u2019s current state. Technically, SST is performed by halting the donor node\u2019s database operations momentarily to create a consistent snapshot of its data. The snapshot is then transferred over the network to the joining node, which applies it to its database system.
Even without locking your cluster in a read-only state, SST may be intrusive and disrupt the regular operation of your services. IST avoids disruption. A node fetches only the changes that happened while that node was unavailable. IST uses a caching mechanism on nodes.
"},{"location":"high-availability.html#ist","title":"IST","text":"Incremental State Transfer (IST) is a method that allows a node to request only the missing transactions from another node in the cluster. This process is beneficial because it reduces the amount of data that must be transferred, leading to faster recovery times for nodes that are out of sync. Additionally, IST minimizes the network bandwidth required for state transfer, which is particularly advantageous in environments with limited resources.
However, there are drawbacks to consider. Reliance on another node\u2019s state means that an SST operation is necessary if no node in the cluster has the required information.
When a node joins the cluster with a state slightly behind the current cluster state, IST does not require the joining node to copy the entire database state. Technically, IST transfers only the missing write-sets that the joining node needs to catch up with the cluster. The donor node, the node with the most recent state, sends the write-sets to the joining node through a dedicated channel. The joining node then applies these write-sets to its database state incrementally until it synchronizes with the cluster\u2019s current state. The donor node can experience a performance impact during an IST operation, typically less severe than during SST.
"},{"location":"high-availability.html#monitor-the-node-state","title":"Monitor the node state","text":"The wsrep_state_comment
variable returns the current state of a Galera node in the cluster, providing information about the node\u2019s role and status. The value can vary depending on the specific state of the Galera node, such as the following:
\u201cSynced\u201d
\u201cDonor/Desynced\u201d
\u201cDonor/Joining\u201d
\u201cJoined\u201d
You can monitor the current state of a node using the following command:
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';\n
If the node is in Synced (6)
state, that node is part of the cluster and can handle the traffic.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"install-index.html","title":"Install Percona XtraDB Cluster","text":"Install Percona XtraDB Cluster on all hosts that you are planning to use as cluster nodes and ensure that you have root access to the MySQL server on each one.
We gather Telemetry data in the Percona packages and Docker images.
"},{"location":"install-index.html#ports-required","title":"Ports required","text":"Open specific ports for the Percona XtraDB Cluster to function correctly.
Port 3306 is the default port for MySQL. This port facilitates communication and data transfer between nodes and applications.
Port 4567 is used for Galera replication traffic, which is vital for synchronizing data across the cluster nodes.
Port 4568 is used for Incremental State Transfer (IST), allowing nodes to transfer only the missing blocks of data.
Port 4444 is for State Snapshot Transfer (SST), which involves a complete data snapshot transfer from one node to another.
Port 9200 if you use Percona Monitoring and Management (PMM) for cluster monitoring.
We recommend installing Percona XtraDB Cluster from official Percona software repositories using the corresponding package manager for your system:
Debian or Ubuntu
Red Hat or CentOS
Important
After installing Percona XtraDB Cluster, the mysql
service is stopped but enabled so that it may start the next time you restart the system. The service starts if the the grastate.dat
file exists and the value of seqno
is not -1.
See also
More information about Galera state information in Index of files created by PXC grastat.dat
"},{"location":"install-index.html#installation-alternatives","title":"Installation alternatives","text":"Percona also provides a generic tarball with all required files and binaries for manual installation:
If you want to build Percona XtraDB Cluster from source, see Compiling and Installing from Source Code.
If you want to run Percona XtraDB Cluster using Docker, see Running Percona XtraDB Cluster in a Docker Container.
"},{"location":"install-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"intro.html","title":"About Percona XtraDB Cluster","text":"Percona XtraDB Cluster is a fully open-source high-availability solution for MySQL. It integrates Percona Server for MySQL and Percona XtraBackup with the Galera library to enable synchronous multi-source replication.
A cluster consists of nodes, where each node contains the same set of data synchronized accross nodes. The recommended configuration is to have at least 3 nodes, but you can have 2 nodes as well. Each node is a regular MySQL Server instance (for example, Percona Server). You can convert an existing MySQL Server instance to a node and run the cluster using this node as a base. You can also detach any node from the cluster and use it as a regular MySQL Server instance.
"},{"location":"intro.html#benefits","title":"Benefits","text":"When you execute a query, it is executed locally on the node. All data is available locally, no need for remote access.
No central management. You can loose any node at any point of time, and the cluster will continue to function without any data loss.
Good solution for scaling a read workload. You can put read queries to any of the nodes.
Overhead of provisioning new node. When you add a new node, it has to copy the full data set from one of existing nodes. If it is 100 GB, it copies 100 GB.
This can\u2019t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes versus all traffic to 1 node, but you can\u2019t expect a lot. All writes still have to go on all nodes.
You have several duplicates of data: for 3 nodes you have 3 duplicates.
Percona XtraDB Cluster https://www.percona.com/software/mysql-database/percona-xtradb-cluster is based on Percona Server for MySQL running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup .
"},{"location":"intro.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"limitation.html","title":"Percona XtraDB Cluster limitations","text":"The following limitations apply to Percona XtraDB Cluster:
Replication works only with InnoDB storage engine.
Any writes to tables of other types are not replicated.
Unsupported queries:
LOCK TABLES
and UNLOCK TABLES
is not supported in multi-source setups
Lock functions, such as GET_LOCK()
, RELEASE_LOCK()
, and so on
Query log cannot be directed to table.
If you enable query logging, you must forward the log to a file:
log_output = FILE\n
Use general_log
and general_log_file
to choose query logging and the log file name.
Maximum allowed transaction size is defined by the wsrep_max_ws_rows
and wsrep_max_ws_size
variables.
LOAD DATA INFILE
processing will commit every 10 000 rows. So large transactions due to LOAD DATA
will be split to series of small transactions.
Transaction issuing COMMIT
may still be aborted at that stage.
Due to cluster-level optimistic concurrency control, there can be two transactions writing to the same rows and committing in separate Percona XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted. For cluster-level aborts, Percona XtraDB Cluster gives back deadlock error code:
Error message(Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).\n
XA transactions are not supported
Due to possible rollback on commit.
Write throughput of the whole cluster is limited by the weakest node.
If one node becomes slow, the whole cluster slows down. If you have requirements for stable high performance, then it should be supported by corresponding hardware.
Minimal recommended size of cluster is 3 nodes.
The 3rd node can be an arbitrator.
enforce_storage_engine=InnoDB
is not compatible with wsrep_replicate_myisam=OFF
wsrep_replicate_myisam
is set to OFF
by default.
Avoid ALTER TABLE ... IMPORT/EXPORT
workloads when running Percona XtraDB Cluster in cluster mode.
It can lead to node inconsistency if not executed in sync on all nodes.
All tables must have a primary key.
This ensures that the same rows appear in the same order on different nodes. The DELETE
statement is not supported on tables without a primary key.
See also
Galera Documentation: Tables without Primary Keys
Avoid reusing the names of persistent tables for temporary tables
Although MySQL does allow having temporary tables named the same as persistent tables, this approach is not recommended.
Galera Cluster blocks the replication of those persistent tables the names of which match the names of temporary tables.
With wsrep_debug set to 1, the error log may contain the following message:
Error message... [Note] WSREP: TO BEGIN: -1, 0 : create table t (i int) engine=innodb\n... [Note] WSREP: TO isolation skipped for: 1, sql: create table t (i int) engine=innodb.Only temporary tables affected.\n
See also
MySQL Documentation: Problems with temporary tables
As of version 8.0.21, an INPLACE ALTER TABLE query takes an internal shared lock on the table during the execution of the query. The LOCK=NONE
clause is no longer allowed for all of the INPLACE ALTER TABLE queries due to this change.
This change addresses a deadlock, which could cause a cluster node to hang in the following scenario:
An INPLACE ALTER TABLE
query in one session or being applied as Total Order Isolation (TOI)
A DML on the same table from another session
Do not use one or more dot characters (.) when defining the values for the following variables:
log_bin
log_bin_index
MySQL and XtraBackup handles the value in different ways and this difference causes unpredictable behavior.
"},{"location":"limitation.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"load-balance-proxysql.html","title":"Load balance with ProxySQL","text":"ProxySQL is a high-performance SQL proxy. ProxySQL runs as a daemon watched by a monitoring process. The process monitors the daemon and restarts it in case of a crash to minimize downtime.
The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.
The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.
See also
ProxySQL Documentation
ProxySQL v2 natively supports Percona XtraDB Cluster. With this version, proxysql-admin
tool does not require any custom scripts to keep track of Percona XtraDB Cluster status.
Important
In version 8.0, Percona XtraDB Cluster does not support ProxySQL v1.
"},{"location":"load-balance-proxysql.html#manual-configuration","title":"Manual configuration","text":"This section describes how to configure ProxySQL with three Percona XtraDB Cluster nodes.
Node Host Name IP address Node 1 pxc1 192.168.70.71 Node 2 pxc2 192.168.70.72 Node 3 pxc3 192.168.70.73 Node 4 proxysql 192.168.70.74ProxySQL can be configured either using the /etc/proxysql.cnf
file or through the admin interface. The admin interface is recommended because this interface can dynamically change the configuration without restarting the proxy.
To connect to the ProxySQL admin interface, you need a mysql
client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql
client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally. For this tutorial, install Percona XtraDB Cluster on Node 4:
Changes in the installation procedure
In Percona XtraDB Cluster 8.0, ProxySQL is not installed automatically as a dependency of the percona-xtradb-cluster-client-8.0
package. You should install the proxysql
package separately.
Note
ProxySQL has multiple versions in the version 2 series.
root@proxysql:~# apt install percona-xtradb-cluster-client\nroot@proxysql:~# apt install proxysql2\n
$ sudo yum install Percona-XtraDB-Cluster-client-80\n$ sudo yum install proxysql2\n
To connect to the admin interface, use the credentials, host name and port specified in the global variables.
Warning
Do not use default credentials in production!
The following example shows how to connect to the ProxySQL admin interface with default credentials:
root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql@proxysql>\n
To see the ProxySQL databases and tables use the following commands:
mysql@proxysql> SHOW DATABASES;\n
The following output shows the ProxySQL databases:
Expected output+-----+---------+-------------------------------+\n| seq | name | file |\n+-----+---------+-------------------------------+\n| 0 | main | |\n| 2 | disk | /var/lib/proxysql/proxysql.db |\n| 3 | stats | |\n| 4 | monitor | |\n+-----+---------+-------------------------------+\n4 rows in set (0.00 sec)\n
mysql@proxysql> SHOW TABLES;\n
The following output shows the ProxySQL tables:
Expected output+--------------------------------------+\n| tables |\n+--------------------------------------+\n| global_variables |\n| mysql_collations |\n| mysql_query_rules |\n| mysql_replication_hostgroups |\n| mysql_servers |\n| mysql_users |\n| runtime_global_variables |\n| runtime_mysql_query_rules |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers |\n| runtime_scheduler |\n| scheduler |\n+--------------------------------------+\n12 rows in set (0.00 sec)\n
For more information about admin databases and tables, see Admin Tables
Note
The ProxySQL configuration can reside in the following areas:
MEMORY (your current working place)
RUNTIME (the production settings)
DISK (durable configuration, saved inside an SQLITE database)
When you change a parameter, you change it in MEMORY area. This ability is by design and lets you test the changes before pushing the change to production (RUNTIME), or save the change to disk.
"},{"location":"load-balance-proxysql.html#add-cluster-nodes-to-proxysql","title":"Add cluster nodes to ProxySQL","text":"To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers
table.
Note
ProxySQL uses the concept of hostgroups to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. There are many ways you can configure hostgroups (for example, source and replicas, read and write load, etc.) and a every node can be a member of multiple hostgroups.
This example adds three Percona XtraDB Cluster nodes to the default hostgroup (0
), which receives both write and read traffic:
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.71',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.72',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.73',3306);\n
To see the nodes:
mysql@proxysql> SELECT * FROM mysql_servers;\n
The following output shows the list of nodes:
Expected output+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| 0 | 192.168.70.71 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n| 0 | 192.168.70.72 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n| 0 | 192.168.70.73 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n3 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#create-proxysql-monitoring-user","title":"Create ProxySQL monitoring user","text":"To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE
privilege on any node in the cluster and configure the user in ProxySQL.
The following example shows how to add a monitoring user on Node 2:
mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\nmysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%';\n
The following example shows how to configure this user on the ProxySQL node:
mysql@proxysql> UPDATE global_variables SET variable_value='proxysql'\n WHERE variable_name='mysql-monitor_username';\nmysql@proxysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\n WHERE variable_name='mysql-monitor_password';\n
To load this configuration at runtime, issue a LOAD
command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue a SAVE
command.
mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql@proxysql> SAVE MYSQL VARIABLES TO DISK;\n
To ensure that monitoring is enabled, check the monitoring logs:
mysql@proxysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+----------------------+---------------+\n| hostname | port | time_start_us | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695 | NULL |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779 | NULL |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627 | NULL |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557 | NULL |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737 | NULL |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447 | NULL |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+-------------------+------------+\n| hostname | port | time_start_us | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948 | NULL |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803 | NULL |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711 | NULL |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783 | NULL |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631 | NULL |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542 | NULL |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n
The previous examples show that ProxySQL is able to connect and ping the nodes you have added.
To enable monitoring of these nodes, load them at runtime:
mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\n
"},{"location":"load-balance-proxysql.html#create-proxysql-client-user","title":"Create ProxySQL client user","text":"ProxySQL must have users that can access backend nodes to manage connections.
To add a user, insert credentials into mysql_users
table:
mysql@proxysql> INSERT INTO mysql_users (username,password) VALUES ('sbuser','sbpass');\n
Expected output Query OK, 1 row affected (0.00 sec)\n
Note
ProxySQL currently doesn\u2019t encrypt passwords.
Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):
mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL USERS TO DISK;\n
To confirm that the user has been set up correctly, you can try to log in as root:
root@proxysql:~# mysql -u sbuser -psbpass -h 127.0.0.1 -P 6033\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n
To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:
mysql@pxc3> CREATE USER 'sbuser'@'192.168.70.74' IDENTIFIED BY 'sbpass';\n
Expected output Query OK, 0 rows affected (0.01 sec)\n
mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.74';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#test-cluster-with-sysbench","title":"Test cluster with sysbench","text":"You can install sysbench
from Percona software repositories:
root@proxysql:~# apt install sysbench\n
root@proxysql:~# yum install sysbench\n
Note
sysbench
requires ProxySQL client user credentials that you created in Creating ProxySQL Client User.
Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes:
mysql@pxc1> CREATE DATABASE sbtest;\n
Populate the table with data for the benchmark on the ProxySQL node:
root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nprepare\n
Run the benchmark on the ProxySQL node:
root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nrun\n
ProxySQL stores collected data in the stats
schema:
mysql@proxysql> SHOW TABLES FROM stats;\n
Expected output +--------------------------------+\n| tables |\n+--------------------------------+\n| stats_mysql_query_rules |\n| stats_mysql_commands_counters |\n| stats_mysql_processlist |\n| stats_mysql_connection_pool |\n| stats_mysql_query_digest |\n| stats_mysql_query_digest_reset |\n| stats_mysql_global |\n+--------------------------------+\n
For example, to see the number of commands that run on the cluster:
mysql@proxysql> SELECT * FROM stats_mysql_commands_counters;\n
Expected output +---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| Command | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| ALTER_TABLE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| ANALYZE_TABLE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| BEGIN | 2212625 | 3686 | 55 | 2162 | 899 | 569 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| CHANGE_REPLICATION_SOURCE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| COMMIT | 21522591 | 3628 | 0 | 0 | 0 | 1765 | 1590 | 272 | 1 | 0 | 0 | 0 | 0 | 0 |\n| CREATE_DATABASE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| CREATE_INDEX | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| DELETE | 2904130 | 3670 | 35 | 1546 | 1346 | 723 | 19 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |\n| DESCRIBE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| INSERT | 19531649 | 3660 | 39 | 1588 | 1292 | 723 | 12 | 2 | 0 | 1 | 0 | 1 | 2 | 0 |\n...\n| SELECT | 35049794 | 51605 | 501 | 26180 | 16606 | 8241 | 70 | 3 | 4 | 0 | 0 | 0 | 0 | 0 |\n| SELECT_FOR_UPDATE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| UPDATE | 6402302 | 7367 | 75 | 2503 | 3020 | 1743 | 23 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |\n| USE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| SHOW | 19691 | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |\n| UNKNOWN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n45 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#automatic-failover","title":"Automatic failover","text":"ProxySQL will automatically detect if a node is not available or not synced with the cluster.
You can check the status of all available nodes by running:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
The following output shows the status of all available nodes:
Expected output+--------------+---------------+------+--------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
To test the problem detection and failover mechanism, shut down Node 3:
root@pxc3:~# service mysql stop\n
ProxySQL will detect that the node is down and update its status to OFFLINE_SOFT
:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
Expected output +--------------+---------------+------+--------------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | OFFLINE_SOFT |\n+--------------+---------------+------+--------------+\n3 rows in set (0.00 sec)\n
Now start Node 3 again:
root@pxc3:~# service mysql start\n
The script will detect the change and mark the node as ONLINE
:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
Expected output +--------------+---------------+------+--------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#assisted-maintenance-mode","title":"Assisted maintenance mode","text":"Usually, to take a node down for maintenance, you need to identify that node, update its status in ProxySQL to OFFLINE_SOFT
, wait for ProxySQL to divert traffic from this node, and then initiate the shutdown or perform maintenance tasks. Percona XtraDB Cluster includes a special maintenance mode for nodes that enables you to take a node down without adjusting ProxySQL manually.
Initiating pxc_maint_mode=MAINTENANCE
does not disconnect existing connections. You must terminate these connections by either running your application code or forcing a re-connection. With a re-connection, the new connections are re-routed around the PXC node in MAINTENANCE
mode.
Assisted maintenance mode is controlled via the pxc_maint_mode
variable, which is monitored by ProxySQL and can be set to one of the following values:
DISABLED
: This value is the default state that tells ProxySQL to route traffic to the node as usual.
SHUTDOWN
: This state is set automatically when you initiate node shutdown.
You may need to shut down a node when upgrading the OS, adding resources, changing hardware parts, relocating the server, etc.
When you initiate node shutdown, Percona XtraDB Cluster does not send the signal immediately. Instead, it changes the state to pxc_maint_mode=SHUTDOWN
and waits for a predefined period (10 seconds by default). When ProxySQL detects that the mode is set to SHUTDOWN
, it changes the status of this node to OFFLINE_SOFT
. In this status, ProxySQL stops creating new connections to the node. After the transition period, long-running active transactions are aborted.
MAINTENANCE
: You can change to this state if you need to perform maintenance on a node without shutting it down.
You may need to isolate the node for a specific time so that it does not receive traffic from ProxySQL while you resize the buffer pool, truncate the undo log, defragment, or check disks, etc.
To do this, manually set pxc_maint_mode=MAINTENANCE
. Control is not returned to the user for a predefined period (10 seconds by default). You can increase the transition period using the pxc_maint_transition_period
variable to accommodate long-running transactions. If the period is long enough for all transactions to finish, there should be little disruption in the cluster workload. If you increase the transition period, the packaging script may interpret the wait as a server stall.
When ProxySQL detects that the mode is set to MAINTENANCE
, it stops routing traffic to the node. During the transition period, any existing connections continue, but ProxySQL avoids opening new connections and starting transactions. Still, the user can open connections to monitor status.
Once control is returned, you can perform maintenance activity.
Note
Data changes continue to be replicated across the cluster.
After you finish maintenance, set the mode back to DISABLED
. When ProxySQL detects this, it starts routing traffic to the node again.
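For illustration, a typical assisted maintenance sequence run on the node itself might look like the following sketch (the 30-second transition period is an arbitrary example value):
mysql> SET GLOBAL pxc_maint_transition_period=30;\nmysql> SET GLOBAL pxc_maint_mode=MAINTENANCE;\n-- perform the maintenance work, then return the node to normal operation\nmysql> SET GLOBAL pxc_maint_mode=DISABLED;\n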
Related sections
Setting up a testing environment with ProxySQL
"},{"location":"load-balance-proxysql.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"monitoring.html","title":"Monitor the cluster","text":"Each node can have a different view of the cluster. There is no centralized node to monitor. To track down the source of issues, you have to monitor each node independently.
The values of many variables depend on the node that you are querying. For example, replication data is sent from one node, while writes are received by all other nodes.
Having data from all nodes can help you understand where flow messages are coming from, which node sends excessively large transactions, and so on.
"},{"location":"monitoring.html#manual-monitoring","title":"Manual monitoring","text":"Manual cluster monitoring can be performed using myq-tools.
"},{"location":"monitoring.html#alerting","title":"Alerting","text":"Besides standard MySQL alerting, you should use at least the following triggers specific to Percona XtraDB Cluster:
wsrep_cluster_status
!= Primary
wsrep_connected
!= ON
wsrep_ready
!= ON
For additional alerting, consider the following (a sample status query is shown after this list):
Excessive replication conflicts can be identified using the wsrep_local_cert_failures
and wsrep_local_bf_aborts
variables
Excessive flow control messages can be identified using the wsrep_flow_control_sent
and wsrep_flow_control_recv
variables
Large replication queues can be identified using the wsrep_local_recv_queue
.
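For example, several of these counters can be checked on a node with a single query; the following sketch uses only status variables mentioned above:
mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status', 'wsrep_connected', 'wsrep_ready', 'wsrep_local_recv_queue', 'wsrep_flow_control_sent', 'wsrep_local_cert_failures');\n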
Cluster metrics collection for long-term graphing should be done at least for the following:
wsrep_local_recv_queue
and wsrep_local_send_queue
wsrep_flow_control_sent
and wsrep_flow_control_recv
wsrep_replicated
and wsrep_received
wsrep_replicated_bytes
and wsrep_received_bytes
wsrep_local_cert_failures
and wsrep_local_bf_aborts
Percona Monitoring and Management includes two dashboards to monitor PXC:
PXC/Galera Cluster Overview:
PXC/Galera Graphs:
These dashboards are available from the menu:
Please refer to the official documentation for details on Percona Monitoring and Management installation and setup.
"},{"location":"monitoring.html#other-reading","title":"Other reading","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"nbo.html","title":"Non-Blocking Operations (NBO) method for Online Scheme Upgrades (OSU)","text":"An Online Schema Upgrade can be a daily issue in an environment with accelerated development and deployment. The task becomes more difficult as the data grows. An ALTER TABLE
statement is a multi-step operation and must run until it is complete. Aborting the statement may be more expensive than letting it complete.
The Non-Blocking Operations (NBO) method is similar to the TOI
method (see Online Schema Upgrade for more information on the available types of online schema upgrades). Every replica processes the DDL statement at the same point in the cluster transaction stream, and other transactions cannot commit during the operation. The NBO
method provides a more efficient locking strategy and avoids the TOI
issue of long-running DDL statements blocking cluster updates.
In the NBO method, the supported DDL statement acquires a metadata lock on the table or schema at a late stage of the operation. The lock_wait_timeout
system variable defines the timeout, measured in seconds, to acquire metadata locks. The default value, 31536000 seconds (one year), effectively causes an infinite wait and should not be used with the NBO
method.
Attempting a State Snapshot Transfer (SST) fails during the NBO operation.
To dynamically set the NBO
mode in the client, run the following statement:
SET SESSION wsrep_OSU_method='NBO';\n
"},{"location":"nbo.html#supported-ddl-statements","title":"Supported DDL statements","text":"The NBO method supports the following DDL statements:
ALTER TABLE
ALTER INDEX
CREATE INDEX
DROP INDEX
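For illustration only, a session that builds an index under NBO might look like the following sketch (the employees table and index name are hypothetical):
mysql> SET SESSION wsrep_OSU_method='NBO';\nmysql> CREATE INDEX idx_last_name ON employees (last_name);\nmysql> SET SESSION wsrep_OSU_method='TOI';\n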
The NBO
method does not support the following:
Running two DDL statements with conflicting locks on the same table. For example, you cannot run two ALTER TABLE
statements for an employees
table.
Modifying a table changed during the NBO operation. However, you can modify other tables and execute NBO queries on other tables.
See the Percona XtraDB Cluster 8.0.25-15.1 Release notes for the latest information.
"},{"location":"nbo.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"online-schema-upgrade.html","title":"Online schema upgrade","text":"Database schemas must change as applications change. For a cluster, the schema upgrade must occur while the system is online. A synchronous cluster requires all active nodes have the same data. Schema updates are performed using Data Definition Language (DDL) statements, such as ALTER TABLE <table_name> DROP COLUMN <column_name>
.
The DDL statements are non-transactional, so these statements use up-front locking to avoid the chance of deadlocks and cannot be rolled back. We recommend that you test your schema changes, especially if you must run an ALTER
statement on large tables. Verify the backups before updating the schemas in the production environment. A failure in a schema change can cause your cluster to drop nodes and lose data.
Percona XtraDB Cluster supports the following methods for making online schema changes:
Method Name Reason for use Description TOI or Total Order Isolation Consistency is important. Other transactions are blocked while the cluster processes the DDL statements. This is the default method for the wsrep-OSU-method variable. The isolation of the DDL statement guarantees consistency. The DDL replication uses a Statement format. Each node processes the replicated DDL statement at the same position in the replication stream. All other writes must wait until the DDL statement is executed. While a DDL statement is running, any long-running transactions in progress and using the same resource receive a deadlock error at commit and are rolled back. The pt-online-schema-change in the Percona Toolkit can alter the table without using locks. There are limitations: only InnoDB tables can be altered, and the wsrep_OSU_method
must be TOI
. RSU or Rolling Schema Upgrade This method guarantees high availability during the schema upgrades. The node desynchronizes with the cluster and disables flow control during the execution of the DDL statement. The rest of the cluster is not affected. After the statement execution, the node applies delayed events and synchronizes with the cluster. Although the cluster is active, during the process some nodes have the newer schema and some nodes have the older schema. The RSU method is a manual operation. For this method, the gcache
must be large enough to store the data for the duration of the DDL change. NBO or Non-Blocking Operation This method is used when consistency is important and uses a more efficient locking strategy. This method is similar to TOI
. DDL operations acquire an exclusive metadata lock on the table or schema at a late stage of the operation when updating the table or schema definition. Attempting a State Snapshot Transfer (SST) fails during the NBO operation. This mode uses a more efficient locking strategy and avoids the TOI
issue of long-running DDL statements blocking other updates in the cluster."},{"location":"online-schema-upgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"performance-schema-instrumentation.html","title":"Perfomance Schema instrumentation","text":"To improve monitoring Percona XtraDB Cluster has implemented an infrastructure to expose Galera instruments (mutexes, cond-variables, files, threads) as a part of PERFORMANCE_SCHEMA
.
Although mutexes and condition variables from wsrep
were already part of PERFORMANCE_SCHEMA
, threads were not.
Mutexes, condition variables, threads, and files from the Galera library were also not part of the PERFORMANCE_SCHEMA
.
You can see the complete list of available instruments by running:
mysql> SELECT * FROM performance_schema.setup_instruments WHERE name LIKE '%galera%' OR name LIKE '%wsrep%';\n
Expected output +----------------------------------------------------------+---------+-------+\n| NAME | ENABLED | TIMED |\n+----------------------------------------------------------+---------+-------+\n| wait/synch/mutex/sql/LOCK_wsrep_ready | NO | NO |\n| wait/synch/mutex/sql/LOCK_wsrep_sst | NO | NO |\n| wait/synch/mutex/sql/LOCK_wsrep_sst_init | NO | NO |\n...\n| stage/wsrep/wsrep: in rollback thread | NO | NO |\n| stage/wsrep/wsrep: aborter idle | NO | NO |\n| stage/wsrep/wsrep: aborter active | NO | NO |\n+----------------------------------------------------------+---------+-------+\n73 rows in set (0.00 sec)\n
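Most of these instruments are disabled (and not timed) by default. As a sketch, you can enable the wsrep- and Galera-related instruments at runtime with a standard Performance Schema update:
mysql> UPDATE performance_schema.setup_instruments SET ENABLED='YES', TIMED='YES' WHERE NAME LIKE '%wsrep%' OR NAME LIKE '%galera%';\n
Note that such runtime changes do not persist across a restart unless the corresponding settings are placed in the configuration file.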
Some of the most important are:
Two main actions that Galera does are REPLICATION
and ROLLBACK
. Mutexes, condition variables, and threads related to this are part of PERFORMANCE_SCHEMA
.
Galera internally uses a monitor mechanism to enforce the ordering of events. These monitors control how events are applied and are mainly responsible for the waits between different actions. All such monitor mutexes and condition variables are covered as part of this implementation.
There are many other miscellaneous actions related to receiving packets and servicing messages. The mutexes and condition variables needed for them are now visible too. Threads that manage receiving and servicing are also instrumented.
This feature exposes all the important mutexes and condition variables related to locks, threads, and files as part of this process.
Besides exposing files, it also tracks read/write byte statistics for each file. These stats are not exposed for Galera files because Galera uses mmap
.
Also, some threads are short-lived and created only when needed, especially for SST/IST purposes. They are also tracked, but they appear in PERFORMANCE_SCHEMA
tables only if/when they are created.
Stage Info
from Galera-specific functions, which the server updates to track the state of a running thread, is also visible in PERFORMANCE_SCHEMA
.
Galera uses custom data structures in some cases (like STL structures). Mutexes used for protecting these structures, which are not part of the mainline Galera logic or do not fall into the big picture, are not tracked. The same goes for threads that are gcomm
library specific.
Galera maintains a process vector inside each monitor for its internal graph creation. This process vector is 65K in size and there are two such vectors per monitor. That is 128K * 3 = 384K condition variables. These are not tracked to avoid hogging PERFORMANCE_SCHEMA
limits and sidelining the most crucial information.
pxc_cluster_view
","text":"The pxc_cluster_view
table provides a unified view of the cluster. The table is in the Performance_Schema database.
DESCRIBE pxc_cluster_view;\n
This table has the following definition:
Expected output+-------------+--------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------------+--------------+------+-----+---------+-------+\n| HOST_NAME | char(64) | NO | | NULL | |\n| UUID | char(36) | NO | | NULL | |\n| STATUS | char(64) | NO | | NULL | |\n| LOCAL_INDEX | int unsigned | NO | | NULL | |\n| SEGMENT | int unsigned | NO | | NULL | |\n+-------------+--------------+------+-----+---------+-------+\n5 rows in set (0.00 sec)\n
To view the table, run the following query:
SELECT * FROM pxc_cluster_view;\n
Expected output +-----------+--------------------------------------+--------+-------------+---------+\n| HOST_NAME | UUID | STATUS | LOCAL_INDEX | SEGMENT |\n+-----------+--------------------------------------+--------+-------------+---------+\n| node1 | 22b9d47e-c215-11eb-81f7-7ed65a9d253b | SYNCED | 0 | 0 |\n| node3 | 29c51cf5-c216-11eb-9101-1ba3a28e377a | SYNCED | 1 | 0 |\n| node2 | 982cdb03-c215-11eb-9865-0ae076a59c5c | SYNCED | 2 | 0 |\n+-----------+--------------------------------------+--------+-------------+---------+\n3 rows in set (0.00 sec)\n
"},{"location":"performance-schema-instrumentation.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"proxysql-v2.html","title":"ProxySQL admin utilities","text":"The ProxySQL and ProxySQL admin utilities documentation provides information on installing and running ProxySQL 1.x.x or ProxySQL 2.x.x with the following ProxySQL admin utilities:
The ProxySQL Admin simplifies the configuration of Percona XtraDB Cluster nodes with ProxySQL.
The Percona Scheduler Admin tool can automatically perform a failover due to node failures, service degradation, or maintenance.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"quickstart-overview.html","title":"Quickstart Guide for Percona XtraDB Cluster","text":"Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve performance and scalability of your database environments supporting your critical business applications in the most demanding public, private, and hybrid cloud environments.
"},{"location":"quickstart-overview.html#install-percona-xtradb-cluster","title":"Install Percona XtraDB Cluster","text":"You can install Percona XtraDB Cluster using different methods.
Percona Server for MySQL (PS) is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior and optimized performance, greater scalability and availability, enhanced backups, increased visibility, and instrumentation. Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads.
Install Percona Server for MySQL.
"},{"location":"quickstart-overview.html#for-backups-and-restores","title":"For backups and restores","text":"Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL\u00ae that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.
Install Percona XtraBackup
"},{"location":"quickstart-overview.html#for-monitoring-and-management","title":"For Monitoring and Management","text":"Percona Monitoring and Management (PMM )monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.
Install PMM and connect your MySQL instances to it.
"},{"location":"quickstart-overview.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"restarting-nodes.html","title":"Restart the cluster nodes","text":"To restart a cluster node, shut down MySQL and restarting it. The node should leave the cluster (and the total vote count for quorum should decrement).
When it rejoins, the node should synchronize using IST. If the set of changes needed for IST are not found in the gcache
file on any other node in the entire cluster, then SST will be performed instead. Therefore, restarting cluster nodes for rolling configuration changes or software upgrades is rather simple from the cluster\u2019s perspective.
Note
If you restart a node with an invalid configuration change that prevents MySQL from loading, Galera will drop the node\u2019s state and force an SST for that node.
Note
If MySQL fails for any reason, it will not remove its PID file (which, by design, is deleted only on a clean shutdown). The server will not restart while the stale PID file is present. So, if MySQL fails for any reason and the failure is recorded in the log, remove the PID file manually before restarting the node.
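If you need to clear it, a minimal sketch assuming the default PID file location in the data directory (both the data directory path and the host name are illustrative) is:
# rm /var/lib/mysql/$(hostname).pid\n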
"},{"location":"restarting-nodes.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"secure-network.html","title":"Secure the network","text":"By default, anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. This could potentially let them query your data or get a complete copy of it.
In general, it is a good idea to disable all remote connections to Percona XtraDB Cluster nodes. If you require clients or nodes from outside of your network to connect, you can set up a VPN (virtual private network) for this purpose.
"},{"location":"secure-network.html#firewall-configuration","title":"Firewall configuration","text":"A firewall can let you filter Percona XtraDB Cluster traffic based on the clients and nodes that you trust.
By default, Percona XtraDB Cluster nodes use the following ports:
3306 is used for MySQL client connections and SST (State Snapshot Transfer) via mysqldump
.
4444 is used for SST via Percona XtraBackup.
4567 is used for write-set replication traffic (over TCP) and multicast replication (over TCP and UDP).
4568 is used for IST (Incremental State Transfer).
Ideally you want to make sure that these ports on each node are accessed only from trusted IP addresses. You can implement packet filtering using iptables
, firewalld
, pf
, or any other firewall of your choice.
To restrict access to Percona XtraDB Cluster ports using iptables
, you need to append new rules to the INPUT
chain on the filter table. In the following example, the trusted range of IP addresses is 192.168.0.1/24. It is assumed that only Percona XtraDB Cluster nodes and clients will connect from these IPs. To enable packet filtering, run the commands as root on each Percona XtraDB Cluster node.
# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 3306 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4444 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4567 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4568 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol udp --match udp --dport 4567 \\\n --source 192.168.0.1/24 --jump ACCEPT\n
Note
The last one opens port 4567 for multicast replication over UDP.
If the trusted IPs are not in sequence, you will need to run these commands for each address on each node. In this case, you can consider opening all ports between trusted hosts. This is slightly less secure, but it reduces the number of commands. For example, if you have three Percona XtraDB Cluster nodes, you can run the following commands on each one:
# iptables --append INPUT --protocol tcp \\\n --source 64.57.102.34 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n --source 193.166.3.20 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n --source 193.125.4.10 --jump ACCEPT\n
Running the previous commands will allow TCP connections from the IP addresses of the other Percona XtraDB Cluster nodes.
Note
The changes that you make in iptables
are not persistent unless you save the packet filtering state:
# service iptables save\n
For distributions that use systemd
, you need to save the current packet filtering rules to the path where iptables
reads from when it starts. This path can vary by distribution, but it is usually in the /etc
directory. For example:
/etc/sysconfig/iptables
/etc/iptables/iptables.rules
Use iptables-save
to update the file:
# iptables-save > /etc/sysconfig/iptables\n
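On distributions that use firewalld instead of raw iptables, a roughly equivalent configuration could look like the following sketch (assuming the same trusted 192.168.0.1/24 range):
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.1/24" port port="3306" protocol="tcp" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.1/24" port port="4444" protocol="tcp" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.1/24" port port="4567" protocol="tcp" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.1/24" port port="4567" protocol="udp" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.1/24" port port="4568" protocol="tcp" accept'\n# firewall-cmd --reload\n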
"},{"location":"secure-network.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"security-index.html","title":"Security basics","text":"By default, Percona XtraDB Cluster does not provide any protection for stored data. There are several considerations to take into account for securing Percona XtraDB Cluster:
Anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. You should consider restricting access using VPN and filter traffic on ports used by Percona XtraDB Cluster.
Unencrypted traffic can potentially be viewed by anyone monitoring your network. In Percona XtraDB Cluster 8.0 traffic encryption is enabled by default.
Percona XtraDB Cluster supports tablespace encryption to provide at-rest encryption for physical tablespace data files.
For more information, see the following blog post:
* [MySQL Data at Rest Encryption](https://www.percona.com/blog/2016/04/08/mysql-data-at-rest-encryption/)\n
"},{"location":"security-index.html#security-modules","title":"Security modules","text":"Most modern distributions include special security modules that control access to resources for users and applications. By default, these modules will most likely constrain communication between Percona XtraDB Cluster nodes.
The easiest solution is to disable or remove such programs, however, this is not recommended for production environments. You should instead create necessary security policies for Percona XtraDB Cluster.
"},{"location":"security-index.html#selinux","title":"SELinux","text":"SELinux is usually enabled by default in Red Hat Enterprise Linux and derivatives (including CentOS). SELinux helps protects the user\u2019s home directory data and provides the following:
Prevents unauthorized users from exploiting the system
Allows authorized users to access files
Used as a role-based access control system
To help with troubleshooting, during installation and configuration, you can set the mode to permissive
:
$ setenforce 0\n
Note
This action changes the mode only at runtime.
See also
For more information, see Enabling SELinux
"},{"location":"security-index.html#apparmor","title":"AppArmor","text":"AppArmor is included in Debian and Ubuntu. Percona XtraDB Cluster contains several AppArmor profiles which allows for easier maintenance. To help with troubleshooting, during the installation and configuration, you can set the mode to complain
for mysqld
.
See also
For more information, see Enabling AppArmor
"},{"location":"security-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"selinux.html","title":"Enable SELinux","text":"SELinux helps protects the user\u2019s home directory data. SELinux provides the following:
Prevents unauthorized users from exploiting the system
Allows authorized users to access files
Used as a role-based access control system
For more information, see Percona Server and SELinux
Red Hat and CentOS distribute a policy module to extend the SELinux policy module for mysqld. We provide the following:
Extended module for pxc - an extension of the default module for mysqld distributed by the operating system.
wsrep-sst-xtrabackup-v2 - allows execution of the xtrabackup-v2 SST script
Modifications described in Percona Server and SELinux can also be applied for Percona XtraDB Cluster.
To adjust PXC-specific configurations, especially SST/IST ports, use the following procedures as root
:
To enable port 14567
instead of the default port 4567
:
Find the tag associated with the 4567
port:
$ semanage port -l | grep 4567\ntram_port_t tcp 4567\n
Run a command to find which rules grant mysqld access to the port:
$ sesearch -A -s mysqld_t -t tram_port_t -c tcp_socket\nFound 5 semantic av rules:\n allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n allow mysqld_t tram_port_t : tcp_socket { name_bind name_connect } ;\n allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n allow mysqld_t port_type : tcp_socket name_connect ;\n allow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n
You could tag port 14567 with the tram_port_t
tag, but this tag may cause issues because port 14567 is not a TRAM port. Use the general mysqld_port_t
tag to add ports. For example, the following command adds port 14567 to the policy module with the mysqld_port_t
tag.
$ semanage port -a -t mysqld_port_t -p tcp 14567\n
You can verify the addition with the following command:
$ semanage port -l | grep 14567\nmysqld_port_t tcp 4568, 14567, 1186, 3306, 63132-63164\n
To see the tag associated with the 4444 port, run the following command:
$ semanage port -l | grep 4444\nkerberos_port_t tcp 88, 750, 4444\nkerberos_port_t udp 88, 750, 4444\n
To find the rules associated with kerberos_port_t
, run the following:
$ sesearch -A -s mysqld_t -t kerberos_port_t -c tcp_socket\nFound 9 semantic av rules:\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t rpc_port_type : tcp_socket name_bind ;\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t port_type : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket { recv_msg send_msg } ;\nallow nsswitch_domain reserved_port_type : tcp_socket name_connect ;\nallow mysqld_t reserved_port_type : tcp_socket name_connect ;\nallow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n
If you require port 14444 added, use the same method used to add port 14567.
If you must use a port that is already tagged, you can use either of the following ways:
Change the port tag to mysqld_port_t
Adjust the mysqld/sst script policy module to allow access to the given port. This method is better since all PXC-related adjustments are within the PXC-related policy modules.
pxc_encrypt_cluster_traffic
","text":"By default, the pxc_encrypt_cluster_traffic
is ON
, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory since that location is overwritten during the SST process.
Review How to set up the certificates. When SELinux is enabled, mysqld must have access to these certificates. The following items must be checked or considered:
Certificates inside /etc/mysql/certs/
directory must use the mysqld_etc_t
tag. This tag is applied automatically when the files are copied into the directory. When they are moved, the files retain their original context.
Certificates are accessible to the mysql user. The server certificates should be readable only by this user.
Certificates without the proper SELinux context can be restored with the following command:
$ restorecon -v /etc/mysql/certs/*\n
"},{"location":"selinux.html#enable-enforcing-mode-for-pxc","title":"Enable enforcing mode for PXC","text":"The process, mysqld, runs in permissive mode, by default, even if SELinux runs in enforcing mode:
$ semodule -l | grep permissive\npermissive_mysqld_t\npermissivedomains\n
After ensuring that the system journal does not list any issues, the administrator can remove the permissive mode for mysqld_t:
$ semanage permissive -d mysqld_t\n
See also
MariaDB 10.2 Galera Cluster with SELinux-enabled on CentOS 7
"},{"location":"selinux.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"set-up-3nodes-ec2.html","title":"How to set up a three-node cluster in EC2 environment","text":"This manual assumes you are running three EC2 instances with Red Hat Enterprise Linux 7 64-bit.
node1
: 10.93.46.58
node2
: 10.93.46.59
node3
: 10.93.46.60
Select instance types that support Enhanced Networking functionality. Good network performance is critical for the synchronous replication used in Percona XtraDB Cluster.
When adding instance storage volumes, choose the ones with good I/O performance:
instances with NVMe are preferred
GP2 SSD volumes are preferred to GP3 SSD volume types due to I/O latency
oversized GP2 SSD volumes are preferred to IO1 volume types due to cost
Attach Elastic network interfaces with static IPs or assign Elastic IP addresses to your instances. This way, IP addresses are preserved on instances in case of a reboot or restart. This is required because each Percona XtraDB Cluster member includes the wsrep_cluster_address
option in its configuration which points to other cluster members.
Launch instances in different availability zones to avoid cluster downtime in case one of the zones experiences power loss or network connectivity issues.
See also
Amazon EC2 Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
To set up Percona XtraDB Cluster:
Remove Percona XtraDB Cluster and Percona Server for MySQL packages for older versions:
Percona XtraDB Cluster 5.6, 5.7
Percona Server for MySQL 5.6, 5.7
Install Percona XtraDB Cluster as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS.
Create data directories:
$ mkdir -p /mnt/data\n$ mysql_install_db --datadir=/mnt/data --user=mysql\n
Stop the firewall service:
$ service iptables stop\n
Note
Alternatively, you can keep the firewall running, but open ports 3306, 4444, 4567, 4568. For example to open port 4567 on 192.168.0.1:
$ iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT\n
Create /etc/my.cnf
files:
Contents of the configuration file on the first node:
[mysqld]\ndatadir=/mnt/data\nuser=mysql\n\nbinlog_format=ROW\n\nwsrep_provider=/usr/lib64/libgalera_smm.so\nwsrep_cluster_address=gcomm://10.93.46.58,10.93.46.59,10.93.46.60\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node1\n\ninnodb_autoinc_lock_mode=2\n
For the second and third nodes change the following lines:
wsrep_node_name=node2\n\nwsrep_node_name=node3\n
Start and bootstrap Percona XtraDB Cluster on the first node:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
Expected output 2014-01-30 11:52:35 23280 [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n
Start the second and third nodes:
$ sudo systemctl start mysql\n
Expected output ... [Note] WSREP: Flow-control interval: [28, 28]\n... [Note] WSREP: Restored state OPEN -> JOINED (2)\n... [Note] WSREP: Member 2 (percona1) synced with group.\n... [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n... [Note] WSREP: New cluster view: global state: 4827a206-876b-11e3-911c-3e6a77d54953:2, view# 7: Primary, number of nodes: 3, my index: 2, protocol version 2\n... [Note] WSREP: SST complete, seqno: 2\n... [Note] Plugin 'FEDERATED' is disabled.\n... [Note] InnoDB: The InnoDB memory heap is disabled\n... [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins\n... [Note] InnoDB: Compressed tables use zlib 1.2.3\n... [Note] InnoDB: Using Linux native AIO\n... [Note] InnoDB: Not using CPU crc32 instructions\n... [Note] InnoDB: Initializing buffer pool, size = 128.0M\n... [Note] InnoDB: Completed initialization of buffer pool\n... [Note] InnoDB: Highest supported file format is Barracuda.\n... [Note] InnoDB: 128 rollback segment(s) are active.\n... [Note] InnoDB: Waiting for purge to start\n... [Note] InnoDB: Percona XtraDB (http://www.percona.com) ... started; log sequence number 1626341\n... [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.\n... [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.\n... [Note] Server hostname (bind-address): '*'; port: 3306\n... [Note] IPv6 is available.\n... [Note] - '::' resolves to '::';\n... [Note] Server socket created on IP: '::'.\n... [Note] Event Scheduler: Loaded 0 events\n... [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n... [Note] WSREP: inited wsrep sidno 1\n... [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.\n... [Note] WSREP: REPL Protocols: 5 (3, 1)\n... [Note] WSREP: Assign initial position for certification: 2, protocol version: 3\n... [Note] WSREP: Service thread queue flushed.\n... [Note] WSREP: Synchronized with group, ready for connections\n
When all nodes are in SYNCED state, your cluster is ready.
You can try connecting to MySQL on any node and create a database:
$ mysql -uroot\n> CREATE DATABASE hello_tom;\n
The new database will be propagated to all nodes. If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"singlebox.html","title":"How to set up a three-node cluster on a single box","text":"This tutorial describes how to set up a 3-node cluster on a single physical box.
For the purposes of this tutorial, assume the following:
The local IP address is 192.168.2.21
.
Percona XtraDB Cluster is extracted from binary tarball into /usr/local/Percona-XtraDB-Cluster-8.0.x86_64
To set up the cluster:
Create three MySQL configuration files for the corresponding nodes:
/etc/my.4000.cnf
[mysqld]\nport = 4000\nsocket=/tmp/mysql.4000.sock\ndatadir=/data/bench/d1\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:5030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:4020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:4030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node4000\ninnodb_autoinc_lock_mode=2\n
/etc/my.5000.cnf
[mysqld]\nport = 5000\nsocket=/tmp/mysql.5000.sock\ndatadir=/data/bench/d2\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:5020\nwsrep_node_incoming_address=192.168.2.21\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:5030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node5000\ninnodb_autoinc_lock_mode=2\n
/etc/my.6000.cnf
[mysqld]\nport = 6000\nsocket=/tmp/mysql.6000.sock\ndatadir=/data/bench/d3\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:5030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:6020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:6030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node6000\ninnodb_autoinc_lock_mode=2\n
Create three data directories for the nodes:
/data/bench/d1
/data/bench/d2
/data/bench/d3
Start the first node using the following command (from the Percona XtraDB Cluster install directory):
$ bin/mysqld_safe --defaults-file=/etc/my.4000.cnf --wsrep-new-cluster\n
If the node starts correctly, you should see the following output:
Expected output111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)\n111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1\n
To check the ports, run the following command:
$ netstat -anp | grep mysqld\ntcp 0 0 192.168.2.21:4030 0.0.0.0:* LISTEN 21895/mysqld\ntcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 21895/mysqld\n
Start the second and third nodes:
$ bin/mysqld_safe --defaults-file=/etc/my.5000.cnf\n$ bin/mysqld_safe --defaults-file=/etc/my.6000.cnf\n
If the nodes start and join the cluster successfully, you should see the following output:
111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)\n111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections\n
To check the cluster size, run the following command:
$ mysql -h127.0.0.1 -P6000 -e \"show global status like 'wsrep_cluster_size';\"\n
Expected output +--------------------+-------+\n| Variable_name | Value |\n+--------------------+-------+\n| wsrep_cluster_size | 3 |\n+--------------------+-------+\n
After that you can connect to any node and perform queries, which will be automatically synchronized with other nodes. For example, to create a database on the second node, you can run the following command:
$ mysql -h127.0.0.1 -P5000 -e \"CREATE DATABASE hello_peter\"\n
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"state-snapshot-transfer.html","title":"State snapshot transfer","text":"State Snapshot Transfer (SST) is a full data copy from one node (donor) to the joining node (joiner). It\u2019s used when a new node joins the cluster. In order to be synchronized with the cluster, the new node has to receive data from a node that is already part of the cluster.
Percona XtraDB Cluster enables SST via Percona XtraBackup.
Xtrabackup SST uses backup locks, which means the Galera provider is not paused at all, unlike with earlier SST methods. The SST method can be configured using the wsrep_sst_method
variable.
Note
If the gcs.sync_donor
variable is set to Yes
(default is No
), the whole cluster will get blocked if the donor is blocked by SST.
If there are no nodes available that can safely perform incremental state transfer (IST), the cluster defaults to SST.
If there are nodes available that can perform IST, the cluster prefers a local node over remote nodes to serve as the donor.
If there are no local nodes available that can perform IST, the cluster chooses a remote node to serve as the donor.
If there are several local and remote nodes that can perform IST, the cluster chooses the node with the highest seqno
to serve as the donor.
The default SST method is xtrabackup-v2
which uses Percona XtraBackup. This is the least blocking method that leverages backup locks. XtraBackup is run locally on the donor node.
The datadir needs to be specified in the server configuration file my.cnf
, otherwise the transfer process will fail.
Detailed information on this method is provided in Percona XtraBackup SST Configuration documentation.
"},{"location":"state-snapshot-transfer.html#sst-for-tables-with-tablespaces-that-are-not-in-the-data-directory","title":"SST for tables with tablespaces that are not in the data directory","text":"For example:
CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';\n
"},{"location":"state-snapshot-transfer.html#sst-using-percona-xtrabackup","title":"SST using Percona XtraBackup","text":"XtraBackup will restore the table to the same location on the joiner node. If the target directory does not exist, it will be created. If the target file already exists, an error will be returned, because XtraBackup cannot clear tablespaces not in the data directory.
"},{"location":"state-snapshot-transfer.html#other-reading","title":"Other reading","text":"State Snapshot Transfer Methods for MySQL
Xtrabackup SST configuration
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"strict-mode.html","title":"Percona XtraDB Cluster strict mode","text":"The Percona XtraDB Cluster (PXC) Strict Mode is designed to avoid the use of tech preview features and unsupported features in PXC. This mode performs a number of validations at startup and during runtime.
Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:
DISABLED
: Do not perform strict mode validations and run as normal.
PERMISSIVE
: If a validation fails, log a warning and continue running as normal.
ENFORCING
: If a validation fails during startup, halt the server and throw an error. If a validation fails during runtime, deny the operation and throw an error.
MASTER
: The same as ENFORCING
except that the validation of explicit table locking is not performed. This mode can be used with clusters in which write operations are isolated to a single node.
By default, PXC Strict Mode is set to ENFORCING
, except if the node is acting as a standalone server or the node is bootstrapping, then PXC Strict Mode defaults to DISABLED
.
It is recommended to keep PXC Strict Mode set to ENFORCING
, because in this case whenever Percona XtraDB Cluster encounters a tech preview feature or an unsupported operation, the server will deny it. This will force you to re-evaluate your Percona XtraDB Cluster configuration without risking the consistency of your data.
If you are planning to set PXC Strict Mode to anything else than ENFORCING
, you should be aware of the limitations and effects that this may have on data integrity. For more information, see Validations.
To set the mode, use the pxc_strict_mode
variable in the configuration file or the --pxc-strict-mode
option during mysqld
startup.
Note
It is better to start the server with the necessary mode (the default ENFORCING
is highly recommended). However, you can dynamically change it during runtime. For example, to set PXC Strict Mode to PERMISSIVE
, run the following command:
mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE;\n
Note
To further ensure data consistency, it is important to have all nodes in the cluster running with the same configuration, including the value of pxc_strict_mode
variable.
PXC Strict Mode validations are designed to ensure optimal operation for common cluster setups that do not require tech preview features and do not rely on operations not supported by Percona XtraDB Cluster.
Warning
If an unsupported operation is performed on a node with pxc_strict_mode
set to DISABLED
or PERMISSIVE
, it will not be validated on nodes where it is replicated to, even if the destination node has pxc_strict_mode
set to ENFORCING
.
This section describes the purpose and consequences of each validation.
"},{"location":"strict-mode.html#group-replication","title":"Group replication","text":"Group replication is a feature of MySQL that provides distributed state machine replication with strong coordination between servers. It is implemented as a plugin which, if activated, may conflict with PXC. Group replication cannot be activated to run alongside PXC. However, you can migrate to PXC from the environment that uses group replication.
For the strict mode to work correctly, make sure that the group replication plugin is not active. In fact, if pxc_strict_mode
is set to ENFORCING or MASTER, the server will stop with an error:
Error message with pxc_strict_mode
set to ENFORCING
or MASTER
Group replication cannot be used with PXC in strict mode.\n
If pxc_strict_mode
is set to DISABLED
you can use group replication at your own risk. Setting pxc_strict_mode
to PERMISSIVE
will result in a warning.
Warning message with pxc_strict_mode
set to PERMISSIVE
Using group replication with PXC is only supported for migration. Please\nmake sure that group replication is turned off once all data is migrated to PXC.\n
"},{"location":"strict-mode.html#storage-engine","title":"Storage engine","text":"Percona XtraDB Cluster currently supports replication only for tables that use a transactional storage engine (XtraDB or InnoDB). To ensure data consistency, the following statements should not be allowed for tables that use a non-transactional storage engine (MyISAM, MEMORY, CSV, and others):
Data manipulation statements that perform writing to table (for example, INSERT
, UPDATE
, DELETE
, etc.)
The following administrative statements: CHECK
, OPTIMIZE
, REPAIR
, and ANALYZE
TRUNCATE TABLE
and ALTER TABLE
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on an unsupported table.
ENFORCING
or MASTER
At startup, no validation is performed.
At runtime, any undesirable operation performed on an unsupported table is denied and an error is logged.
Note
Unsupported tables can be converted to use a supported storage engine.
"},{"location":"strict-mode.html#myisam-replication","title":"MyISAM replication","text":"Percona XtraDB Cluster provides support for replication of tables that use the MyISAM storage engine. The use of the MyISAM storage engine in a cluster is not recommended and if you use the storage engine, this is your own risk. Due to the non-transactional nature of MyISAM, the storage engine is not fully-supported in Percona XtraDB Cluster.
MyISAM replication is controlled using the wsrep_replicate_myisam
variable, which is set to OFF
by default. Due to its unreliability, MyISAM replication should not be enabled if you want to ensure data consistency.
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, you can set wsrep_replicate_myisam
to any value.
PERMISSIVE
At startup, if wsrep_replicate_myisam
is set to ON
, a warning is logged and startup continues.
At runtime, it is permitted to change wsrep_replicate_myisam
to any value, but if you set it to ON
, a warning is logged.
ENFORCING
or MASTER
At startup, if wsrep_replicate_myisam
is set to ON
, an error is logged and startup is aborted.
At runtime, any attempt to change wsrep_replicate_myisam
to ON
fails and an error is logged.
Note
The wsrep_replicate_myisam
variable controls replication for MyISAM tables, and this validation only checks whether it is allowed. Undesirable operations for MyISAM tables are restricted using the Storage engine validation.
Percona XtraDB Cluster supports only the default row-based binary logging format. In 8.0, setting the binlog_format variable to anything but ROW
at startup or runtime is not allowed regardless of the value of the pxc_strict_mode
variable.
Percona XtraDB Cluster cannot properly propagate certain write operations to tables that do not have primary keys defined. Undesirable operations include data manipulation statements that perform writing to table (especially DELETE
).
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on a table without an explicit primary key defined.
ENFORCING
or MASTER
At startup, no validation is performed.
At runtime, any undesirable operation performed on a table without an explicit primary key is denied and an error is logged.
"},{"location":"strict-mode.html#log-output","title":"Log output","text":"Percona XtraDB Cluster does not support tables in the MySQL database as the destination for log output. By default, log entries are written to file. This validation checks the value of the log_output variable.
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, you can set log_output
to any value.
PERMISSIVE
At startup, if log_output
is set only to TABLE
, a warning is logged and startup continues.
At runtime, it is permitted to change log_output
to any value, but if you set it only to TABLE
, a warning is logged.
ENFORCING
or MASTER
At startup, if log_output
is set only to TABLE
, an error is logged and startup is aborted.
At runtime, any attempt to change log_output
only to TABLE
fails and an error is logged.
Percona XtraDB Cluster provides only the tech-preview-level of support for explicit table locking operations, The following undesirable operations lead to explicit table locking and are covered by this validation:
LOCK TABLES
GET_LOCK()
and RELEASE_LOCK()
FLUSH TABLES <tables> WITH READ LOCK
Setting the SERIALIZABLE
transaction level
Depending on the selected mode, the following happens:
DISABLED
or MASTER
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed.
ENFORCING
At startup, no validation is performed.
At runtime, any undesirable operation is denied and an error is logged.
"},{"location":"strict-mode.html#auto-increment-lock-mode","title":"Auto-increment lock mode","text":"The lock mode for generating auto-increment values must be interleaved to ensure that each node generates a unique (but non-sequential) identifier.
This validation checks the value of the innodb_autoinc_lock_mode variable. By default, the variable is set to 1
(consecutive lock mode), but it should be set to 2
(interleaved lock mode).
Depending on the strict mode selected, the following happens:
DISABLED
At startup, no validation is performed.
PERMISSIVE
At startup, if innodb_autoinc_lock_mode
is not set to 2
, a warning is logged and startup continues.
ENFORCING
or MASTER
At startup, if innodb_autoinc_lock_mode
is not set to 2
, an error is logged and startup is aborted.
Note
This validation is not performed during runtime, because the innodb_autoinc_lock_mode
variable cannot be set dynamically.
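Because the variable is not dynamic, set it in the MySQL configuration file before starting the node, for example:
[mysqld]\ninnodb_autoinc_lock_mode=2\n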
With strict mode set to ENFORCING
, Percona XtraDB Cluster does not support CREATE TABLE \u2026 AS SELECT (CTAS) statements, because they combine both schema and data changes. Note that the tables in the SELECT clause should be present on all replication nodes.
With strict mode set to PERMISSIVE
or DISABLED
, CREATE TABLE \u2026 AS SELECT (CTAS) statements are replicated using the TOI method to ensure consistency.
In Percona XtraDB Cluster 5.7, CREATE TABLE \u2026 AS SELECT (CTAS) statements were replicated using DML write-sets when strict mode was set to PERMISSIVE
or DISABLED
.
Important
MyISAM tables are created and loaded even if wsrep_replicate_myisam
equals 1. Using the MyISAM storage engine with Percona XtraDB Cluster is not recommended. The support for MyISAM may be removed in a future release.
See also
MySQL Bug System: XID inconsistency on master-slave with CTAS https://bugs.mysql.com/bug.php?id=93948
Depending on the strict mode selected, the following happens:
Mode Behavior DISABLED At startup, no validation is performed. At runtime, all operations are permitted. PERMISSIVE At startup, no validation is performed. At runtime, all operations are permitted, but a warning is logged when a CREATE TABLE \u2026 AS SELECT (CTAS) operation is performed. ENFORCING At startup, no validation is performed. At runtime, any CTAS operation is denied and an error is logged.Important
Although CREATE TABLE \u2026 AS SELECT (CTAS) operations for temporary tables are permitted even in STRICT
mode, temporary tables should not be used as source tables in CREATE TABLE \u2026 AS SELECT (CTAS) operations because temporary tables are not present on all nodes.
If node-1
has a temporary and a non-temporary table with the same name, CREATE TABLE \u2026 AS SELECT (CTAS) on node-1
will use the temporary table and CREATE TABLE \u2026 AS SELECT (CTAS) on node-2
will use the non-temporary table, resulting in a data-level inconsistency.
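A minimal illustration (table names are placeholders): under ENFORCING the single CTAS statement is denied, and splitting it into separate schema and data changes is one possible workaround.
mysql> CREATE TABLE new_t AS SELECT * FROM t;   -- denied when pxc_strict_mode=ENFORCING\nmysql> CREATE TABLE new_t LIKE t;               -- schema change (DDL)\nmysql> INSERT INTO new_t SELECT * FROM t;       -- data change (DML)\n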
DISCARD TABLESPACE
and IMPORT TABLESPACE
are not replicated using TOI. This can lead to data inconsistency if executed on only one node.
Depending on the strict mode selected, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when you discard or import a tablespace.
ENFORCING
At startup, no validation is performed.
At runtime, discarding or importing a tablespace is denied and an error is logged.
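The statements covered by this validation look like the following (the table name t1 is a placeholder):
mysql> ALTER TABLE t1 DISCARD TABLESPACE;  -- denied under ENFORCING; warning only under PERMISSIVE\nmysql> ALTER TABLE t1 IMPORT TABLESPACE;   -- same behavior\n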
"},{"location":"strict-mode.html#major-version-check","title":"Major version check","text":"This validation checks that the protocol version is the same as the server major version. This validation protects the cluster against writes attempted on already upgraded nodes.
Expected outputERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of multiple major versions while accepting write workload with pxc_strict_mode = ENFORCING or MASTER\n
"},{"location":"strict-mode.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"tarball.html","title":"Install Percona XtraDB Cluster from Binary Tarball","text":"Percona provides generic tarballs with all required files and binaries for manual installation.
You can download the appropriate tarball package from https://www.percona.com/downloads/Percona-XtraDB-Cluster-80
"},{"location":"tarball.html#version-updates","title":"Version updates","text":"Starting with Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section lists only full or minimal tar files. Each tarball file replaces the multiple tar file listing used in earlier versions and supports all distributions.
Important
Starting with Percona XtraDB Cluster 8.0.21, Percona does not provide a tarball for RHEL 6/CentOS 6 (glibc2.12).
The version number in the tarball name must be substituted with the appropriate version number for your system. To indicate that such a substitution is needed in statements, we use <version-number>
.
For installations before Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section contains multiple tarballs based on the operating system names:
Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.bionic.tar.gz\nPercona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.buster.tar.gz\n...\n
For example, you can use curl
as follows:
$ curl -O https://downloads.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-8.0.27/binary/tarball/Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz\n
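After downloading, unpack the tarball into a directory of your choice (a sketch with a placeholder file name; substitute the file you actually downloaded):
$ tar xf Percona-XtraDB-Cluster_<version-number>_Linux.x86_64.glibc2.17-minimal.tar.gz\n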
Check your system to make sure the packages that the PXC version requires are installed.
"},{"location":"tarball.html#for-debian-or-ubuntu","title":"For Debian or Ubuntu:","text":"$ sudo apt-get install -y \\\nsocat libdbd-mysql-perl \\\nlibaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \\\nlibgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1\n
"},{"location":"tarball.html#for-red-hat-enterprise-linux-or-centos","title":"For Red Hat Enterprise Linux or CentOS:","text":"$ sudo yum install -y openssl socat \\\nprocps-ng chkconfig procps-ng coreutils shadow-utils \\\n
"},{"location":"tarball.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"telemetry.html","title":"Telemetry on Percona XtraDB Cluster","text":"Percona telemetry fills in the gaps in our understanding of how you use Percona XtraDB Cluster to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer to not share this information.
"},{"location":"telemetry.html#what-information-is-collected","title":"What information is collected","text":"At this time, telemetry is added only to the Percona packages and Docker images. Percona XtraDB Cluster collects only information about the installation environment. Future releases may add additional metrics.
Be assured that access to this raw data is rigorously controlled. Percona does not collect personal data. All data is anonymous and cannot be traced to a specific user. To learn more about our privacy practices, read our Percona Privacy statement.
An example of the data collected is the following:
[{\"id\" : \"c416c3ee-48cd-471c-9733-37c2886f8231\",\n\"product_family\" : \"PRODUCT_FAMILY_PXC\",\n\"instanceId\" : \"6aef422e-56a7-4530-af9d-94cc02198343\",\n\"createTime\" : \"2023-10-16T10:46:23Z\",\n\"metrics\":\n[{\"key\" : \"deployment\",\"value\" : \"PACKAGE\"},\n{\"key\" : \"pillar_version\",\"value\" : \"8.0.34-26\"},\n{\"key\" : \"OS\",\"value\" : \"Oracle Linux Server 8.8\"},\n{\"key\" : \"hardware_arch\",\"value\" : \"x86_64 x86_64\"}]}]\n
"},{"location":"telemetry.html#disable-telemetry","title":"Disable telemetry","text":"Starting with Percona XtraDB Cluster 8.0.34-26-1, telemetry is enabled by default. If you decide not to send usage data to Percona, you can set the PERCONA_TELEMETRY_DISABLE=1
environment variable either for the root user or system-wide in the operating system before the installation process.
Add the environment variable before the install process.
$ sudo PERCONA_TELEMETRY_DISABLE=1 apt install percona-xtradb-cluster\n
Add the environment variable before the install process.
$ sudo PERCONA_TELEMETRY_DISABLE=1 yum install percona-xtradb-cluster\n
Add the environment variable when running a command in a new container.
$ docker run -d -e MYSQL_ROOT_PASSWORD=test1234# -e PERCONA_TELEMETRY_DISABLE=1 -e CLUSTER_NAME=pxc-cluster1 --name=pxc-node1 percona/percona-xtradb-cluster:8.0\n
"},{"location":"telemetry.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"threading-model.html","title":"Percona XtraDB Cluster threading model","text":"Percona XtraDB Cluster creates a set of threads to service its operations, which are not related to existing MySQL threads. There are three main groups of threads:
"},{"location":"threading-model.html#applier-threads","title":"Applier threads","text":"Applier threads apply write-sets that the node receives from other nodes. Write messages are directed through gcv_recv_thread
.
The number of applier threads is controlled using the wsrep_slave_threads
variable or the wsrep_applier_threads
variable. The wsrep_slave_threads
variable was deprecated in the Percona XtraDB Cluster 8.0.26-16 release. The default value is 1
, which means at least one wsrep applier thread exists to process the request.
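For example, to run several applier threads, you might set the variable in the MySQL configuration file (the value 4 is only an example):
[mysqld]\nwsrep_applier_threads=4\n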
Applier threads wait for an event; once an event arrives, the thread applies it using the normal replica apply routine path and the relay log info apply path with wsrep customization. These threads are similar to replica worker threads (but not exactly the same).
Coordination is achieved using Apply and Commit Monitor. A transaction passes through two important states: APPLY
and COMMIT
. Every transaction registers itself with the apply monitor, where its apply order is defined. All transactions with an apply order sequence number (seqno) lower than this transaction's seqno are applied before this transaction is applied. The same is done for commit as well (last_left >= trx_.depends_seqno()).
There is only one rollback thread to perform rollbacks in case of conflicts.
Transactions executed in parallel can conflict and may need to roll back.
Applier transactions always take priority over local transactions. This is natural, as applier transactions have been accepted by the cluster, and some of the nodes may have already applied them. Local conflicting transactions still have a window to rollback.
All the transactions that need to be rolled back are added to the rollback queue, and the rollback thread is notified. The rollback thread then iterates over the queue and performs rollback operations.
If a transaction is active on a node, and a node receives a transaction write-set from the cluster group that conflicts with the local active transaction, then such local transactions are always treated as a victim transaction to roll back.
Transactions can be in a commit state or an execution stage when the conflict arises. Local transactions in the execution stage are forcibly killed so that the waiting applier transaction is allowed to proceed. Local transactions in the commit stage fail with a certification error.
"},{"location":"threading-model.html#other-threads","title":"Other threads","text":""},{"location":"threading-model.html#service-thread","title":"Service thread","text":"This thread is created during boot-up and used to perform auxiliary services. It has two main functions:
It releases the GCache buffer after the cached write-set is purged up to the said level.
It notifies the cluster group that the respective node has committed a transaction up to this level. Each node maintains some basic status info about other nodes in the cluster. On receiving the message, the information is updated in this local metadata.
The gcs_recv_thread
thread is the first one to see all the messages received in a group.
It will try to assign actions against each message it receives. It adds these messages to a central FIFO queue, which are then processed by the Applier threads. Messages can include different operations like state change, configuration update, flow-control, and so on.
One important action is processing a write-set, which actually is applying transactions to database objects.
"},{"location":"threading-model.html#gcomm-connection-thread","title":"Gcomm connection thread","text":"The gcomm connection thread GCommConn::run_fn
is used to co-ordinate the low-level group communication activity. Think of it as a black box meant for communication.
Besides the above, some threads are created on a needed basis. SST creates threads for donor and joiner (which eventually forks out a child process to host the needed SST script), IST creates receiver and async sender threads, PageStore creates a background thread for removing the files that were created.
If the checksum is enabled and the replicated write-set is big enough, the checksum is done as part of a separate thread.
"},{"location":"threading-model.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"trademark-policy.html","title":"Trademark policy","text":"This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.
Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.
Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.
First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.
Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.
Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.
Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.
Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.
In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.
In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.
"},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"upgrade-from-backup.html","title":"Restore a 5.7 backup to an 8.0 cluster","text":"Use Percona XtraBackup to back up the source server data and restore the data to a target server, and then upgrade the server to a different version of Percona XtraDB Cluster.
Downgrading is not supported.
"},{"location":"upgrade-from-backup.html#restore-a-database-with-a-different-server-version","title":"Restore a database with a different server version","text":"Review Upgrade Percona XtraDB cluster.
Upgrade the nodes one at a time. The primary node should be the last node to be upgraded. The following steps are required on each node.
Back up the data on the source server.
Install the same database version as the source server on the target server.
Restore with a copy-back
operation on the target server.
Start the database server on the target server.
Do a slow shutdown of the database server with the SET GLOBAL innodb_fast_shutdown=0
statement. This shutdown type flushes InnoDB operations before completing and may take longer.
Install the new database server version on the target server.
Start the new database server version on the restored data directory.
Perform any other upgrade steps as necessary.
To ensure the upgrade was successful, check the data.
"},{"location":"upgrade-from-backup.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"upgrade-guide.html","title":"Upgrade Percona XtraDB Cluster","text":"The following documents contain details about relevant changes in the 8.0 series of MySQL and Percona Server for MySQL. Make sure you deal with any incompatible features and variables mentioned in these documents when upgrading to Percona XtraDB Cluster 8.0.
Upgrading MySQL
Upgrading from MySQL 5.7 to 8.0
The pxc_encrypt_cluster_traffic
variable, which enables traffic encryption, is set to ON
by default in Percona XtraDB Cluster 8.0.
Unless you configure a node accordingly (each node in your cluster must use the same SSL certificates) or try to join a cluster running PXC 5.7 which unencrypted cluster traffic, the node will not be able to join resulting in an error.
The error message... [ERROR] ... [Galera] handshake with remote endpoint ...\nThis error is often caused by SSL issues. ...\n
See also
sections Encrypting PXC Traffic, Configuring Nodes for Write-Set Replication
"},{"location":"upgrade-guide.html#not-recommended-to-mix-pxc-57-nodes-with-pxc-80-nodes","title":"Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes","text":"Shut down the cluster and upgrade each node to PXC 8.0. It is important that you make backups before attempting an upgrade.
"},{"location":"upgrade-guide.html#pxc-strict-mode-is-enabled-by-default","title":"PXC strict mode is enabled by default","text":"Percona XtraDB Cluster in 8.0 runs with PXC Strict Mode enabled by default. This will deny any unsupported operations and may halt the server if a strict mode validation fails. It is recommended to first start the node with the pxc_strict_mode
variable set to PERMISSIVE
in the MySQL configuration file.
All configuration settings are stored in the default MySQL configuration file:
Path on Debian and Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf
Path on Red Hat and CentOS: /etc/my.cnf
After you check the log for any tech preview features or unsupported features and you have fixed any of the encountered incompatibilities, set the variable back to ENFORCING
at run time:
mysql> SET pxc_strict_mode=ENFORCING;\n
Restarting the node with the updated configuration file also sets variable to ENFORCING
.
All configuration settings are stored in the default MySQL configuration file:
Path on Debian and Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf
Path on Red Hat and CentOS: /etc/my.cnf
Before you start the upgrade, move your custom settings from /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf
(on Debian and Ubuntu) or from /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
(on Red Hat and CentOS) to the new location accordingly.
Note
If you have moved your my.cnf file to a different location and added a symlink to /etc/my.cnf
, the RPM package manager, when upgrading, can delete the symlink and put a default my.cnf file in /etc/.
In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password
. The ProxySQL option \u2013syncusers will not work if the Percona XtraDB Cluster user is created using caching_sha2_password
. Use the mysql_native_password
authentication plugin in these cases.
Be sure you are running on the latest 5.7 version before you upgrade to 8.0.
"},{"location":"upgrade-guide.html#mysql_upgrade-is-part-of-sst","title":"mysql_upgrade is part of SST","text":"mysql_upgrade is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.
"},{"location":"upgrade-guide.html#major-upgrade-scenarios","title":"Major upgrade scenarios","text":"Upgrading PXC from 5.7 to 8.0 may have slightly different strategies depending on the configuration and workload on your PXC cluster.
Note that the new default value of pxc-encrypt-cluster-traffic
(set to ON versus OFF in PXC 5.7) requires additional care. You cannot join a 5.7 node to a PXC 8.0 cluster unless the node has traffic encryption enabled as the cluster may not have some nodes with traffic encryption enabled and some nodes with traffic encryption disabled. For more information, see Traffic encryption is enabled by default.
If there is no active parallel workload or the cluster has read-only workload while upgrading the nodes, complete the following procedure for each node in the cluster:
Shutdown one of the node 5.7 cluster nodes.
Remove 5.7 PXC packages without removing the data-directory.
Install PXC 8.0 packages.
Restart the mysqld service.
Important
Before upgrading, make sure your application can work with a reduced cluster size. If the cluster operates with an even number of nodes, the cluster may have split-brain.
This upgrade flow auto-detects the presence of the 5.7 data directory and trigger the upgrade as part of the node bootup process. The data directory is upgraded to be compatible with PXC 8.0. Then the node joins the cluster and enters synced state. The 3-node cluster is restored with 2 nodes running PXC 5.7 and 1 node running PXC 8.0.
Note
Since SST is not involved, SST based auto-upgrade flow is not started.
PXC 8.0 uses Galera 4 while PXC 5.7 uses Galera-3. The cluster will continue to use the protocol version 3 used in Galera 3 effectively limiting some of the functionality. With all nodes upgraded to version 8.0, protocol version 4 is applied.
Tip
The protocol version is stored in the protocol_version
column of the wsrep_cluster
table.
mysql> USE mysql;\n
mysql> SELECT protocol_version from wsrep_cluster;\n
The example of the output is the following:
+------------------+\n| protocol_version |\n+------------------+\n| 4 |\n+------------------+\n1 row in set (0.00 sec)\n
As soon as the last 5.7 node shuts down, the configuration of the remaining two nodes is updated to use protocol version 4. A new upgraded node will then join using protocol version 4 and the whole cluster will maintain protocol version 4 enabling the support for additional Galera 4 facilities.
It may take longer to join the last upgraded node since it will invite IST to obtain the configuration changes.
Note
Starting from Galera 4, the configuration changes are cached to gcache
and the configuration changes are donated as part of IST or SST to help build the certification queue on the JOINING node. As other nodes (say n2 and n3), already using protocol version 4, donate the configuration changes when the JOINER node is booted.
The situation was different for the previous and penultimate nodes since the donation of the configuration changes is not supported by protocol version 3 that they used.
With IST involved on joining the last node, the smart IST flow is triggered to take care of the upgrade even before MySQL starts to look at the data directory.
Important
It is not recommended to restart the last node without upgrading it.
"},{"location":"upgrade-guide.html#scenario-upgrade-from-pxc-56-to-pxc-80","title":"Scenario: Upgrade from PXC 5.6 to PXC 8.0","text":"First, upgrade PXC from 5.6 to the latest version of PXC 5.7. Then proceed with the upgrade using the procedure described in Scenario: No active parallel workload or with read-only workload.
"},{"location":"upgrade-guide.html#minor-upgrade","title":"Minor upgrade","text":"To upgrade the cluster, follow these steps for each node:
Make sure that all nodes are synchronized.
Stop the mysql
service:
$ sudo service mysql stop\n
Upgrade Percona XtraDB Cluster and Percona XtraBackup packages. For more information, see Installing Percona XtraDB Cluster.
Back up grastate.dat
, so that you can restore it if it is corrupted or zeroed out due to network issue.
Now, start the cluster node with 8.0 packages installed, PXC will upgrade the data directory as needed - either as part of the startup process or a state transfer (IST/SST).
In most cases, starting the mysql
service should run the node with your previous configuration. For more information, see Adding Nodes to Cluster.
$ sudo service mysql start\n
Note
On CentOS, the /etc/my.cnf configuration file is renamed to my.cnf.rpmsave
. Make sure to rename it back before joining the upgraded node back to the cluster.
PXC Strict Mode is enabled by default, which may result in denying any unsupported operations and may halt the server. For more information, see pxc-strict-mode is enabled by default.
pxc-encrypt-cluster-traffic
is enabled by default. You need to configure each node accordingly and avoid joining a cluster with unencrypted cluster traffic. For more information, see Traffic encryption is enabled by default.
Repeat this procedure for the next node in the cluster until you upgrade all nodes.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"verify-replication.html","title":"Verify replication","text":"Use the following procedure to verify replication by creating a new database on the second node, creating a table for that database on the third node, and adding some records to the table on the first node.
Create a new database on the second node:
mysql@pxc2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@pxc3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve rows from that table on the second node:
mysql@pxc2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
Consider installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries. This is the recommended high-availability solution for Percona XtraDB Cluster. For more information, see Load balancing with ProxySQL.
Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"virtual-sandbox.html","title":"Set up a testing environment with ProxySQL","text":"This section describes how to set up Percona XtraDB Cluster in a virtualized testing environment based on ProxySQL. To test the cluster, we will use the sysbench benchmark tool.
It is assumed that each PXC node is installed on Amazon EC2 micro instances running CentOS 7. However, the information in this section should apply if you used another virtualization technology (for example, VirtualBox) with any Linux distribution.
Each of the tree Percona XtraDB Cluster nodes is installed on a separate virtual machine. One more virtual machine has ProxySQL, which redirects requests to the nodes.
Tip
Running ProxySQL on an application server, instead of having it as a dedicated entity, removes the unnecessary extra network roundtrip, because the load balancing layer in Percona XtraDB Cluster scales well with application servers.
Install Percona XtraDB Cluster on three cluster nodes, as described in Configuring Percona XtraDB Cluster on CentOS.
On the client node, install ProxySQL and sysbench
:
$ yum -y install proxysql2 sysbench\n
When all cluster nodes are started, configure ProxySQL using the admin interface.
Tip
To connect to the ProxySQL admin interface, you need a mysql
client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql
client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally.
To connect to the admin interface, use the credentials, host name and port specified in the global variables.
Warning
Do not use default credentials in production!
The following example shows how to connect to the ProxySQL admin interface with default credentials (assuming that ProxySQL IP is 192.168.70.74):
root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
To see the ProxySQL databases and tables use the SHOW DATABASES
and SHOW TABLES
commands:
mysql> SHOW DATABASES;\n
The following output shows the list of the ProxySQL databases:
Expected output+-----+---------------+-------------------------------------+\n| seq | name | file |\n+-----+---------------+-------------------------------------+\n| 0 | main | |\n| 2 | disk | /var/lib/proxysql/proxysql.db |\n| 3 | stats | |\n| 4 | monitor | |\n| 5 | stats_monitor | /var/lib/proxysql/proxysql_stats.db |\n+-----+---------------+-------------------------------------+\n5 rows in set (0.00 sec)\n
mysql> SHOW TABLES;\n
The following output shows the list of tables:
Expected output+----------------------------------------------------+\n| tables |\n+----------------------------------------------------+\n| global_variables |\n| mysql_aws_aurora_hostgroups |\n| mysql_collations |\n| mysql_firewall_whitelist_rules |\n| mysql_firewall_whitelist_sqli_fingerprints |\n| mysql_firewall_whitelist_users |\n| mysql_galera_hostgroups |\n| mysql_group_replication_hostgroups |\n| mysql_query_rules |\n| mysql_query_rules_fast_routing |\n| mysql_replication_hostgroups |\n| mysql_servers |\n| mysql_users |\n| proxysql_servers |\n| restapi_routes |\n| runtime_checksums_values |\n| runtime_global_variables |\n| runtime_mysql_aws_aurora_hostgroups |\n| runtime_mysql_firewall_whitelist_rules |\n| runtime_mysql_firewall_whitelist_sqli_fingerprints |\n| runtime_mysql_firewall_whitelist_users |\n| runtime_mysql_galera_hostgroups |\n| runtime_mysql_group_replication_hostgroups |\n| runtime_mysql_query_rules |\n| runtime_mysql_query_rules_fast_routing |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers |\n| runtime_mysql_users |\n| runtime_proxysql_servers |\n| runtime_restapi_routes |\n| runtime_scheduler |\n| scheduler |\n+----------------------------------------------------+\n32 rows in set (0.00 sec)\n
For more information about admin databases and tables, see Admin Tables
Note
ProxySQL has 3 areas where the configuration can reside:
MEMORY (your current working place)
RUNTIME (the production settings)
DISK (durable configuration, saved inside an SQLITE database)
When you change a parameter, you change it in MEMORY area. That is done by design to allow you to test the changes before pushing to production (RUNTIME), or saving them to disk.
To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.71',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.72',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.73',10,3306,1000);\n
ProxySQL v2.0 supports PXC natlively. It uses the concept of hostgroups (see the value of hostgroup_id in the mysql_servers table) to group cluster nodes to balance the load in a cluster by routing different types of traffic to different groups.
This information is stored in the [runtime_]mysql_galera_hostgroups table.
Columns of the [runtime_]mysql_galera_hostgroups
table
1
(Yes) to inidicate that this configuration should be used; 0
(No) - otherwise max_writers The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is 1
. The value in this column may not exceed the total number of nodes. writer_is_also_reader 1
(Yes) to keep the given node in both reader_hostgroup
and writer_hostgroup
. 0
(No) to remove the given node from reader_hostgroup
if it already belongs to writer_hostgroup
. max_transactions_behind As soon as the value of :variable:wsrep_local_recv_queue
exceeds the number stored in this column the given node is set to OFFLINE
. Set the value carefully based on the behaviour of the node. comment Helpful extra information about the given node Make sure that the variable mysql-server_version refers to the correct version. For Percona XtraDB Cluster 8.0, set it to 8.0 accordingly:
mysql> UPDATE GLOBAL_VARIABLES\nSET variable_value='8.0'\nWHERE variable_name='mysql-server_version';\n\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n
See also
Percona Blogpost: ProxySQL Native Support for Percona XtraDB Cluster (PXC) https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/
Given the nodes from the mysql_servers table, you may set up the hostgroups as follows:
mysql> INSERT INTO mysql_galera_hostgroups (\nwriter_hostgroup, backup_writer_hostgroup, reader_hostgroup,\noffline_hostgroup, active, max_writers, writer_is_also_reader,\nmax_transactions_behind)\nVALUES (10, 12, 11, 13, 1, 1, 2, 100);\n
This command configures ProxySQL as follows:
WRITER hostgroup
hostgroup `10`\n
READER hostgroup
hostgroup `11`\n
BACKUP WRITER hostgroup
hostgroup `12`\n
OFFLINE hostgroup
hostgroup `13`\n
Set up ProxySQL query rules for read/write split using the mysql_query_rules table:
mysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',10,1,'^SELECT.*FOR UPDATE',1);\n\nmysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',11,1,'^SELECT ',1);\n\nmysql> LOAD MYSQL QUERY RULES TO RUNTIME;\nmysql> SAVE MYSQL QUERY RULES TO DISK;\n\nmysql> select hostgroup_id,hostname,port,status,weight from runtime_mysql_servers;\n
Expected output +--------------+----------------+------+--------+--------+\n| hostgroup_id | hostname | port | status | weight |\n+--------------+----------------+------+--------+--------+\n| 10 | 192.168.70.73 | 3306 | ONLINE | 1000 |\n| 11 | 192.168.70.72 | 3306 | ONLINE | 1000 |\n| 11 | 192.168.70.71 | 3306 | ONLINE | 1000 |\n| 12 | 192.168.70.72 | 3306 | ONLINE | 1000 |\n| 12 | 192.168.70.71 | 3306 | ONLINE | 1000 |\n+--------------+----------------+------+--------+--------+\n5 rows in set (0.00 sec)\n
See also
ProxySQL Blog: MySQL read/write split with ProxySQL https://proxysql.com/blog/configure-read-write-split/ ProxySQL Documentation: mysql_query_rules
table https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules
Notice that all servers were inserted into the mysql_servers table with the READER hostgroup set to 10 (see the value of the hostgroup_id column):
mysql> SELECT * FROM mysql_servers;\n
Expected output +--------------+---------------+------+--------+ +---------+\n| hostgroup_id | hostname | port | weight | ... | comment |\n+--------------+---------------+------+--------+ +---------+\n| 10 | 192.168.70.71 | 3306 | 1000 | | |\n| 10 | 192.168.70.72 | 3306 | 1000 | | |\n| 10 | 192.168.70.73 | 3306 | 1000 | | |\n+--------------+---------------+------+--------+ +---------+\n3 rows in set (0.00 sec)\n
This configuration implies that ProxySQL elects the writer automatically. If the elected writer goes offline, ProxySQL assigns another (failover). You might tweak this mechanism by assigning a higher weight to a selected node. ProxySQL directs all write requests to this node. However, it also becomes the mostly utilized node for reading requests. In case of a failback (a node is put back online), the node with the highest weight is automatically elected for write requests.
"},{"location":"virtual-sandbox.html#creating-a-proxysql-monitoring-user","title":"Creating a ProxySQL monitoring user","text":"To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE
privilege on any node in the cluster and configure the user in ProxySQL.
The following example shows how to add a monitoring user on Node 2:
mysql> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'ProxySQLPa55';\nmysql> GRANT USAGE ON *.* TO 'proxysql'@'%';\n
The following example shows how to configure this user on the ProxySQL node:
mysql> UPDATE global_variables SET variable_value='proxysql'\nWHERE variable_name='mysql-monitor_username';\n\nmysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\nWHERE variable_name='mysql-monitor_password';\n
"},{"location":"virtual-sandbox.html#saving-and-loading-the-configuration","title":"Saving and loading the configuration","text":"To load this configuration at runtime, issue the LOAD
command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue the SAVE
command.
mysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql> SAVE MYSQL VARIABLES TO DISK;\n
To ensure that monitoring is enabled, check the monitoring logs:
mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+----------------------+---------------+\n| hostname | port | time_start_us | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695 | NULL |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779 | NULL |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627 | NULL |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557 | NULL |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737 | NULL |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447 | NULL |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+-------------------+------------+\n| hostname | port | time_start_us | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948 | NULL |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803 | NULL |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711 | NULL |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783 | NULL |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631 | NULL |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542 | NULL |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n
The previous examples show that ProxySQL is able to connect and ping the nodes you added.
To enable monitoring of these nodes, load them at runtime:
mysql> LOAD MYSQL SERVERS TO RUNTIME;\n
"},{"location":"virtual-sandbox.html#creating-proxysql-client-user","title":"Creating ProxySQL Client User","text":"ProxySQL must have users that can access backend nodes to manage connections.
To add a user, insert credentials into mysql_users
table:
mysql> INSERT INTO mysql_users (username,password) VALUES ('appuser','$3kRetp@$sW0rd');\n
The example of the output is the following:
Expected outputQuery OK, 1 row affected (0.00 sec)\n
Note
ProxySQL currently doesn\u2019t encrypt passwords.
See also
More information about password encryption in ProxySQL
Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):
mysql> LOAD MYSQL USERS TO RUNTIME;\nmysql> SAVE MYSQL USERS TO DISK;\n
To confirm that the user has been set up correctly, you can try to log in:
root@proxysql:~# mysql -u appuser -p$3kRetp@$sW0rd -h 127.0.0.1 -P 6033\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n
To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:
mysql> CREATE USER 'appuser'@'192.168.70.74'\nIDENTIFIED WITH mysql_native_password by '$3kRetp@$sW0rd';\n\nmysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74';\n
"},{"location":"virtual-sandbox.html#testing-the-cluster-with-the-sysbench-benchmark-tool","title":"Testing the cluster with the sysbench benchmark tool","text":"After you set up Percona XtraDB Cluster in your testing environment, you can test it using the sysbench
benchmarking tool.
Create a database (sysbenchdb in this example; you can use a different name):
mysql> CREATE DATABASE sysbenchdb;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Populate the table with data for the benchmark. Note that you should pass the database you have created as the value of the --mysql-db
parameter, and the name of the user who has full access to this database as the value of the --mysql-user
parameter:
$ sysbench /usr/share/sysbench/oltp_insert.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--table-size=1000 prepare\n
Run the benchmark on port 6033:
$ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--skip-trx=true --table-size=1000 --time=100 --report-interval=10 run\n
Related sections and additional reading
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-files-index.html","title":"Index of files created by PXC","text":"GRA_\\*.log
These files contain binlog events in ROW format representing the failed transaction. That means that the replica thread was not able to apply one of the transactions. For each of those file, a corresponding warning or error message is present in the mysql error log file. Those error can also be false positives like a bad DDL
statement (dropping a table that doesn\u2019t exists for example) and therefore nothing to worry about. However it\u2019s always recommended to check these log to understand what\u2019s is happening.
To be able to analyze these files binlog header needs to be added to the log file. To create the GRA_HEADER
file you need an instance running with binlog_checksum
set to NONE
and extract first 120 bytes from the binlog file:
$ head -c 123 mysqld-bin.000001 > GRA_HEADER\n$ cat GRA_HEADER > /var/lib/mysql/GRA_1_2-bin.log\n$ cat /var/lib/mysql/GRA_1_2.log >> /var/lib/mysql/GRA_1_2-bin.log\n$ mysqlbinlog -vvv /var/lib/mysql/GRA_1_2-bin.log\n\n/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;\n/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;\nDELIMITER /*!*/;\n# at 4\n#160809 16:04:05 server id 3 end_log_pos 123 Start: binlog v 4, server v 8.0-log created 160809 16:04:05 at startup\n# Warning: this binlog is either in use or was not closed properly.\nROLLBACK/*!*/;\nBINLOG '\nnbGpVw8DAAAAdwAAAHsAAAABAAQANS43LjEyLTVyYzEtbG9nAAAAAAAAAAAAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAACdsalXEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA\nALfQ8hw=\n'/*!*/;\n# at 123\n#160809 16:05:49 server id 2 end_log_pos 75 Query thread_id=11 exec_time=0 error_code=0\nuse `test`/*!*/;\nSET TIMESTAMP=1470738949/*!*/;\nSET @@session.pseudo_thread_id=11/*!*/;\nSET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;\nSET @@session.sql_mode=1436549152/*!*/;\nSET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;\n/*!\\C utf8 *//*!*/;\nSET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;\nSET @@session.lc_time_names=0/*!*/;\nSET @@session.collation_database=DEFAULT/*!*/;\ndrop table t\n/*!*/;\nSET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;\nDELIMITER ;\n# End of log file\n/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;\n/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;\n
This information can be used for checking the MySQL error log for the corresponding error message.
Error message160805 9:33:37 8:52:21 [ERROR] Slave SQL: Error 'Unknown table 'test'' on query. Default database: 'test'. Query: 'drop table test', Error_code: 1051\n160805 9:33:37 8:52:21 [Warning] WSREP: RBR event 1 Query apply warning: 1, 3\n
In this example DROP TABLE
statement was executed on a table that doesn\u2019t exist.
gcache.page
See gcache.page_size
See also
Percona Database Performance Blog: All You Need to Know About GCache (Galera-Cache) https://www.percona.com/blog/2016/11/16/all-you-need-to-know-about-gcache-galera-cache/
galera.cache
This file is used as a main writeset store. It\u2019s implemented as a permanent ring-buffer file that is preallocated on disk when the node is initialized. File size can be controlled with the variable gcache.size
. If this value is bigger, more writesets are cached and chances are better that the re-joining node will get IST instead of SST. Filename can be changed with the gcache.name
variable.
grastate.dat
This file contains the Galera state information.
version
- grastate version
uuid
- a unique identifier for the state and the sequence of changes it undergoes.For more information on how UUID is generated see UUID.
seqno
- Ordinal Sequence Number, a 64-bit signed integer used to denote the position of the change in the sequence. seqno
is 0
when no writesets have been generated or applied on that node, i.e., not applied/generated across the lifetime of a grastate
file. -1
is a special value for the seqno
that is kept in the grastate.dat
while the server is running to allow Galera to distinguish between a clean and an unclean shutdown. Upon a clean shutdown, the correct seqno
value is written to the file. So, when the server is brought back up, if the value is still -1
, this means that the server did not shut down cleanly. If the value is greater than 0
, this means that the shutdown was clean. -1
is then written again to the file in order to allow the server to correctly detect if the next shutdown was clean in the same manner.
cert_index
- cert index restore through grastate is not implemented yet
Examples of this file look like this:
In case server node has this state when not running it means that that node crashed during the transaction processing.
# GALERA saved state\nversion: 2.1\nuuid: 1917033b-7081-11e2-0800-707f5d3b106b\nseqno: -1\ncert_index:\n
In case server node has this state when not running it means that the node was gracefully shut down.
# GALERA saved state\nversion: 2.1\nuuid: 1917033b-7081-11e2-0800-707f5d3b106b\nseqno: 5192193423942\ncert_index:\n
In case server node has this state when not running it means that the node crashed during the DDL.
# GALERA saved state\nversion: 2.1\nuuid: 00000000-0000-0000-0000-000000000000\nseqno: -1\ncert_index:\n
gvwstate.dat
This file is used for Primary Component recovery feature. This file is created once primary component is formed or changed, so you can get the latest primary component this node was in. And this file is deleted when the node is shutdown gracefully.
First part contains the node UUID information. Second part contains the view information. View information is written between #vwbeg
and #vwend
. View information consists of:
* view_id: [view_type] [view_uuid] [view_seq]. - `view_type` is always `3` which means primary view. `view_uuid` and `view_seq` identifies a unique view, which could be perceived as identifier of this primary component.\n\n* bootstrap: [bootstarp_or_not]. - it could be `0` or `1`, but it does not affect primary component recovery process now.\n\n* member: [node\u2019s uuid] [node\u2019s segment]. - it represents all nodes in this primary component.\n\n??? example \"Example of the file\"\n\n ```{.text .no-copy}\n my_uuid: c5d5d990-30ee-11e4-aab1-46d0ed84b408\n #vwbeg\n view_id: 3 bc85bd53-31ac-11e4-9895-1f2ce13f2542 2 \n bootstrap: 0\n member: bc85bd53-31ac-11e4-9895-1f2ce13f2542 0\n member: c5d5d990-30ee-11e4-aab1-46d0ed84b408 0\n #vwend\n ```\n
"},{"location":"wsrep-files-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-provider-index.html","title":"Index of wsrep_provider options","text":"The following variables can be set and checked in the wsrep_provider_options
variable. The value of the variable can be changed in the MySQL configuration file, my.cnf
, or by setting the variable value in the MySQL client.
To change the value in my.cnf
, the following syntax should be used:
$ wsrep_provider_options=\"variable1=value1;[variable2=value2]\"\n
For example to set the size of the Galera buffer storage to 512 MB, specify the following in my.cnf
:
$ wsrep_provider_options=\"gcache.size=512M\"\n
Dynamic variables can be changed from the MySQL client using the SET GLOBAL
command. For example, to change the value of the pc.ignore_sb
, use the following command:
mysql> SET GLOBAL wsrep_provider_options=\"pc.ignore_sb=true\";\n
"},{"location":"wsrep-provider-index.html#index","title":"Index","text":""},{"location":"wsrep-provider-index.html#base_dir","title":"base_dir
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of datadir
This variable specifies the data directory.
"},{"location":"wsrep-provider-index.html#base_host","title":"base_host
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address
This variable sets the value of the node\u2019s base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
"},{"location":"wsrep-provider-index.html#base_port","title":"base_port
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 4567 This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
"},{"location":"wsrep-provider-index.html#certlog_conflicts","title":"cert.log_conflicts
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: no This variable is used to specify if the details of the certification failures should be logged.
"},{"location":"wsrep-provider-index.html#certoptimistic_pa","title":"cert.optimistic_pa
","text":"Enabled
Allows the full range of parallelization as determined by the certification\nalgorithm.\n
Disabled
Limits the parallel applying window so that it does not exceed the parallel\napplying window seen on the source. In this case, the action starts applying\nno sooner than all actions on the source are committed.\n
Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: No See also
Galera Cluster Documentation: * Parameter: cert.optimistic_pa * Setting parallel slave threads
"},{"location":"wsrep-provider-index.html#debug","title":"debug
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: no When this variable is set to yes
, it will enable debugging.
evs.auto_evict
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 Number of entries allowed on delayed list until auto eviction takes place. Setting value to 0
disables auto eviction protocol on the node, though node response times will still be monitored. EVS protocol version (evs.version
) 1
is required to enable auto eviction.
evs.causal_keepalive_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of evs.keepalive_period
This variable is used for development purposes and shouldn\u2019t be used by regular users.
"},{"location":"wsrep-provider-index.html#evsdebug_log_mask","title":"evs.debug_log_mask
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0x1 This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when wsrep_debug
is set to ON
.
evs.delay_margin
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT1S Time period that a node can delay its response from expected until it is added to delayed list. The value must be higher than the highest RTT between nodes.
"},{"location":"wsrep-provider-index.html#evsdelayed_keep_period","title":"evs.delayed_keep_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S Time period that node is required to remain responsive until one entry is removed from delayed list.
"},{"location":"wsrep-provider-index.html#evsevict","title":"evs.evict
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Manual eviction can be triggered by setting the evs.evict
to a certain node value. Setting the evs.evict
to an empty string will clear the evict list on the node where it was set.
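For example, manual eviction could be triggered at runtime as sketched below; the placeholder stands for the wsrep_gcomm_uuid value of the node to evict:
mysql> SET GLOBAL wsrep_provider_options='evs.evict=<gcomm UUID of the node>'; # placeholder UUID\n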
evs.inactive_check_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0.5S This variable defines how often to check for peer inactivity.
"},{"location":"wsrep-provider-index.html#evsinactive_timeout","title":"evs.inactive_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT15S This variable defines the inactivity limit, once this limit is reached the node will be considered dead.
"},{"location":"wsrep-provider-index.html#evsinfo_log_mask","title":"evs.info_log_mask
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable is used for controlling the extra EVS info logging.
"},{"location":"wsrep-provider-index.html#evsinstall_timeout","title":"evs.install_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT7.5S This variable defines the timeout on waiting for install message acknowledgments.
"},{"location":"wsrep-provider-index.html#evsjoin_retrans_period","title":"evs.join_retrans_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S This variable defines how often to retransmit EVS join messages when forming cluster membership.
"},{"location":"wsrep-provider-index.html#evskeepalive_period","title":"evs.keepalive_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S This variable defines how often to emit keepalive beacons (in the absence of any other traffic).
"},{"location":"wsrep-provider-index.html#evsmax_install_timeouts","title":"evs.max_install_timeouts
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1 This variable defines how many membership install rounds to try before giving up (total rounds will be evs.max_install_timeouts
+ 2).
evs.send_window
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 10 This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example,512). The value must not be less than evs.user_send_window
.
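An illustrative WAN tuning sketch for the configuration file; the window sizes are example values only:
# example WAN values\nwsrep_provider_options=\"evs.user_send_window=512;evs.send_window=512\"\n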
evs.stats_report_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1M This variable defines the control period of EVS statistics reporting.
"},{"location":"wsrep-provider-index.html#evssuspect_timeout","title":"evs.suspect_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S This variable defines the inactivity period after which the node is \u201csuspected\u201d to be dead. If all remaining nodes agree on that, the node will be dropped out of cluster even before evs.inactive_timeout
is reached.
evs.use_aggregate
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When this variable is enabled, smaller packets will be aggregated into one.
"},{"location":"wsrep-provider-index.html#evsuser_send_window","title":"evs.user_send_window
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 4 This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example, 512).
"},{"location":"wsrep-provider-index.html#evsversion","title":"evs.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable defines the EVS protocol version. Auto eviction is enabled when this variable is set to 1
. Default 0
is set for backwards compatibility.
evs.view_forget_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: P1D This variable defines the timeout after which past views will be dropped from history.
"},{"location":"wsrep-provider-index.html#gcachedir","title":"gcache.dir
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: datadir
This variable can be used to define the location of the galera.cache
file.
gcache.freeze_purge_at_seqno
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0 This variable controls the purging of the gcache and enables retaining more data in it. This variable makes it possible to use IST (Incremental State Transfer) when the node rejoins instead of SST (State Snapshot Transfer).
Set this variable on an existing node of the cluster (that will continue to be part of the cluster and can act as a potential donor node). This node continues to retain the write-sets and allows restarting the node to rejoin by using IST.
See also
Percona Database Performance Blog:
All You Need to Know About GCache (Galera-Cache)
Want IST Not SST for Node Rejoins? We Have a Solution!
The gcache.freeze_purge_at_seqno
variable takes three values:
-1 (default)
No freezing of gcache, the purge operates as normal.
A valid seqno in gcache
The freeze purge of write-sets may not be smaller than the selected seqno. The best way to select an optimal value is to use the value of the wsrep_last_applied variable
from the node that you plan to shut down.
now The freeze purge of write-sets is no less than the smallest seqno currently in gcache. Using this value results in freezing the gcache-purge instantly. Use this value if selecting a valid seqno in gcache is difficult.
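For example, the purge could be frozen at runtime on a node that will act as a potential donor; this is a sketch, and a specific seqno would normally be taken from wsrep_last_applied on the node you plan to shut down:
mysql> SET GLOBAL wsrep_provider_options='gcache.freeze_purge_at_seqno=now'; # or a valid seqno from gcache\n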
"},{"location":"wsrep-provider-index.html#gcachekeep_pages_count","title":"gcache.keep_pages_count
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0 This variable is used to limit the number of overflow pages rather than the total memory occupied by all overflow pages. Whenever gcache.keep_pages_count
is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest).
Whenever either the gcache.keep_pages_count
or the gcache.keep_pages_size
variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.
gcache.keep_pages_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: No Default Value: 0 This variable is used to limit the total size of overflow pages rather than the count of all overflow pages. Whenever gcache.keep_pages_size
is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest) until the total size is below the specified value.
Whenever either the gcache.keep_pages_count
or the gcache.keep_pages_size
variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.
gcache.mem_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable has been deprecated in 5.6.22-25.8
and shouldn\u2019t be used as it could cause a node to crash.
This variable was used to define how much RAM is available for the system.
"},{"location":"wsrep-provider-index.html#gcachename","title":"gcache.name
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql/galera.cache This variable can be used to specify the name of the Galera cache file.
"},{"location":"wsrep-provider-index.html#gcachepage_size","title":"gcache.page_size
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: 128M Size of the page files in page storage. The limit on overall page storage is the size of the disk. Pages are prefixed by gcache.page.
See also
Galera Documentation: gcache.page_size
Percona Database Performance Blog: All You Need to Know About GCache
gcache.recover
","text":"Option Description Command line: No Configuration file: Yes Scope: Global Dynamic: No Default value: No Attempts to recover a node\u2019s gcache file to a usable state on startup. If the node can successfully recover the gcache file, the node can provide IST to the remaining nodes. This ability can reduce the time needed to bring up the cluster.
An example of enabling the variable in the configuration file:
wsrep_provider_options=\"gcache.recover=yes\"\n
"},{"location":"wsrep-provider-index.html#gcachesize","title":"gcache.size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 128M Size of the transaction cache for Galera replication. This defines the size of the galera.cache
file, which is used as the source for IST. The bigger the value of this variable, the better the chances that the re-joining node will get IST instead of SST.
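An example of increasing the cache size in the configuration file; the size shown is illustrative:
# example size; default is 128M\nwsrep_provider_options=\"gcache.size=2G\"\n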
gcomm.thread_prio
","text":"Using this option, you can raise the priority of the gcomm thread to a higher level than it normally uses.
The format for this variable is: <policy>:<priority>. The priority value is an integer.
other
Default time-sharing scheduling in Linux. The threads can run\nuntil blocked by an I/O request or preempted by higher priorities or\nsuperior scheduling designations.\n
fifo
First-in First-out (FIFO) scheduling. These threads always immediately\npreempt any currently running other, batch or idle threads. They can run\nuntil they are either blocked by an I/O request or preempted by a FIFO thread\nof a higher priority.\n
rr
Round-robin scheduling. These threads always preempt any currently running\nother, batch or idle threads. The scheduler allows these threads to run for a\nfixed period of a time. If the thread is still running when this time period is\nexceeded, they are stopped and moved to the end of the list, allowing another\nround-robin thread of the same priority to run in their place. They can\notherwise continue to run until they are blocked by an I/O request or are\npreempted by threads of a higher priority.\n
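For example, the gcomm thread could be given round-robin scheduling at priority 2 in the configuration file; the policy and priority are illustrative:
# example policy and priority\nwsrep_provider_options=\"gcomm.thread_prio=rr:2\"\n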
See also
For information, see the Galera Cluster documentation
"},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_threshold","title":"gcs.fc_auto_evict_threshold
","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0.75 Implemented in Percona XtraDB Cluster 8.0.33-25.
Defines the threshold that must be reached or crossed before a node is evicted from the cluster. This variable is a ratio of the gcs.fc_auto_evict_window
variable. The default value is 0.75, but the value can be set to any value between 0.0 and 1.0.
gcs.fc_auto_evict_window
","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0 Implemented in Percona XtraDB Cluster 8.0.33-25.
The variable defines the width of the time window within which flow control events are observed. The time span of the window is [now - gcs.fc_auto_evict_window, now]. The window constantly moves ahead as time passes. Within this window, if the flow control summary time >= (gcs.fc_auto_evict_window * gcs.fc_auto_evict_threshold), the node self-leaves the cluster.
The default value is 0, which means that the feature is disabled.
The maximum value is DBL_MAX
.
gcs.fc_debug
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable specifies after how many writesets the debug statistics about SST flow control will be posted.
"},{"location":"wsrep-provider-index.html#gcsfc_factor","title":"gcs.fc_factor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1 This variable is used for replication flow control. Replication is resumed when the replica queue drops below gcs.fc_factor
* gcs.fc_limit
.
gcs.fc_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 100 This variable is used for replication flow control. Replication is paused when the replica queue exceeds this limit. In the default operation mode, flow control limit is dynamically recalculated based on the amount of nodes in the cluster, but this recalculation can be turned off with use of the gcs.fc_master_slave
variable, so that a manually set gcs.fc_limit takes effect (for example, in configurations where writes are done to a single node in Percona XtraDB Cluster).
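A configuration-file sketch for a setup where writes go to a single node; the numbers are example values only:
# example values for a write-to-one-node setup\nwsrep_provider_options=\"gcs.fc_limit=160;gcs.fc_factor=0.8;gcs.fc_master_slave=YES\"\n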
gcs.fc_master_slave
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: NO Default Value: NO This variable is used to specify if there is only one source node in the cluster. It affects whether flow control limit is recalculated dynamically (when NO
) or not (when YES
).
gcs.max_packet_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 64500 This variable is used to specify the writeset size after which they will be fragmented.
"},{"location":"wsrep-provider-index.html#gcsmax_throttle","title":"gcs.max_throttle
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25 This variable specifies how much the replication can be throttled during the state transfer in order to avoid running out of memory. Value can be set to 0.0
if stopping replication is acceptable in order to finish state transfer.
gcs.recv_q_hard_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 9223372036854775807 This variable specifies the maximum allowed size of the receive queue. This should normally be (RAM + swap) / 2
. If this limit is exceeded, Galera will abort the server.
gcs.recv_q_soft_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25 This variable specifies the fraction of the gcs.recv_q_hard_limit
after which replication rate will be throttled.
gcs.sync_donor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No This variable controls if the rest of the cluster should be in sync with the donor node. When this variable is set to YES
, the whole cluster will be blocked if the donor node is blocked with SST.
gmcast.listen_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: tcp://0.0.0.0:4567 This variable defines the address on which the node listens to connections from other nodes in the cluster.
"},{"location":"wsrep-provider-index.html#gmcastmcast_addr","title":"gmcast.mcast_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: None This variable should be set up if UDP multicast should be used for replication.
"},{"location":"wsrep-provider-index.html#gmcastmcast_ttl","title":"gmcast.mcast_ttl
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1 This variable can be used to define TTL for multicast packets.
"},{"location":"wsrep-provider-index.html#gmcastpeer_timeout","title":"gmcast.peer_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S This variable specifies the connection timeout to initiate message relaying.
"},{"location":"wsrep-provider-index.html#gmcastsegment","title":"gmcast.segment
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable specifies the group segment this member should be a part of. Same segment members are treated as equally physically close.
"},{"location":"wsrep-provider-index.html#gmcasttime_wait","title":"gmcast.time_wait
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S This variable specifies the time to wait until allowing peer declared outside of stable view to reconnect.
"},{"location":"wsrep-provider-index.html#gmcastversion","title":"gmcast.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable shows which gmcast protocol version is being used.
"},{"location":"wsrep-provider-index.html#istrecv_addr","title":"ist.recv_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address
This variable specifies the address on which the node listens for Incremental State Transfer (IST).
"},{"location":"wsrep-provider-index.html#pcannounce_timeout","title":"pc.announce_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S Cluster joining announcements are sent every \u00bd second for this period of time or less if other nodes are discovered.
"},{"location":"wsrep-provider-index.html#pcchecksum","title":"pc.checksum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true This variable controls whether replicated messages should be checksummed or not.
"},{"location":"wsrep-provider-index.html#pcignore_quorum","title":"pc.ignore_quorum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false When this variable is set to TRUE
, the node will completely ignore quorum calculations. This should be used with extreme caution even in source-replica setups, because replicas won\u2019t automatically reconnect to source in this case.
pc.ignore_sb
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: false When this variable is set to TRUE
, the node will process updates even in the case of a split brain. This should be used with extreme caution in a multi-source setup, but it can simplify things in a source-replica cluster (especially if only 2 nodes are used).
pc.linger
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT20S This variable specifies the period for which the PC protocol waits for EVS termination.
"},{"location":"wsrep-provider-index.html#pcnpvo","title":"pc.npvo
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false When this variable is set to TRUE
, more recent primary components override older ones in case of conflicting primaries.
pc.recovery
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When this variable is set to true
, the node stores the Primary Component state to disk. The Primary Component can then recover automatically when all nodes that were part of the last saved state re-establish communication with each other. This feature allows automatic recovery from full cluster crashes, such as in the case of a data center power outage. A subsequent graceful full cluster restart will require explicit bootstrapping for a new Primary Component.
pc.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This status variable is used to check which PC protocol version is used.
"},{"location":"wsrep-provider-index.html#pcwait_prim","title":"pc.wait_prim
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When set to TRUE
, the node waits for a primary component for the period of time specified in pc.wait_prim_timeout
. This is useful to bring up a non-primary component and make it primary with pc.bootstrap
.
pc.wait_prim_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT30S This variable is used to specify the period of time to wait for a primary component.
"},{"location":"wsrep-provider-index.html#pcwait_restored_prim_timeout","title":"pc.wait_restored_prim_timeout
","text":"Introduced in Percona XtraDB Cluster 8.0.33-25.
Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0S This variable specifies the wait period for a primary component when the cluster restores the primary component from the gvwstate.dat
file after an outage.
The default value is PT0S (zero seconds), which means the node waits for an infinite time (the existing behavior).
You can define a wait time with PTNS
, replacing the N
value with the number of seconds. For example, to wait for 90 seconds, set the value to PT90S
.
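An example of setting the 90-second wait mentioned above in the configuration file:
wsrep_provider_options=\"pc.wait_restored_prim_timeout=PT90S\"\n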
pc.weight
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1 This variable specifies the node weight that\u2019s going to be used for Weighted Quorum calculations.
"},{"location":"wsrep-provider-index.html#protonetbackend","title":"protonet.backend
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: asio This variable is used to define which transport backend should be used. Currently only ASIO
is supported.
protonet.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This status variable is used to check which transport backend protocol version is used.
"},{"location":"wsrep-provider-index.html#replcausal_read_timeout","title":"repl.causal_read_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S This variable specifies the causal read timeout.
"},{"location":"wsrep-provider-index.html#replcommit_order","title":"repl.commit_order
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 3 This variable is used to specify out-of-order committing (which is used to improve parallel applying performance). The following values are available:
0
- BYPASS: all commit order monitoring is turned off (useful for measuring performance penalty)
1
- OOOC: allow out-of-order committing for all transactions
2
- LOCAL_OOOC: allow out-of-order committing only for local transactions
3
- NO_OOOC: no out-of-order committing is allowed (strict total order committing)
repl.key_format
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: FLAT8 This variable is used to specify the replication key format. The following values are available:
FLAT8
- short key with higher probability of key match false positives
FLAT16
- longer key with lower probability of false positives
FLAT8A
- same as FLAT8
but with annotations for debug purposes
FLAT16A
- same as FLAT16
but with annotations for debug purposes
repl.max_ws_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2147483647 This variable is used to specify the maximum size of a write-set in bytes. This is limited to 2 gygabytes.
"},{"location":"wsrep-provider-index.html#replproto_max","title":"repl.proto_max
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 7 This variable is used to specify the highest communication protocol version to accept in the cluster. Used only for debugging.
"},{"location":"wsrep-provider-index.html#socketchecksum","title":"socket.checksum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2 This variable is used to choose the checksum algorithm for network packets. The CRC32-C
option is optimized and may be hardware accelerated on Intel CPUs. The following values are available:
0
- disable checksum
1
- plain CRC32
(used in Galera 2.x)
2
- hardware accelerated CRC32-C
The following is an example of the variable use:
wsrep_provider_options=\"socket.checksum=2\"\n
"},{"location":"wsrep-provider-index.html#socketssl","title":"socket.ssl
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No This variable is used to specify if SSL encryption should be used.
"},{"location":"wsrep-provider-index.html#socketssl_ca","title":"socket.ssl_ca
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the Certificate Authority (CA) certificate file.
"},{"location":"wsrep-provider-index.html#socketssl_cert","title":"socket.ssl_cert
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the server\u2019s certificate file (in PEM format).
"},{"location":"wsrep-provider-index.html#socketssl_key","title":"socket.ssl_key
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the server\u2019s private key file (in PEM format).
"},{"location":"wsrep-provider-index.html#socketssl_compression","title":"socket.ssl_compression
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: Yes This variable is used to specify if the SSL compression is to be used.
"},{"location":"wsrep-provider-index.html#socketssl_cipher","title":"socket.ssl_cipher
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: AES128-SHA This variable is used to specify what cypher will be used for encryption.
"},{"location":"wsrep-provider-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-status-index.html","title":"Index of wsrep status variables","text":""},{"location":"wsrep-status-index.html#wsrep_apply_oooe","title":"wsrep_apply_oooe
","text":"This variable shows parallelization efficiency, how often writests have been applied out of order.
See also
Galera status variable: wsrep_apply_oooe
wsrep_apply_oool
","text":"This variable shows how often a writeset with a higher sequence number was applied before one with a lower sequence number.
See also
Galera status variable: wsrep_apply_oool
wsrep_apply_window
","text":"Average distance between highest and lowest concurrently applied sequence numbers.
See also
Galera status variable: wsrep_apply_window
wsrep_causal_reads
","text":"Shows the number of writesets processed while the variable wsrep_causal_reads
was set to ON
.
See also
MySQL wsrep options: wsrep_causal_reads
wsrep_cert_bucket_count
","text":"This variable, shows the number of cells in the certification index hash-table.
"},{"location":"wsrep-status-index.html#wsrep_cert_deps_distance","title":"wsrep_cert_deps_distance
","text":"Average distance between highest and lowest sequence number that can be possibly applied in parallel.
See also
Galera status variable: wsrep_cert_deps_distance
wsrep_cert_index_size
","text":"Number of entries in the certification index.
See also
Galera status variable: wsrep_cert_index_size
wsrep_cert_interval
","text":"Average number of write-sets received while a transaction replicates.
See also
Galera status variable: wsrep_cert_interval
wsrep_cluster_conf_id
","text":"Number of cluster membership changes that have taken place.
See also
Galera status variable: wsrep_cluster_conf_id
wsrep_cluster_size
","text":"Current number of nodes in the cluster.
See also
Galera status variable: wsrep_cluster_size
wsrep_cluster_state_uuid
","text":"This variable contains UUID state of the cluster. When this value is the same as the one in wsrep_local_state_uuid
, node is synced with the cluster.
See also
Galera status variable: wsrep_cluster_state_uuid
wsrep_cluster_status
","text":"Status of the cluster component. Possible values are:
Primary
Non-Primary
Disconnected
See also
Galera status variable: wsrep_cluster_status
wsrep_commit_oooe
","text":"This variable shows how often a transaction was committed out of order.
See also
Galera status variable: wsrep_commit_oooe
wsrep_commit_oool
","text":"This variable currently has no meaning.
See also
Galera status variable: wsrep_commit_oool
wsrep_commit_window
","text":"Average distance between highest and lowest concurrently committed sequence number.
See also
Galera status variable: wsrep_commit_window
wsrep_connected
","text":"This variable shows if the node is connected to the cluster. If the value is OFF
, the node has not yet connected to any of the cluster components. This may be due to misconfiguration.
See also
Galera status variable: wsrep_connected
wsrep_evs_delayed
","text":"Comma separated list of nodes that are considered delayed. The node format is <uuid>:<address>:<count>
, where <count>
is the number of entries on delayed list for that node.
See also
Galera status variable: wsrep_evs_delayed
wsrep_evs_evict_list
","text":"List of UUIDs of the evicted nodes.
See also
Galera status variable: wsrep_evs_evict_list
wsrep_evs_repl_latency
","text":"This status variable provides information regarding group communication replication latency. This latency is measured in seconds from when a message is sent out to when a message is received.
The format of the output is <min>/<avg>/<max>/<std_dev>/<sample_size>
.
See also
Galera status variable: wsrep_evs_repl_latency
wsrep_evs_state
","text":"Internal EVS protocol state.
See also
Galera status variable: wsrep_evs_state
wsrep_flow_control_interval
","text":"This variable shows the lower and upper limits for Galera flow control. The upper limit is the maximum allowed number of requests in the queue. If the queue reaches the upper limit, new requests are denied. As existing requests get processed, the queue decreases, and once it reaches the lower limit, new requests will be allowed again.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_high","title":"wsrep_flow_control_interval_high
","text":"Shows the upper limit for flow control to trigger.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_low","title":"wsrep_flow_control_interval_low
","text":"Shows the lower limit for flow control to stop.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_paused","title":"wsrep_flow_control_paused
","text":"Time since the last status query that was paused due to flow control.
See also
Galera status variable: wsrep_flow_control_paused
wsrep_flow_control_paused_ns
","text":"Total time spent in a paused state measured in nanoseconds.
See also
Galera status variable: wsrep_flow_control_paused_ns
wsrep_flow_control_recv
","text":"The number of FC_PAUSE
events received since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.
See also
Galera status variable: wsrep_flow_control_recv
wsrep_flow_control_requested
","text":"This variable returns whether or not a node requested a replication pause.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_sent","title":"wsrep_flow_control_sent
","text":"The number of FC_PAUSE
events sent since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.
See also
Galera status variable: wsrep_flow_control_sent
wsrep_flow_control_status
","text":"This variable shows whether a node has flow control enabled for normal traffic. It does not indicate the status of flow control during SST.
"},{"location":"wsrep-status-index.html#wsrep_gcache_pool_size","title":"wsrep_gcache_pool_size
","text":"This variable shows the size of the page pool and dynamic memory allocated for GCache (in bytes).
"},{"location":"wsrep-status-index.html#wsrep_gcomm_uuid","title":"wsrep_gcomm_uuid
","text":"This status variable exposes UUIDs in gvwstate.dat
, which are Galera view IDs (thus unrelated to cluster state UUIDs). This UUID is unique for each node. You need to know this value when using the manual eviction feature.
See also
Galera status variable: wsrep_gcomm_uuid
wsrep_incoming_addresses
","text":"Shows the comma-separated list of incoming node addresses in the cluster.
See also
Galera status variable: wsrep_incoming_addresses
wsrep_ist_receive_status
","text":"This variable displays the progress of IST for joiner node. If IST is not running, the value is blank. If IST is running, the value is the percentage of transfer completed.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_end","title":"wsrep_ist_receive_seqno_end
","text":"The sequence number of the last transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_current","title":"wsrep_ist_receive_seqno_current
","text":"The sequence number of the current transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_start","title":"wsrep_ist_receive_seqno_start
","text":"The sequence number of the first transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_last_applied","title":"wsrep_last_applied
","text":"Sequence number of the last applied transaction.
"},{"location":"wsrep-status-index.html#wsrep_last_committed","title":"wsrep_last_committed
","text":"Sequence number of the last committed transaction.
"},{"location":"wsrep-status-index.html#wsrep_local_bf_aborts","title":"wsrep_local_bf_aborts
","text":"Number of local transactions that were aborted by replica transactions while being executed.
See also
Galera status variable: wsrep_local_bf_aborts
wsrep_local_cached_downto
","text":"The lowest sequence number in GCache. This information can be helpful with determining IST and SST. If the value is 0
, then it means there are no writesets in GCache (usual for a single node).
See also
Galera status variable: wsrep_local_cached_downto
wsrep_local_cert_failures
","text":"Number of writesets that failed the certification test.
See also
Galera status variable: wsrep_local_cert_failures
wsrep_local_commits
","text":"Number of writesets commited on the node.
See also
Galera status variable: wsrep_local_commits
wsrep_local_index
","text":"Node\u2019s index in the cluster.
See also
Galera status variable: wsrep_local_index
wsrep_local_recv_queue
","text":"Current length of the receive queue (that is, the number of writesets waiting to be applied).
See also
Galera status variable: wsrep_local_recv_queue
wsrep_local_recv_queue_avg
","text":"Average length of the receive queue since the last status query. When this number is bigger than 0
, the node cannot apply writesets as fast as they are received. This could be a sign that the node is overloaded, and it may cause replication throttling.
See also
Galera status variable: wsrep_local_recv_queue_avg
wsrep_local_replays
","text":"Number of transaction replays due to asymmetric lock granularity.
See also
Galera status variable: wsrep_local_replays
wsrep_local_send_queue
","text":"Current length of the send queue (that is, the number of writesets waiting to be sent).
See also
Galera status variable: wsrep_local_send_queue
wsrep_local_send_queue_avg
","text":"Average length of the send queue since the last status query. When cluster experiences network throughput issues or replication throttling, this value will be significantly bigger than 0
.
See also
Galera status variable: wsrep_local_send_queue_avg
wsrep_local_state
","text":"Internal Galera cluster FSM state number.
See also
Galera status variable: wsrep_local_state
wsrep_local_state_comment
","text":"Internal number and the corresponding human-readable comment of the node\u2019s state. Possible values are:
Num Comment Description 1 Joining Node is joining the cluster 2 Donor/Desynced Node is the donor to the node joining the cluster 3 Joined Node has joined the cluster 4 Synced Node is synced with the clusterSee also
Galera status variable: wsrep_local_state_comment
wsrep_local_state_uuid
","text":"The UUID of the state stored on the node.
See also
Galera status variable: wsrep_local_state_uuid
wsrep_monitor_status
","text":"The status of the local monitor (local and replicating actions), apply monitor (apply actions of write-set), and commit monitor (commit actions of write sets). In the value of this variable, each monitor (L: Local, A: Apply, C: Commit) is represented as a last_entered, and last_left pair:
wsrep_monitor_status (L/A/C) [ ( 7, 5), (2, 2), ( 2, 2) ]\n
last_entered
Shows which transaction or write-set has recently entered the queue.
last_left
Shows the last transaction or write-set that has been executed and left the queue.
According to the Galera protocol, transactions can be applied in parallel but must be committed in a given order. This rule implies that there can be multiple transactions in the apply state at a given point of time but transactions are committed sequentially.
See also
Galera Documentation: Database replication
wsrep_protocol_version
","text":"Version of the wsrep protocol used.
See also
Galera status variable: wsrep_protocol_version
wsrep_provider_name
","text":"Name of the wsrep provider (usually Galera
).
See also
Galera status variable: wsrep_provider_name
wsrep_provider_vendor
","text":"Name of the wsrep provider vendor (usually Codership Oy
)
See also
Galera status variable: wsrep_provider_vendor
wsrep_provider_version
","text":"Current version of the wsrep provider.
See also
Galera status variable: wsrep_provider_version
wsrep_ready
","text":"This variable shows if node is ready to accept queries. If status is OFF
, almost all queries will fail with ERROR 1047 (08S01) Unknown Command
error (unless the wsrep_on
variable is set to 0
).
See also
Galera status variable: wsrep_ready
wsrep_received
","text":"Total number of writesets received from other nodes.
See also
Galera status variable: wsrep_received
wsrep_received_bytes
","text":"Total size (in bytes) of writesets received from other nodes.
"},{"location":"wsrep-status-index.html#wsrep_repl_data_bytes","title":"wsrep_repl_data_bytes
","text":"Total size (in bytes) of data replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_keys","title":"wsrep_repl_keys
","text":"Total number of keys replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_keys_bytes","title":"wsrep_repl_keys_bytes
","text":"Total size (in bytes) of keys replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_other_bytes","title":"wsrep_repl_other_bytes
","text":"Total size of other bits replicated.
"},{"location":"wsrep-status-index.html#wsrep_replicated","title":"wsrep_replicated
","text":"Total number of writesets sent to other nodes.
See also
Galera status variable: wsrep_replicated
wsrep_replicated_bytes
","text":"Total size of replicated writesets. To compute the actual size of bytes sent over network to cluster peers, multiply the value of this variable by the number of cluster peers in the given network segment
.
See also
Galera status variable: wsrep_replicated_bytes
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-system-index.html","title":"Index of wsrep system variables","text":"Percona XtraDB Cluster introduces a number of MySQL system variables related to write-set replication.
"},{"location":"wsrep-system-index.html#pxc_encrypt_cluster_traffic","title":"pxc_encrypt_cluster_traffic
","text":"Option Description Command Line: --pxc-encrypt-cluster-traffic
Config File: Yes Scope: Global Dynamic: No Default Value: ON
Enables automatic configuration of SSL encryption. When disabled, you need to configure SSL manually to encrypt Percona XtraDB Cluster traffic.
Possible values:
ON
, 1
, true
: Enabled (default)
OFF
, 0
, false
: Disabled
For more information, see SSL Automatic Configuration.
"},{"location":"wsrep-system-index.html#pxc_maint_mode","title":"pxc_maint_mode
","text":"Option Description Command Line: --pxc-maint-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: DISABLED
Specifies the maintenance mode for taking a node down without adjusting settings in ProxySQL.
The following values are available:
DISABLED
: This is the default state that tells ProxySQL to route traffic to the node as usual.
SHUTDOWN
: This state is set automatically when you initiate node shutdown.
MAINTENANCE
: You can manually change to this state if you need to perform maintenance on a node without shutting it down.
For more information, see Assisted Maintenance Mode.
"},{"location":"wsrep-system-index.html#pxc_maint_transition_period","title":"pxc_maint_transition_period
","text":"Option Description Command Line: --pxc-maint-transition-period
Config File: Yes Scope: Global Dynamic: Yes Default Value: 10
(ten seconds) Defines the transition period when you change pxc_maint_mode
to SHUTDOWN
or MAINTENANCE
. By default, the period is set to 10 seconds, which should be enough for most transactions to finish. You can increase the value to accommodate for longer-running transactions.
For more information, see Assisted Maintenance Mode.
"},{"location":"wsrep-system-index.html#pxc_strict_mode","title":"pxc_strict_mode
","text":"Option Description Command Line: --pxc-strict-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: ENFORCING
or DISABLED
Controls PXC Strict Mode, which runs validations to avoid the use of experimental and unsupported features in Percona XtraDB Cluster.
Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:
DISABLED
: Do not perform strict mode validations and run as normal.
PERMISSIVE
: If a validation fails, log a warning and continue running as normal.
ENFORCING
: If a validation fails during startup, halt the server and throw an error. If a validation fails during runtime, deny the operation and throw an error.
MASTER
: The same as ENFORCING
except that the validation of explicit table locking is not performed. This mode can be used with clusters in which write operations are isolated to a single node.
By default, pxc_strict_mode
is set to ENFORCING
, except if the node is acting as a standalone server or the node is bootstrapping, then pxc_strict_mode
defaults to DISABLED
.
Note
When changing the value of pxc_strict_mode
from DISABLED
or PERMISSIVE
to ENFORCING
or MASTER
, ensure that the following configuration is used:
wsrep_replicate_myisam=OFF
binlog_format=ROW
log_output=FILE
or log_output=NONE
or log_output=FILE,NONE
The SERIALIZABLE
method of isolation is not allowed in ENFORCING
mode.
For more information, see PXC Strict Mode.
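For example, the mode could be relaxed at runtime while testing; this is a sketch, and PERMISSIVE logs warnings instead of denying operations:
mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE; # illustrative; revert to ENFORCING afterwards\n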
"},{"location":"wsrep-system-index.html#wsrep_applier_fk_checks","title":"wsrep_applier_FK_checks
","text":"Option Description Command Line: --wsrep-applier-FK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_FK_checks
variable is deprecated in favor of this variable.
Defines whether foreign key checking is done for applier threads. This is enabled by default.
See also
MySQL wsrep option: wsrep_applier_FK_checks
wsrep_applier_threads
","text":"Option Description Command Line: --wsrep-applier-threads
Config File: Yes Scope: Global Dynamic: Yes Default Value: 1
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_threads
variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads
variable.
Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.
Note
When you decrease the number of threads, it won\u2019t kill the threads immediately, but stop them after they are done applying current transaction (the effect with an increase is immediate though).
If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1
to see if that resolves the issue. The default value can be increased for better throughput.
You may want to increase it as suggested in Codership documentation for flow control
: when the node is in JOINED
state, increasing the number of replica threads can speed up the catchup to SYNCED
.
You can also estimate the optimal value for this from wsrep_cert_deps_distance
as suggested in the Galera Cluster documentation.
For more configuration tips, see Setting Parallel Slave Threads.
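An illustrative check-and-adjust sequence at runtime; the thread count is an example value:
mysql> SHOW STATUS LIKE 'wsrep_cert_deps_distance';\nmysql> SET GLOBAL wsrep_applier_threads=4; # example value\n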
See also
MySQL wsrep option: wsrep_applier_threads
wsrep_applier_UK_checks
","text":"Option Description Command Line: --wsrep-applier-UK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_UK_checks
variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks
variable.
Defines whether unique key checking is done for applier threads. This is disabled by default.
See also
MySQL wsrep option: wsrep_applier_UK_checks
wsrep_auto_increment_control
","text":"Option Description Command Line: --wsrep-auto-increment-control
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
Enables automatic adjustment of auto-increment system variables depending on the size of the cluster:
auto_increment_increment
controls the interval between successive AUTO_INCREMENT
column values
auto_increment_offset
determines the starting point for the AUTO_INCREMENT
column value
This helps prevent auto-increment replication conflicts across the cluster by giving each node its own range of auto-increment values. It is enabled by default.
Automatic adjustment may not be desirable depending on application\u2019s use and assumptions of auto-increments. It can be disabled in source-replica clusters.
See also
MySQL wsrep option: wsrep_auto_increment_control
wsrep_causal_reads
","text":"Option Description Command Line: --wsrep-causal-reads
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: OFF
In some cases, the source may apply events faster than a replica, which can cause source and replica to become out of sync for a brief moment. When this variable is set to ON
, the replica will wait until that event is applied before doing any other queries. Enabling this variable will result in larger latencies.
Note
This variable was deprecated because enabling it is the equivalent of setting wsrep_sync_wait
to 1
.
See also
MySQL wsrep option: wsrep_causal_reads
wsrep_certification_rules
","text":"Option Description Command Line: --wsrep-certification-rules
Config File: Yes Scope: Global Dynamic: Yes Values: STRICT, OPTIMIZED Default Value: STRICT This variable controls how certification is done in the cluster, in particular this affects how foreign keys are handled.
STRICT Two INSERTs that happen at about the same time on two different nodes in a child table, that insert different (non conflicting rows), but both rows point to the same row in the parent table may result in the certification failure.
OPTIMIZED Two INSERTs that happen at about the same time on two different nodes in a child table, that insert different (non conflicting rows), but both rows point to the same row in the parent table will not result in the certification failure.
See also
Galera Cluster Documentation: MySQL wsrep options
"},{"location":"wsrep-system-index.html#wsrep_certify_nonpk","title":"wsrep_certify_nonPK
","text":"Option Description Command Line: --wsrep-certify-nonpk
Config File: Yes Scope: Global Dynamic: No Default Value: ON
Enables automatic generation of primary keys for rows that don\u2019t have them. Write set replication requires primary keys on all tables to allow for parallel applying of transactions. This variable is enabled by default. As a rule, make sure that all tables have primary keys.
See also
MySQL wsrep option: wsrep_certify_nonPK
"},{"location":"wsrep-system-index.html#wsrep_cluster_address","title":"wsrep_cluster_address
","text":"Option Description Command Line: --wsrep-cluster-address
Config File: Yes Scope: Global Dynamic: Yes Defines the back-end schema, IP addresses, ports, and options that the node uses when connecting to the cluster. This variable needs to specify at least one other node\u2019s address, which is alive and a member of the cluster. In practice, it is best (but not necessary) to provide a complete list of all possible cluster nodes. The value should be of the following format:
<schema>://<address>[?<option1>=<value1>[&<option2>=<value2>]],...\n
The only back-end schema currently supported is gcomm
. The IP address can contain a port number after a colon. Options are specified after ?
and separated by &
. You can specify multiple addresses separated by commas.
For example:
wsrep_cluster_address=\"gcomm://192.168.0.1:4567?gmcast.listen_addr=0.0.0.0:5678\"\n
If an empty gcomm://
is provided, the node will bootstrap itself (that is, form a new cluster). It is not recommended to have empty cluster address in production config after the cluster has been bootstrapped initially. If you want to bootstrap a new cluster with a node, you should pass the --wsrep-new-cluster
option when starting.
See also
MySQL wsrep option: wsrep_cluster_address
"},{"location":"wsrep-system-index.html#wsrep_cluster_name","title":"wsrep_cluster_name
","text":"Option Description Command Line: --wsrep-cluster-name
Config File: Yes Scope: Global Dynamic: No Default Value: my_wsrep_cluster
Specifies the name of the cluster and must be identical on all nodes. A node checks the value when attempting to connect to the cluster. If the names match, the node connects.
Edit the value in the my.cnf
in the [galera] section.
[galera]\n\n wsrep_cluster_name=simple-cluster\n
Execute SHOW VARIABLES
with the LIKE operator to view the variable:
mysql> SHOW VARIABLES LIKE 'wsrep_cluster_name';\n
Expected output +--------------------+----------------+\n| Variable_name | Value |\n+--------------------+----------------+\n| wsrep_cluster_name | simple-cluster |\n+--------------------+----------------+\n
Note
The cluster name should not exceed 32 characters. A node cannot join the cluster if the cluster names do not match. You must re-bootstrap the cluster after a name change.
See also
MySQL wsrep option: wsrep_cluster_name
"},{"location":"wsrep-system-index.html#wsrep_data_home_dir","title":"wsrep_data_home_dir
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql
(or whatever path is specified by datadir
) Specifies the path to the directory where the wsrep provider stores its files (such as grastate.dat
).
See also
MySQL wsrep option: wsrep_data_home_dir
"},{"location":"wsrep-system-index.html#wsrep_dbug_option","title":"wsrep_dbug_option
","text":"Option Description Command Line: --wsrep-dbug-option
Config File: Yes Scope: Global Dynamic: Yes Defines DBUG
options to pass to the wsrep provider.
See also
MySQL wsrep option: wsrep_dbug_option
"},{"location":"wsrep-system-index.html#wsrep_debug","title":"wsrep_debug
","text":"Option Description Command Line: --wsrep-debug
Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Enables debug level logging for the database server and wsrep-lib
- an integration library for WSREP API with additional convenience for transaction processing. By default, --wsrep-debug
variable is disabled.
This variable can be used when trying to diagnose problems or when submitting a bug.
You can set wsrep_debug
in the following my.cnf
groups:
Under [mysqld]
it enables debug logging for mysqld
and the SST script.
Under [sst]
it enables debug logging for the SST script only.
This variable may be set to one of the following values:
NONE
No debug-level messages.
SERVER
wsrep-lib
general debug-level messages and detailed debug-level messages from the server_state part are printed out. Galera debug-level logs are printed out.
TRANSACTION
Same as SERVER + wsrep-lib transaction part
STREAMING
Same as TRANSACTION + wsrep-lib streaming part
CLIENT
Same as STREAMING + wsrep-lib client_service part
Note
Do not enable debugging in production environments, because it logs authentication info (that is, passwords).
See also
MySQL wsrep option: wsrep_debug
"},{"location":"wsrep-system-index.html#wsrep_desync","title":"wsrep_desync
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
Defines whether the node should participate in Flow Control. By default, this variable is disabled, meaning that if the receive queue becomes too big, the node engages in Flow Control: it works through the receive queue until it reaches a more manageable size. For more information, see wsrep_local_recv_queue
and wsrep_flow_control_interval
.
Enabling this variable will disable Flow Control for the node. It will continue to receive write-sets that it is not able to apply, the receive queue will keep growing, and the node will keep falling behind the cluster indefinitely.
Toggling this back to OFF
will require an IST or an SST, depending on how long it was desynchronized. This is similar to cluster desynchronization, which occurs during RSU TOI. Because of this, it\u2019s not a good idea to enable wsrep_desync
for a long period of time or for several nodes at once.
Note
You can also desync a node using the /\\*! WSREP_DESYNC \\*/
query comment.
See also
MySQL wsrep option: wsrep_desync
"},{"location":"wsrep-system-index.html#wsrep_dirty_reads","title":"wsrep_dirty_reads
","text":"Option Description Command Line: --wsrep-dirty-reads
Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: OFF
Defines whether the node accepts read queries when in a non-operational state, that is, when it loses connection to the Primary Component. By default, this variable is disabled and the node rejects all queries, because there is no way to tell if the data is correct.
If you enable this variable, the node will permit read queries (USE
, SELECT
, LOCK TABLE
, and UNLOCK TABLES
), but any command that modifies or updates the database on a non-operational node will still be rejected (including DDL and DML statements, such as INSERT
, DELETE
, and UPDATE
).
To avoid deadlock errors, set the wsrep_sync_wait
variable to 0
if you enable wsrep_dirty_reads
.
As of Percona XtraDB Cluster 8.0.26-16, you can update the variable with a set_var hint
.
mysql> SELECT @@wsrep_dirty_reads;\n
Expected output +-----------------------+\n| @@wsrep_dirty_reads |\n+=======================+\n| OFF |\n+-----------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_dirty_reads=ON) */ @@wsrep_dirty_reads;\n
Expected output +-----------------------+\n| @@wsrep_dirty_reads |\n+=======================+\n| ON |\n+-----------------------+\n
See also
MySQL wsrep option: wsrep_dirty_reads
"},{"location":"wsrep-system-index.html#wsrep_drupal_282555_workaround","title":"wsrep_drupal_282555_workaround
","text":"Option Description Command Line: --wsrep-drupal-282555-workaround
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
Enables a workaround for MySQL InnoDB bug that affects Drupal (Drupal bug #282555 and MySQL bug #41984). In some cases, duplicate key errors would occur when inserting the DEFAULT
value into an AUTO_INCREMENT
column.
See also
MySQL wsrep option: wsrep_drupal_282555_workaround
"},{"location":"wsrep-system-index.html#wsrep_forced_binlog_format","title":"wsrep_forced_binlog_format
","text":"Option Description Command Line: --wsrep-forced-binlog-format
Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Defines a binary log format that will always be effective, regardless of the client session binlog_format
variable value.
Possible values for this variable are:
ROW
: Force row-based logging format
STATEMENT
: Force statement-based logging format
MIXED
: Force mixed logging format
NONE
: Do not force the binary log format and use whatever is set by the binlog_format
variable (default)
See also
MySQL wsrep option: wsrep_forced_binlog_format
"},{"location":"wsrep-system-index.html#wsrep_ignore_apply_errors","title":"wsrep_ignore_apply_errors
","text":"Option Description Command Line: --wsrep-ignore-apply-errors
Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 Defines the rules of wsrep applier behavior on errors. You can change the settings by editing the my.cnf
file under [mysqld]
or at runtime.
Note
In Percona XtraDB Cluster version 8.0.19-10, the default value has changed from 7
to 0
. If you have been working with an earlier version of the PXC 8.0 series, you may see different behavior when upgrading to this version or later.
The variable has the following options:
Value Description WSREP_IGNORE_ERRORS_NONE All replication errors are treated as errors and will shut down the node (default behavior) WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL DROP DATABASE, DROP TABLE, DROP INDEX, ALTER TABLE are converted to a warning if they result in ER_DB_DROP_EXISTS, ER_BAD_TABLE_ERROR or ER_CANT_DROP_FIELD_OR_KEY errors WSREP_IGNORE_ERRORS_ON_RECONCILING_DML DELETE events are treated as warnings if they failed because the deleted row was not found (ER_KEY_NOT_FOUND) WSREP_IGNORE_ERRORS_ON_DDL All DDL errors will be treated as a warning WSREP_IGNORE_ERRORS_MAX Infers WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML and WSREP_IGNORE_ERRORS_ON_DDL
Setting the variable to a value between 0 and 7 enables a combination of these behaviors.
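For example, assuming the flag values combine as a bitmask in the order listed above (1 for reconciling DDL, 2 for reconciling DML, and 4 for all DDL errors), a node that should only downgrade reconciling DDL and DML errors to warnings could be configured as follows:
[mysqld]\n# 1 (reconciling DDL) + 2 (reconciling DML) = 3; 7 would enable all three flags\nwsrep_ignore_apply_errors = 3\n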
wsrep_min_log_verbosity
","text":"Option Description Command Line: --wsrep-min-log-verbosity
Config File: Yes Scope: Global Dynamic: Yes Default Value: 3 This variable defines the minimum logging verbosity of wsrep/Galera and acts in conjunction with the log_error_verbosity
variable. The wsrep_min_log_verbosity
has the same values as log_error_verbosity
.
The actual log verbosity of wsrep/Galera can be greater than the value of wsrep_min_log_verbosity
if log_error_verbosity
is greater than wsrep_min_log_verbosity
.
A few examples:
log_error_verbosity wsrep_min_log_verbosity MySQL Logs Verbosity wsrep Logs Verbosity 2 3 system error, warning system error, warning, info 1 3 system error system error, warning, info 1 2 system error system error, warning 3 1 system error, warning, info system error, warning, infoNote the case where log_error_verbosity=3
and wsrep_min_log_verbosity=1
. The actual log verbosity of wsrep/Galera is 3 (system error, warning, info) because log_error_verbosity
is greater.
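Because the variable is global and dynamic, you can raise the wsrep/Galera verbosity at runtime without touching log_error_verbosity, for example:
mysql> SET GLOBAL wsrep_min_log_verbosity = 3;\n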
See also
MySQL Documentation: log_error_verbosity
Galera Cluster Documentation: Database Server Logs
"},{"location":"wsrep-system-index.html#wsrep_load_data_splitting","title":"wsrep_load_data_splitting
","text":"Option Description Command Line: --wsrep-load-data-splitting
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
Defines whether the node should split large LOAD DATA
transactions. This variable is enabled by default, meaning that LOAD DATA
commands are split into transactions of 10 000 rows or less.
If you disable this variable, then huge data loads may prevent the node from completely rolling the operation back in the event of a conflict, and whatever gets committed stays committed.
Note
It doesn\u2019t work as expected with autocommit=0
when enabled.
See also
MySQL wsrep option: wsrep_load_data_splitting
"},{"location":"wsrep-system-index.html#wsrep_log_conflicts","title":"wsrep_log_conflicts
","text":"Option Description Command Line: --wsrep-log-conflicts
Config File: Yes Scope: Global Dynamic: No Default Value: OFF
Defines whether the node should log additional information about conflicts. By default, this variable is disabled and Percona XtraDB Cluster uses standard logging features in MySQL.
If you enable this variable, it will also log table and schema where the conflict occurred, as well as the actual values for keys that produced the conflict.
See also
MySQL wsrep option: wsrep_log_conflicts
"},{"location":"wsrep-system-index.html#wsrep_max_ws_rows","title":"wsrep_max_ws_rows
","text":"Option Description Command Line: --wsrep-max-ws-rows
Config File: Yes Scope: Global Dynamic: Yes Default Value: 0
(no limit) Defines the maximum number of rows each write-set can contain.
By default, there is no limit for the maximum number of rows in a write-set. The maximum allowed value is 1048576
.
See also
MySQL wsrep option: wsrep_max_ws_rows
"},{"location":"wsrep-system-index.html#wsrep_max_ws_size","title":"wsrep_max_ws_size
","text":"Option Description Command Line: --wsrep_max_ws_size
Config File: Yes Scope: Global Dynamic: Yes Default Value: 2147483647
(2 GB) Defines the maximum write-set size (in bytes). Anything bigger than the specified value will be rejected.
You can set it to any value between 1024
and the default 2147483647
.
See also
MySQL wsrep option: wsrep_max_ws_size
"},{"location":"wsrep-system-index.html#wsrep_mode","title":"wsrep_mode
","text":"Option Description Command Line: --wsrep-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: This variable has been implemented in Percona XtraDB Cluster 8.0.31.
Defines the node behavior according to a specified value. The value is empty or disabled by default.
The available values are:
Empty
- does not change the node behavior.
IGNORE_NATIVE_REPLICATION_FILTER_RULES
- changes the wsrep
behavior to ignore native replication filter rules.
See also
MySQL wsrep option: wsrep_mode
"},{"location":"wsrep-system-index.html#wsrep_node_address","title":"wsrep_node_address
","text":"Option Description Command Line: --wsrep-node-address
Config File: Yes Scope: Global Dynamic: No Default Value: IP of the first network interface (eth0
) and default port (4567
) Specifies the network address of the node. By default, this variable is set to the IP address of the first network interface (usually eth0
or enp2s0
) and the default port (4567
).
While the default value should be correct in most cases, there are situations when you need to specify it manually. For example:
Servers with multiple network interfaces
Servers that run multiple nodes
Network Address Translation (NAT)
Clusters with nodes in more than one region
Container deployments, such as Docker
Cloud deployments, such as Amazon EC2 (use the global DNS name instead of the local IP address)
The value should be specified in the following format:
<ip_address>[:port]\n
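For example, on a server with multiple interfaces you might pin the node to one address and the default port (the IP address below is hypothetical):
[mysqld]\nwsrep_node_address=192.168.70.61:4567\n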
Note
The value of this variable is also used as the default value for the wsrep_sst_receive_address
variable and the ist.recv_addr
option.
See also
MySQL wsrep option: wsrep_node_address
"},{"location":"wsrep-system-index.html#wsrep_node_incoming_address","title":"wsrep_node_incoming_address
","text":"Option Description Command Line: --wsrep-node-incoming-address
Config File: Yes Scope: Global Dynamic: No Default Value: AUTO
Specifies the network address from which the node expects client connections. By default, it uses the IP address from wsrep_node_address
and port number 3306.
This information is used for the wsrep_incoming_addresses
variable which shows all active cluster nodes.
See also
MySQL wsrep option: wsrep_node_incoming_address
"},{"location":"wsrep-system-index.html#wsrep_node_name","title":"wsrep_node_name
","text":"Option Description Command Line: --wsrep-node-name
Config File: Yes Scope: Global Dynamic: Yes Default Value: The node\u2019s host name Defines a unique name for the node. Defaults to the host name.
In many situations, you may use the value of this variable as a means to identify the given node in the cluster as the alternative to using the node address (the value of the wsrep_node_address
).
Note
The variable wsrep_sst_donor
is an example where you may only use the value of wsrep_node_name
and the node address is not permitted.
wsrep_notify_cmd
","text":"Option Description Command Line: --wsrep-notify-cmd
Config File: Yes Scope: Global Dynamic: No Specifies the notification command that the node should execute whenever cluster membership or local node status changes. This can be used for alerting or to reconfigure load balancers.
Note
The node will block and wait until the command or script completes and returns before it can proceed. If the script performs any potentially blocking or long-running operations, such as network communication, you should consider initiating such operations in the background and have the script return immediately.
See also
MySQL wsrep option: wsrep_notify_cmd
"},{"location":"wsrep-system-index.html#wsrep_on","title":"wsrep_on
","text":"Option Description Command Line: No Config File: No Scope: Session Dynamic: Yes Default Value: ON
Defines if current session transaction changes for a node are replicated to the cluster.
If set to OFF
for a session, no transaction changes are replicated in that session. The setting does not cause the node to leave the cluster, and the node communicates with other nodes.
See also
MySQL wsrep option: wsrep_on
"},{"location":"wsrep-system-index.html#wsrep_osu_method","title":"wsrep_OSU_method
","text":"Option Description Command Line: --wsrep-OSU-method
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: TOI
Defines the method for Online Schema Upgrade that the node uses to replicate DDL statements.
For information on the available methods, see Online Schema upgrade and for information on Non-blocking operations, see NBO.
See also
MySQL wsrep option: wsrep_OSU_method
"},{"location":"wsrep-system-index.html#wsrep_provider","title":"wsrep_provider
","text":"Option Description Command Line: --wsrep-provider
Config File: Yes Scope: Global Dynamic: No Specifies the path to the Galera library. This is usually /usr/lib64/libgalera_smm.so
on CentOS/RHEL and /usr/lib/libgalera_smm.so
on Debian/Ubuntu.
If you do not specify a path or the value is not valid, the node will behave as standalone instance of MySQL.
See also
MySQL wsrep option: wsrep_provider
"},{"location":"wsrep-system-index.html#wsrep_provider_options","title":"wsrep_provider_options
","text":"Option Description Command Line: --wsrep-provider-options
Config File: Yes Scope: Global Dynamic: No Specifies optional settings for the replication provider documented in Index of :variable:`wsrep_provider` options. These options affect how various situations are handled during replication.
See also
MySQL wsrep option: wsrep_provider_options
"},{"location":"wsrep-system-index.html#wsrep_recover","title":"wsrep_recover
","text":"Option Description Command Line: --wsrep-recover
Config File: Yes Scope: Global Dynamic: No Default Value: OFF
Location: mysqld_safe` Recovers database state after crash by parsing GTID from the log. If the GTID is found, it will be assigned as the initial position for server.
"},{"location":"wsrep-system-index.html#wsrep_reject_queries","title":"wsrep_reject_queries
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Defines whether the node should reject queries from clients. Rejecting queries can be useful during upgrades, when you want to keep the node up and apply write-sets without accepting queries.
When a query is rejected, the following error is returned:
Error 1047: Unknown command\n
The following values are available:
NONE
: Accept all queries from clients (default)
ALL
: Reject all new queries from clients, but maintain existing client connections
ALL_KILL
: Reject all new queries from clients and kill existing client connections
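Because the variable is dynamic, you can switch it at runtime, for example before and after maintenance on the node:
mysql> SET GLOBAL wsrep_reject_queries = 'ALL';\n-- perform maintenance while the node keeps applying write-sets\nmysql> SET GLOBAL wsrep_reject_queries = 'NONE';\n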
Note
This variable doesn\u2019t affect Galera replication in any way, only the applications that connect to the database are affected. If you want to desync a node, use wsrep_desync
.
See also
MySQL wsrep option: wsrep_reject_queries
"},{"location":"wsrep-system-index.html#wsrep_replicate_myisam","title":"wsrep_replicate_myisam
","text":"Option Description Command Line: --wsrep-replicate-myisam
Config File: Yes Scope: Session, Global Dynamic: No Default Value: OFF
Defines whether DML statements for MyISAM tables should be replicated. It is disabled by default, because MyISAM replication is still experimental.
On the global level, wsrep_replicate_myisam
can be set only during startup. On session level, you can change it during runtime as well.
For older nodes in the cluster, wsrep_replicate_myisam
should work since the TOI decision (for MyISAM DDL) is done on origin node. Mixing of non-MyISAM and MyISAM tables in the same DDL statement is not recommended when wsrep_replicate_myisam
is disabled, since if any table in the list is MyISAM, the whole DDL statement is not put under TOI.
Note
You should keep in mind the following when using MyISAM replication:
DDL (CREATE/DROP/TRUNCATE) statements on MyISAM will be replicated irrespective of wsrep_replicate_myisam
value
DML (INSERT/UPDATE/DELETE) statements on MyISAM will be replicated only ifwsrep_replicate_myisam
is enabled
SST will get full transfer irrespective of wsrep_replicate_myisam
value (it will get MyISAM tables from donor)
Difference in configuration of pxc-cluster
node on enforce_storage_engine front may result in picking up different engine for the same table on different nodes
CREATE TABLE AS SELECT
(CTAS) statements use TOI replication. MyISAM tables are created and loaded even if wsrep_replicate_myisam
is set to ON.
wsrep_restart_replica
","text":"Option Description Command Line: --wsrep-restart-replica
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave
variable is deprecated in favor of this variable.
Defines whether replication replica should be restarted when the node joins back to the cluster. Enabling this can be useful because asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in non-primary state.
See also
MySQL wsrep option: wsrep_restart_slave
"},{"location":"wsrep-system-index.html#wsrep_restart_slave","title":"wsrep_restart_slave
","text":"Option Description Command Line: --wsrep-restart-slave
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave
variable is deprecated and may be removed in later versions. Use wsrep_restart_replica
.
Defines whether replication replica should be restarted when the node joins back to the cluster. Enabling this can be useful because asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in non-primary state.
"},{"location":"wsrep-system-index.html#wsrep_retry_autocommit","title":"wsrep_retry_autocommit
","text":"Option Description Command Line: --wsrep-retry-autocommit
Config File: Yes Scope: Global Dynamic: No Default Value: 1
Specifies the number of times autocommit transactions will be retried in the cluster if it encounters certification errors. In case there is a conflict, it should be safe for the cluster node to simply retry the statement without returning an error to the client, hoping that it will pass next time.
This can be useful to help an application using autocommit to avoid deadlock errors that can be triggered by replication conflicts.
If this variable is set to 0
, autocommit transactions won\u2019t be retried.
See also
MySQL wsrep option: wsrep_retry_autocommit
"},{"location":"wsrep-system-index.html#wsrep_rsu_commit_timeout","title":"wsrep_RSU_commit_timeout
","text":"Option Description Command Line: --wsrep-RSU-commit-timeout
Config File: Yes Scope: Global Dynamic: Yes Default Value: 5000
Range: From 5000
(5 milliseconds) to 31536000000000
(365 days) Specifies the timeout in microseconds to allow active connection to complete COMMIT action before starting RSU.
While running RSU it is expected that user has isolated the node and there is no active traffic executing on the node. RSU has a check to ensure this, and waits for any active connection in COMMIT
state before starting RSU.
By default, this check has a timeout of 5 milliseconds, but in some cases a COMMIT takes longer. This variable sets the timeout and accepts values in the range from 5 milliseconds to 365 days. The value must be specified in microseconds.
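For example, to give long-running commits 50 milliseconds before RSU starts, specify the value in microseconds (50000 is an illustrative value within the allowed range):
mysql> SET GLOBAL wsrep_RSU_commit_timeout = 50000;\n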
Note
The RSU operation does not automatically stop the node from receiving active traffic. A continuous flow of active traffic while RSU waits can result in RSU starvation. You are expected to block active traffic on the node while performing the RSU operation.
"},{"location":"wsrep-system-index.html#wsrep_slave_fk_checks","title":"wsrep_slave_FK_checks
","text":"Option Description Command Line: --wsrep-slave-FK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_FK_checks
variable.
Defines whether foreign key checking is done for applier threads. This is enabled by default.
"},{"location":"wsrep-system-index.html#wsrep_slave_threads","title":"wsrep_slave_threads
","text":"Option Description Command Line: --wsrep-slave-threads
Config File: Yes Scope: Global Dynamic: Yes Default Value: 1
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads
variable.
Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.
Note
When you decrease the number of threads, it won\u2019t kill the threads immediately, but stop them after they are done applying current transaction (the effect with an increase is immediate though).
If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1
to see if that resolves the issue. The default value can be increased for better throughput.
You may want to increase it as suggested in Codership documentation for flow control
: when the node is in JOINED
state, increasing the number of replica threads can speed up the catchup to SYNCED
.
You can also estimate the optimal value for this from wsrep_cert_deps_distance
as suggested in the Galera Cluster documentation.
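A minimal sketch of that tuning approach, with 4 as a purely illustrative thread count:
mysql> SHOW STATUS LIKE 'wsrep_cert_deps_distance';\nmysql> SET GLOBAL wsrep_slave_threads = 4;\n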
For more configuration tips, see this document.
"},{"location":"wsrep-system-index.html#wsrep_slave_uk_checks","title":"wsrep_slave_UK_checks
","text":"Option Description Command Line: --wsrep-slave-UK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks
variable.
Defines whether unique key checking is done for applier threads. This is disabled by default.
"},{"location":"wsrep-system-index.html#wsrep_sr_store","title":"wsrep_SR_store
","text":"Option Description Command Line: --wsrep-sr-store
Config File: Yes Scope: Global Dynamic: No Default Value: table
Defines storage for streaming replication fragments. The available values are table
, the default value, and none
, which disables the variable.
wsrep_sst_allowed_methods
","text":"Option Description Command Line: --wsrep_sst_allowed_methods
Config File: Yes Scope: Global Dynamic: No Default Value: xtrabackup-v2
Percona XtraDB Cluster 8.0.20-11.3 adds this variable.
This variable limits SST methods accepted by the server for wsrep_sst_method variable. The default value is xtrabackup-v2
.
wsrep_sst_donor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Specifies a list of nodes (using their wsrep_node_name
values) that the current node should prefer as donors for SST and IST.
Warning
Using IP addresses of nodes instead of node names (the value of wsrep_node_name
) as values of wsrep_sst_donor
results in an error.
ERROR] WSREP: State transfer request failed unrecoverably: 113 (No route\nto host). Most likely it is due to inability to communicate with the\ncluster primary component. Restart required.\n
If the value is empty, the first node in SYNCED state in the index becomes the donor and will not be able to serve requests during the state transfer.
To consider other nodes if the listed nodes are not available, add a comma at the end of the list, for example:
wsrep_sst_donor=node1,node2,\n
If you remove the trailing comma from the previous example, then the joining node will consider only node1
and node2
.
Note
By default, the joiner node does not wait for more than 100 seconds to receive the first packet from a donor. This is implemented via the sst-initial-timeout
option. If you set the list of preferred donors without the trailing comma or believe that all nodes in the cluster can often be unavailable for SST (this is common for small clusters), then you may want to increase the initial timeout (or disable it completely if you don\u2019t mind the joiner node waiting for the state transfer indefinitely).
See also
MySQL wsrep option: wsrep_sst_donor
"},{"location":"wsrep-system-index.html#wsrep_sst_method","title":"wsrep_sst_method
","text":"Option Description Command Line: --wsrep-sst-method
Config File: Yes Scope: Global Dynamic: Yes Default Value: xtrabackup-v2 Defines the method or script for State Snapshot Transfer (SST).
Available values are:
xtrabackup-v2
: Uses Percona XtraBackup to perform SST. This value is the default. Privileges and permissions for running Percona XtraBackup can be found in Percona XtraBackup documentation. For more information, see Percona XtraBackup SST Configuration.
skip
: Use this to skip SST. Removed in Percona XtraDB Cluster 8.0.33-25. This value can be used when initially starting the cluster and manually restoring the same data to all nodes. This value should not be used permanently because it could lead to data inconsistency across the nodes.
ist_only
: Introduced in Percona XtraDB Cluster 8.0.33-25. This value allows only Incremental State Transfer (IST). If a node cannot sync with the cluster with IST, abort that node\u2019s start. This action leaves the data directory unchanged. This value prevents starting a node, after a manual backup restoration, that does not have a grastate.dat
file. This missing file could initiate a full-state transfer (SST) which can be a more time and resource-intensive operation.
Note
xtrabackup-v2
provides support for clusters with GTIDs and async replicas.
See also
MySQL wsrep option: wsrep_sst_method
"},{"location":"wsrep-system-index.html#wsrep_sst_receive_address","title":"wsrep_sst_receive_address
","text":"Option Description Command Line: --wsrep-sst-receive-address
Config File: Yes Scope: Global Dynamic: Yes Default Value: AUTO
Specifies the network address where donor node should send state transfers. By default, this variable is set to AUTO
, meaning that the IP address from wsrep_node_address
is used.
See also
MySQL wsrep option: wsrep_sst_receive_address
"},{"location":"wsrep-system-index.html#wsrep_start_position","title":"wsrep_start_position
","text":"Option Description Command Line: --wsrep-start-position
Config File: Yes Scope: Global Dynamic: Yes Default Value: 00000000-0000-0000-0000-00000000000000:-1
Specifies the node\u2019s start position as UUID:seqno
. By setting all the nodes to have the same value for this variable, the cluster can be set up without the state transfer.
See also
MySQL wsrep option: wsrep_start_position
"},{"location":"wsrep-system-index.html#wsrep_sync_wait","title":"wsrep_sync_wait
","text":"Option Description Command Line: --wsrep-sync-wait
Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: 0
Controls cluster-wide causality checks on certain statements. Checks ensure that the statement is executed on a node that is fully synced with the cluster.
As of Percona XtraDB Cluster 8.0.26-16, you are able to update the variable with a set_var hint.
mysql> SELECT @@wsrep_sync_wait;\n
Expected output +---------------------+\n| @@wsrep_sync_wait |\n+=====================+\n| 3 |\n+---------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_sync_wait=7) */ @@wsrep_sync_wait;\n
Expected output +---------------------+\n| @@wsrep_sync_wait |\n+=====================+\n| 7 |\n+---------------------+\n
Note
Causality checks of any type can result in increased latency.
The type of statements to undergo checks is determined by bitmask:
0
: Do not run causality checks for any statements. This is the default.
1
: Perform checks for READ
statements (including SELECT
, SHOW
, and BEGIN
or START TRANSACTION
).
2
: Perform checks for UPDATE
and DELETE
statements.
3
: Perform checks for READ
, UPDATE
, and DELETE
statements.
4
: Perform checks for INSERT
and REPLACE
statements.
5
: Perform checks for READ
, INSERT
, and REPLACE
statements.
6
: Perform checks for UPDATE
, DELETE
, INSERT
, and REPLACE
statements.
7
: Perform checks for READ
, UPDATE
, DELETE
, INSERT
, and REPLACE
statements.
Note
Setting wsrep_sync_wait
to 1
is the equivalent of setting the deprecated wsrep_causal_reads
to ON
.
See also
MySQL wsrep option: wsrep_sync_wait
"},{"location":"wsrep-system-index.html#wsrep_trx_fragment_size","title":"wsrep_trx_fragment_size
","text":"Option Description Command Line: --wsrep-trx-fragment-size
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: 0 Defines the streaming replication fragment size. This variable is measured in the unit defined by wsrep_trx_fragment_unit
. The minimum value is 0 and the maximum value is 2147483647.
As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.
mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| statements |\n+------------------------------+\n| @@wsrep_trx_fragment_size |\n+------------------------------+\n| 3 |\n+------------------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_size=5) */ @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_size |\n+==============================+\n| 5 |\n+------------------------------+\n
You can also use set_var() in a data manipulation language (DML) statement. This ability is useful when streaming large statements within a transaction.
node1> BEGIN;\nQuery OK, 0 rows affected (0.00 sec)\n\nnode1> INSERT /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ INTO t1 SELECT * FROM t1; \nQuery OK, 65536 rows affected (15.15 sec)\nRecords: 65536 Duplicates: 0 Warnings: 0\n\nnode1> UPDATE /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ t1 SET i=2;\nQuery OK, 131072 rows affected (1 min 35.93 sec)\nRows matched: 131072 Changed: 131072 Warnings: 0\n\nnode2> SET SESSION TRANSACTION_ISOLATION = 'READ-UNCOMMITTED';\nQuery OK, 0 rows affected (0.00 sec)\n\nnode2> SELECT * FROM t1 LIMIT 5;\n+---+\n| i |\n+===+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\nnode1> DELETE /*+SET_VAR(wsrep_trx_fragment_size = 10000)*/ FROM t1;\nQuery OK, 131072 rows affected (15.09 sec)\n
"},{"location":"wsrep-system-index.html#wsrep_trx_fragment_unit","title":"wsrep_trx_fragment_unit
","text":"Option Description Command Line: --wsrep-trx-fragment-unit
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: \u201cbytes\u201d Defines the type of measure for the wsrep_trx_fragment_size
. The possible values are: bytes, rows, statements.
As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.
mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| statements |\n+------------------------------+\n| @@wsrep_trx_fragment_size |\n+------------------------------+\n| 3 |\n+------------------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_unit=rows) */ @@wsrep_trx_fragment_unit;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| rows |\n+------------------------------+\n
"},{"location":"wsrep-system-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"xtrabackup-sst.html","title":"Percona XtraBackup SST configuration","text":"Percona XtraBackup SST works in two stages:
First it identifies the type of data transfer based on the presence of xtrabackup_ist
file on the joiner node.
Then it starts data transfer. In case of SST, it empties the data directory except for some files (galera.cache
, sst_in_progress
, grastate.dat
) and then proceeds with SST.
In case of IST, it proceeds as before.
The following options specific to SST can be used in my.cnf
under [sst]
.
Note
Non-integer options which have no default value are disabled if not set.
Match: Yes implies that the option should match on the donor and joiner nodes.
SST script reads my.cnf
when it runs on either donor or joiner node, not during mysqld
startup.
SST options must be specified in the main my.cnf
file.
Used to specify the Percona XtraBackup streaming format. The only option is the xbstream
format. SST fails and generates an error when another format, such as tar
, is used.
For more information about the xbstream
format, see The xbstream Binary.
socat
, nc
Default: socat
Match: Yes Used to specify the data transfer format. The recommended value is the default transferfmt=socat
because it allows for socket options, such as transfer buffer sizes. For more information, see socat(1).
Note
Using transferfmt=nc
does not support the SSL-based encryption mode (value 4
for the encrypt
option).
Example: ssl-ca=/etc/ssl/certs/mycert.crt
Specifies the absolute path to the certificate authority (CA) file for socat
encryption based on OpenSSL.
Example: ssl-cert=/etc/ssl/certs/mycert.pem
Specifies the full path to the certificate file in the PEM format for socat
encryption based on OpenSSL.
Note
For more information about ssl-ca
and ssl-cert
, see https://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The ssl-ca
is essentially a self-signed certificate in that example, and ssl-cert
is the PEM file generated after concatenation of the key and the certificate generated earlier. The names of options were chosen to be compatible with socat
parameter names as well as with MySQL\u2019s SSL authentication. For testing you can also download certificates from launchpad.
Note
Irrespective of what is shown in the example, you can use the same .crt and .pem files on all nodes and it will work, since there is no server-client paradigm here, but rather a cluster with homogeneous nodes.
"},{"location":"xtrabackup-sst.html#ssl-key","title":"ssl-key","text":"Example: ssl-key=/etc/ssl/keys/key.pem
Used to specify the full path to the private key in PEM format for socat encryption based on OpenSSL.
"},{"location":"xtrabackup-sst.html#encrypt","title":"encrypt","text":"Parameter Description Values: 0, 4 Default: 4 Match: YesEnables SST encryption mode in Percona XtraBackup:
Set encrypt=0
to disable SST encryption.
Set encrypt=4
for SST encryption with SSL files generated by MySQL. This is the recommended value.
Considering that you have all three necessary files:
[sst]\nencrypt=4\nssl-ca=ca.pem\nssl-cert=server-cert.pem\nssl-key=server-key.pem\n
For more information, see Encrypting PXC Traffic.
"},{"location":"xtrabackup-sst.html#sockopt","title":"sockopt","text":"Used to specify key/value pairs of socket options, separated by commas, for example:
[sst]\nsockopt=\"retry=2,interval=3\"\n
The previous example causes socat to try to connect three times (initial attempt and two retries with a 3-second interval between attempts).
This option only applies when socat is used (transferfmt=socat
). For more information about socket options, see socat (1).
Note
You can also enable SSL based compression with sockopt
. This can be used instead of the Percona XtraBackup compress
option.
Used to specify socket options for the netcat
transfer format (transferfmt=nc
).
Values: 1, path/to/file
Used to specify where to write SST progress. If set to 1
, it writes to MySQL stderr
. Alternatively, you can specify the full path to a file. If this is a FIFO, it needs to exist and be open on reader end before itself, otherwise wsrep_sst_xtrabackup
will block indefinitely.
Note
Value of 0 is not valid.
"},{"location":"xtrabackup-sst.html#rebuild","title":"rebuild","text":"Parameter Description Values: 0, 1 Default: 0Used to enable rebuilding of index on joiner node. This is independent of compaction, though compaction enables it. Rebuild of indexes may be used as an optimization.
Note
#1192834 affects this option.
"},{"location":"xtrabackup-sst.html#time","title":"time","text":"Parameter Description Values: 0, 1 Default: 0Enabling this option instruments key stages of backup and restore in SST.
"},{"location":"xtrabackup-sst.html#rlimit","title":"rlimit","text":"Example: rlimit=128k
Used to set a a ratelimit in bytes. Add a suffix (k, m, g, t) to specify units. For example, 128k
is 128 kilobytes. For more information, see pv(1).
Note
Rate is limited on donor node. The rationale behind this is to not allow SST to saturate the donor\u2019s regular cluster operations or to limit the rate for other purposes.
"},{"location":"xtrabackup-sst.html#use_extra","title":"use_extra","text":"Parameter Description Values: 0, 1 Default: 0Used to force SST to use the thread pool\u2019s extra_port. Make sure that thread pool is enabled and the extra_port
option is set in my.cnf
before you enable this option.
Default: '.\\*\\\\.pem$\\\\|.\\*init\\\\.ok$\\\\|.\\*galera\\\\.cache$\\\\|.\\*sst_in_progress$\\\\|.\\*\\\\.sst$\\\\|.\\*gvwstate\\\\.dat$\\\\|.\\*grastate\\\\.dat$\\\\|.\\*\\\\.err$\\\\|.\\*\\\\.log$\\\\|.\\*RPM_UPGRADE_MARKER$\\\\|.\\*RPM_UPGRADE_HISTORY$'
Used to define the files that need to be retained in the datadir before running SST, so that the state of the other node can be restored cleanly.
For example:
[sst]\ncpat='.*galera\\.cache$\\|.*sst_in_progress$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*\\.xyz$'\n
Note
This option can only be used when wsrep_sst_method
is set to xtrabackup-v2
(which is the default value).
Stream-based compression and decompression are performed on the stream, in contrast to performing decompression after streaming to disk, which involves additional I/O. The savings are considerable, up to half the I/O on the JOINER node.
You can use any compression utility which works on stream: gzip
, pigz
, zstd
, and others. The pigz
or zstd
options are multi-threaded. At a minimum, the compressor must be set on the DONOR and the decompressor on JOINER.
You must install the related binaries, otherwise SST aborts.
compressor=\u2019pigz\u2019 decompressor=\u2019pigz -dc\u2019
compressor='pigz' decompressor='pigz -dc'
compressor='gzip' decompressor='gzip -dc'
under [xtrabackup]
. You can define both the compressor and the decompressor, although you will be wasting CPU cycles.
[xtrabackup]\ncompress\n\n-- compact has led to some crashes\n
"},{"location":"xtrabackup-sst.html#inno-backup-opts","title":"inno-backup-opts","text":""},{"location":"xtrabackup-sst.html#inno-apply-opts","title":"inno-apply-opts","text":""},{"location":"xtrabackup-sst.html#inno-move-opts","title":"inno-move-opts","text":"Parameter Description Default: Empty Type: Quoted String This group of options is used to pass XtraBackup options for backup, apply, and move stages. The SST script doesn\u2019t alter, tweak, or optimize these options.
Note
Although these options are related to XtraBackup SST, they cannot be specified in my.cnf
, because they are for passing innobackupex options.
This option is used to configure initial timeout (in seconds) to receive the first packet via SST. This has been implemented, so that if the donor node fails somewhere in the process, the joiner node will not hang up and wait forever.
By default, the joiner node will not wait for more than 100 seconds to get a donor node. The default should be sufficient, however, it is configurable, so you can set it appropriately for your cluster. To disable initial SST timeout, set sst-initial-timeout=0
.
Note
If you are using wsrep_sst_donor
, and you want the joiner node to strictly wait for donors listed in the variable and not fall back (that is, without a terminating comma at the end), and there is a possibility of all nodes in that variable to be unavailable, disable initial SST timeout or set it to a higher value (maximum threshold that you want the joiner node to wait). You can also disable this option (or set it to a higher value) if you believe all other nodes in the cluster can potentially become unavailable at any point in time (mostly in small clusters) or there is a high network latency or network disturbance (which can cause donor selection to take longer than 100 seconds).
This option configures the time the SST operation waits on the joiner to receive more data. The size of the joiner\u2019s sst directory is checked for the amount of data received. For example, the directory has received 50MB of data. The operation rechecks the data size after the default value, 120 seconds, has elapsed. If the data size is still 50MB, this operation is aborted. If the data has increased, the operation continues.
An example of setting the option:
[sst]\nsst-idle-timeout=0\n
"},{"location":"xtrabackup-sst.html#tmpdir","title":"tmpdir","text":"Parameter Description Default: Empty Unit: /path/to/tmp/dir This option specifies the location for storing the temporary file on a donor node where the transaction log is stored before streaming or copying it to a remote host.
Note
This option can be used on joiner node to specify non-default location to receive temporary SST files. This location must be large enough to hold the contents of the entire database. If tmpdir is empty then default location datadir/.sst will be used.
The tmpdir
option can be set in the following my.cnf
groups:
[sst]
is the primary location (others are ignored)
[xtrabackup]
is the secondary location (if not specified under [sst]
)
[mysqld]
is used if it is not specified in either of the above
wsrep_debug
Specifies whether additional debugging output for the database server error log should be enabled. Disabled by default.
This option can be set in the following my.cnf
groups:
Under [mysqld]
it enables debug logging for mysqld
and the SST script
Under [sst]
it enables debug logging for the SST script only
4
Specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1
). The value is passed using the --encrypt-threads
option in XtraBackup.
This option affects only SST with XtraBackup and should be specified under the [sst]
group.
4
Specifies the number of threads that XtraBackup should use to create backups. See the --parallel
option in XtraBackup.
This option affects only SST with XtraBackup and should be specified under the [sst]
group.
Each suppored version of Percona XtraDB Cluster is tested against a specific version of Percona XtraBackup:
Percona XtraDB Cluster 5.6 requires Percona XtraBackup 2.3
Percona XtraDB Cluster 5.7 requires Percona XtraBackup 2.4
Percona XtraDB Cluster 8.0 requires Percona XtraBackup 8.0
Other combinations are not guaranteed to work.
The following are optional dependencies of Percona XtraDB Cluster introduced by wsrep_sst_xtrabackup-v2
(except for obvious and direct dependencies):
qpress
for decompression. It is an optional dependency of Percona XtraBackup and it is available in our software repositories.
my_print_defaults
to extract values from my.cnf
. Provided by the server package.
openbsd-netcat
or socat
for transfer. socat
is a direct dependency of Percona XtraDB Cluster and it is the default.
xbstream
or tar
for streaming. xbstream
is the default.
pv
is required for progress
and rlimit
.
mkfifo
is required for progress
. Provided by coreutils
.
mktemp
is required. Provided by coreutils
.
which
is required.
Settings related to XtraBackup-based Encryption are no longer allowed in PXC 8.0 when used for SST. If it is detected that XtraBackup-based Encryption is enabled, PXC will produce an error.
The XtraBackup-based Encryption is enabled when you specify any of the following options under [xtrabackup]
in my.cnf
:
encrypt
encrypt-key
encrypt-key-file
The amount of memory for XtraBackup is defined by the --use-memory
option. You can pass it using the inno-apply-opts
option under [sst]
as follows:
[sst]\ninno-apply-opts=\"--use-memory=500M\"\n
If it is not specified, the use-memory
option under [xtrabackup]
will be used:
[xtrabackup]\nuse-memory=32M\n
If neither of the above are specified, the size of the InnoDB memory buffer will be used:
[mysqld]\ninnodb_buffer_pool_size=24M\n
"},{"location":"xtrabackup-sst.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"xtradb-cluster-version-numbers.html","title":"Understand version numbers","text":"A version number identifies the product release. The product contains the latest Generally Available (GA) features at the time of that release.
8.0.20 -11. 2 Base version Minor build Custom buildPercona uses semantic version numbering, which follows the pattern of base version, minor build, and an optional custom build. Percona assigns unique, non-negative integers in increasing order for each minor build release. The version number combines the base Percona Server for MySQL version number, the minor build version, and the custom build version, if needed.
The version numbers for Percona XtraDB Cluster 8.0.20-11.2 define the following information:
Base version - the leftmost set of numbers that indicate the Percona Server for MySQL version used as a base. An increase in the base version resets the minor build version and the custom build version to 0.
Minor build version - an internal number that increases with every Percona XtraDB Cluster release, and the custom build number is reset to 0.
Custom build version - an optional number assigned to custom builds used for bug fixes. The features don\u2019t change unless the fixes include those features. For example, Percona XtraDB Cluster 8.0.20-11.1, 8.0.20-11.2, and 8.0.20-11.3 are based on the same Percona Server for MySQL version and minor build version but are custom build versions.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"yum.html","title":"Install Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS","text":"A list of the supported platforms by products and versions is available in Percona Software and Platform Lifecycle.
We gather Telemetry data in the Percona packages and Docker images.
You can install Percona XtraDB Cluster with the following methods:
Use the official repository using YUM
Download and manually install the Percona XtraDB Cluster packages from Percona Product Downloads.
Use the Percona Software repositories
This documentation describes using the Percona Software repositories.
"},{"location":"yum.html#prerequisites","title":"Prerequisites","text":"Installing Percona XtraDB Cluster requires that you either are logged in as a user with root privileges or can run commands with sudo.
Percona XtraDB Cluster requires the specific ports for communication. Make sure that the following ports are available:
3306
4444
4567
4568
For information on SELinux, see Enabling SELinux.
"},{"location":"yum.html#install-from-percona-software-repository","title":"Install from Percona Software Repository","text":"For more information on the Percona Software repositories and configuring Percona Repositories with percona-release
, see the Percona Software Repositories Documentation.
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release enable-only pxc-80 release\n$ sudo percona-release enable tools release\n$ sudo yum install percona-xtradb-cluster\n
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup pxc-80\n$ sudo yum install percona-xtradb-cluster\n
"},{"location":"yum.html#after-installation","title":"After installation","text":"After the installation, start the mysql
service and find the temporary password using the grep
command.
$ sudo service mysql start\n$ sudo grep 'temporary password' /var/log/mysqld.log\n
Use the temporary password to log into the server:
$ mysql -u root -p\n
Run an ALTER USER
statement to change the temporary password, exit the client, and stop the service.
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';\nmysql> exit\n$ sudo service mysql stop\n
"},{"location":"yum.html#next-steps","title":"Next steps","text":"Configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.
"},{"location":"yum.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.29-21.html","title":"Percona XtraDB Cluster 8.0.29-21 (2022-09-12)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.29-21.html#release-highlights","title":"Release Highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.29-21 (2022-08-08) release notes.
The improvements and bug fixes for MySQL 8.0.29, provided by Oracle, and included in Percona Server for MySQL are the following:
The Performance Schema tracks if a query was processed on the PRIMARY engine, InnoDB, or a SECONDARY engine, HeatWave. An EXECUTION_ENGINE column, which indicates the engine used, was added to the Performance Schema statement event tables and the sys.processlist and the sys.x$processlist views.
Added support for the IF NOT EXISTS option for the CREATE FUNCTION, CREATE PROCEDURE, and CREATE TRIGGER statements.
Added support for ALTER TABLE \u2026 DROP COLUMN ALGORITHM=INSTANT.
An anonymous user with the PROCESS privilege was unable to select processlist table rows.
Find the full list of bug fixes and changes in the MySQL 8.0.29 Release Notes.
Note
Percona Server for MySQL has changed the default for the supported DDL column operations to ALGORITHM=INPLACE. This change fixes the corruption issue with the INSTANT ADD/DROP COLUMNS (find more details in PS-8292.
In MySQL 8.0.29, the default setting for supported DDL operations is ALGORITHM=INSTANT. You can explicitly specify ALGORITHM=INSTANT in DDL column operations.
"},{"location":"release-notes/8.0.29-21.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3982: When the replica node is also an asynchronous slave and, while joining the cluster, this node was not ready to accept connections, a SQL thread failed at the start.
PXC-3118: A fix for when, using a thread pool, a brute force abort for a metadata locking (MDL) subsystem conflict stalled.
PXC-3999: The cluster was stalled on Waiting for acl cache lock
with concurrent user DDL commands.
Debian 9 is no longer supported.
"},{"location":"release-notes/8.0.29-21.html#useful-links","title":"Useful Links","text":"The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.30-22.html","title":"Percona XtraDB Cluster 8.0.30-22.md (2022-12-28)","text":"Release date December 28, 2022 Install instructions Install Percona XtraDB Cluster Download this version Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
For paid support, managed services or consulting services, contact Percona Sales.
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.30-22.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.30-22 (2022-11-21) release notes.
Note
The following Percona Server for MySQL 8.0.30 features are not supported in this version of Percona XtraDB Cluster:
Amazon Key Management Service
Key Management Interoperability Protocol
The features will be supported in the next version of Percona XtraDB Cluster.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.30 and included in Percona Server for MySQL are the following:
Supports Generated Invisible Primary Keys(GIPK). This feature automatically adds a primary key to InnoDB tables without a primary key. The generated key is always named my_row_id
. The GIPK feature is not enabled by default. Enable the feature by setting sql_generate_invisible_primary_key
to ON.
The InnoDB_doublewrite system has two new settings:
DETECT_ONLY
. This setting allows only metadata to be written to the doublewrite buffer. Database page content is not written to the buffer. Recovery does not use the buffer to fix incomplete page writes. Use this setting only when you need to detect incomplete page writes.
DETECT_AND_RECOVER
. This setting is equivalent to the current ON setting. The doublewrite buffer is enabled. Database page content is written to the buffer and the buffer is accessed to fix incomplete page writes during recovery.
The -skip_host_cache
server option is deprecated and will be removed in a future release. Use SET GLOBAL host_cache_size
= 0 or set host_cache_size
= 0.
Find the full list of bug fixes and changes in the MySQL 8.0.30 release notes.
"},{"location":"release-notes/8.0.30-22.html#bug-fixes","title":"Bug fixes","text":"PXC-3639: The buffer overflow was not considered when using strncpy
in WSREP
patch.
PXC-3821: The truncation of the performance_schema
table on a node was replicated across the cluster.
PXC-4012: The replica node left the cluster when executing CREATE USER
with password_history
option simultaneously.
PXC-4033: When the prepared statement is executed in parallel to the DDL modifying the table that the prepared statement uses, the server fails with an assertion saying that the prepared statement transaction was aborted, so it cannot be committed.
PXC-4048: gra_x_y_v2.log
files created in case of failures were empty.
Percona XtraDB Cluster 8.0.30-22 supports Oracle Linux/Red Hat Enterprise Linux 9.
Percona XtraDB Cluster 8.0.30-22 supports Ubuntu 22.04.
The Percona XtraDB Cluster GitHub location
Contribute to the documentation
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.31-23.2.html","title":"Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)","text":"Release date April 04, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.31-23.2.html#release-highlights","title":"Release highlights","text":"This release of Percona XtraDB Cluster 8.0.31-23 includes the fix to the security vulnerability CVE-2022-25834 with PXB-2977.
"},{"location":"release-notes/8.0.31-23.2.html#useful-links","title":"Useful links","text":"The Percona XtraDB Cluster GitHub location
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.31-23.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.31-23.html","title":"Percona XtraDB Cluster 8.0.31-23 (2023-03-14)","text":"Release date 2024-04-03 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.31-23.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.31-23 (2022-11-21) release notes.
This release adds the following feature in tech preview:
Improvements and bug fixes introduced by Oracle for MySQL 8.0.31 and included in Percona Server for MySQL are the following:
MySQL adds support for the SQL standard INTERSECT
and EXCEPT
table operators.
InnoDB supports parallel index builds. This improves index build performance. The sorted index entries are loaded into a B-tree in a multithread. In previous releases, this action was performed by a single thread.
The Performance and sys schemas show metrics for the global and session memory limits introduced in MySQL 8.0.28.
The following columns have been added to the Performance Schema tables:
Performance Schema tables Columns SETUP_INSTRUMENTS FLAGS THREADS CONTROLLED_MEMORY, MAX_CONTROLLED_MEMORY, TOTAL_MEMORY, MAX_TOTAL_MEMORY EVENTS_STATEMENTS_CURRENT, EVENTS_STATEMENTS_HISTORY, EVENTS_STATEMENTS_HISTORY_LONG MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORY Statement Summary Tables MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORY Performance Schema Connection Tables MAX_SESSION_CONTROLLED_MEMORY, MAX_SESSION_TOTAL_MEMORY PREPARED_STATEMENTS_INSTANCES MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORYThe following columns have been added to the sys schema STATEMENT_ANALYSIS
and X$STATEMENT_ANALYSIS
views:
MAX_CONTROLLED_MEMORY
MAX_TOTAL_MEMORY
The controlled_by_default
flag has been added to the PROPERTIES
column of the SETUP_INSTRUMENTS
table.
Now, you can add and remove non-global memory instruments to the set of controlled-memory instruments. To do this, set the value of the FLAGS
column of SETUP_INSTRUMENTS
.
SQL> UPDATE PERFORMANCE_SCHEMA.SETUP_INTRUMENTS SET FLAGS=\"controlled\" \nWHERE NAME='memory/sql/NET::buff';\n
The audit_log_flush
variable has been deprecated and will be removed in future releases.
Find the full list of bug fixes and changes in the MySQL 8.0.31 Release Notes.
"},{"location":"release-notes/8.0.31-23.html#new-features","title":"New Features","text":"Added support for GCache and Write-Set encryption.
PXC-3574: Added support for the wsrep_mode
variable.
PXC-3989: Added support for keyring components.
PXC-4077: Injecting an empty transaction caused GTID inconsistencies between nodes.
PXC-4120: Enabling wsrep-debug created multiple entries of wsrep_commit_empty()
in the Error log.
PXC-4126: When stream replication and TOI are active, the CREATE USER
statement was not allowed.
PXC-4116: A PXC replica node stalled with parallel asynchronous parallel replication.
PXC-4148: A fix for the MDL conflict db= ticket=10 solved by abort
error.
The Percona XtraDB Cluster GitHub location
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.31-23.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.32-24.2.html","title":"Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)","text":"Release date May 24, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.32-24.2.html#release-highlights","title":"Release highlights","text":"This release of Percona XtraDB Cluster 8.0.32-24 includes the fix for PXC-4211.
"},{"location":"release-notes/8.0.32-24.2.html#bug-fixes","title":"Bug fixes","text":"PXC-4211: The server exited on the binary log rotation.
PXC-4217: The cluster could intermittently abort a node on an insert query.
PXC-4222: A node abruptly leaving the cluster caused the applier thread to hang on all the remaining nodes.
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.32-24.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.32-24.html","title":"Percona XtraDB Cluster 8.0.32-24 (2023-04-18)","text":"Release date April 18, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.32-24.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.32-24 (2023-03-20) release notes.
Percona decided to revert the following MySQL bug fix:
The data and the GTIDs backed up by mysqldump were inconsistent when the options --single-transaction
and --set-gtid-purged=ON
were both used. This was because, between the start of the transaction by mysqldump and the fetching of GTID_EXECUTED, GTIDs on the server could have already increased. With this fix, a FLUSH TABLES WITH READ LOCK is performed before fetching GTID_EXECUTED to ensure its value is consistent with the snapshot taken by mysqldump.
The MySQL fix also added a requirement for the RELOAD privilege when using --single-transaction and executing FLUSH TABLES WITH READ LOCK. (MySQL bug #109701, MySQL bug #105761)
The Percona Server version of the mysqldump
utility, in some modes, can be used with MySQL Server. This utility provides a temporary workaround for the \u201cadditional RELOAD privilege\u201d limitation introduced by Oracle MySQL Server 8.0.32.
For more information, see the Percona Performance Blog A Workaround for the \u201cRELOAD/FLUSH_TABLES privilege required\u201d Problem When Using Oracle mysqldump 8.0.32.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.32 and included in Percona Server for MySQL are the following:
A replica can add a Generated Invisible Primary Key (GIPK) to any InnoDB table. To achieve this behavior, the GENERATE
value is added as a possible value for the CHANGE REPLICATION SOURCE TO
statement\u2019s REQUIRE_TABLE_PRIMARY_KEY_CHECK
option.
The REQUIRE_TABLE_PRIMARY_KEY_CHECK = GENERATE
option can be used on a per-channel basis.
Setting sql_generate_invisible_primary_key
on the source is ignored by a replica because this variable is not replicated. This behavior is inherited from the previous releases.
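A minimal sketch of enabling this behavior for a single replication channel; the channel name is hypothetical:
CHANGE REPLICATION SOURCE TO
  REQUIRE_TABLE_PRIMARY_KEY_CHECK = GENERATE
  FOR CHANNEL 'ch1';  -- run on the replica; the channel must be stopped first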
An upgrade from 8.0.28 caused undetectable problems, such as server exit and corruption.
A fix for an issue where, after an upgrade, all columns added with ALGORITHM=INSTANT were materialized and had version=0 for any new row inserted. Now, adding a column with ALGORITHM=INSTANT fails if the maximum possible size of a row exceeds the row size limit, so that all new rows with materialized ALGORITHM=INSTANT columns stay within the row size limit. (Bug #34558510)
After a drop, adding a specific column using the INSTANT algorithm could cause a data error and a server exit. (Bug #34122122)
An online rebuild DDL no longer crashes after a column is added with ALGORITHM=INSTANT
. Thank you Qingda Hu for reporting this bug. (Bug #33788578, Bug #106279)
PXC-3936: State transfer with disabled SSL in wsrep_provider_options
option crashed the Receiver and Donor nodes.
PXC-3976: The wsrep status variables were not updated when an 8.0 node joined a 5.7 cluster.
PXC-4137: The WSREP
applier threads failed to modify read-only schemas.
PXC-4162: When doing a rolling upgrade from 5.7 to 8.0, wsrep_cluster_size
was 0.
PXC-4163: The pxc_strict_mode
option did not detect version mismatch.
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.32-24.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.33-25.html","title":"Percona XtraDB Cluster 8.0.33-25 (2023-08-02)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.33-25.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.33-25 (2023-06-15) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.33 and included in Percona XtraDB Cluster are the following:
The INSTALL COMPONENT
includes the SET
clause. The SET
clause sets the values of component system variables when installing one or several components. This reduces the inconvenience and limitations associated with assigning variable values in other ways.
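A minimal sketch of the new clause; the component and variable names below are hypothetical:
INSTALL COMPONENT 'file://component_example'
  SET GLOBAL component_example.option_one = ON;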
The mysqlbinlog --start-position
accepts values up to 18446744073709551615
. If the --read-from-remote-server
or --read-from-remote-source
option is used, the maximum is 4294967295
. (Bug #77818, Bug #21498994)
Using a generated column with DEFAULT(col_name)
to specify the default value for a named column is not allowed and throws an error message. (Bug #34463652, Bug #34369580)
Not all possible error states were reported during the binary log recovery process. (Bug #33658850)
User-defined collations are deprecated. The usage of the following user-defined collations causes a warning that is written to the log:
When COLLATE
is followed by the name of a user-defined collation in an SQL statement.
When the name of a user-defined collation is used as the value of collation_server
, collation_database
, or collation_connection
.
Support for user-defined collations will be removed in a future release of MySQL.
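For example, either usage above now produces the deprecation warning; a minimal sketch, assuming a user-defined collation named utf8mb4_custom_ci is installed on the server:
SET collation_connection = 'utf8mb4_custom_ci';  -- writes a deprecation warning to the log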
Find the full list of bug fixes and changes in the MySQL 8.0.33 Release Notes.
"},{"location":"release-notes/8.0.33-25.html#new-features","title":"New features","text":"PXC-667: Unexpected exit during the BF-abort of active HANDLER <table> OPEN AS <alias>
.
PXC-679: An undetected state gap discovery caused the server to hang on shutdown.
PXC-4222: When a node abruptly left the cluster, the applier thread caused all the other nodes in the cluster to hang.
PXC-4225: In INFORMATION_SCHEMA.PROCESSLIST, the COMMAND value was incorrect.
PXC-4228: The NBO mode corrupted the binary log.
PXC-4233: A cluster state interruption during NBO can lead to a permanent cluster lock.
PXC-4253: The merge to 8.0.33 fixes a number of CVE vulnerabilities.
PXC-4258: A failure to add a foreign key resulted in an inconsistency.
PXC-4268: If the ALTER DEFINER VIEW
was changed with insufficient privileges, the Percona XtraDB Cluster node got a Disconnected/Inconsistent state.
PXC-4278: Renaming a table with NBO caused a server exit.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.33-25.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.33-25.upd.html","title":"Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.33-25.upd.html#known-issues","title":"Known issues","text":"If you use Galera Arbitrator (garbd), we recommend that you do not upgrade to 8.0.33 because garbd-8.0.33
may cause synchronization issues and extensive usage of CPU resources.
If you already upgraded to garbd-8.0.33
, we recommend downgrading to garbd-8.0.32-24-2
by performing the following steps:
Uninstall the percona-xtradb-cluster-garbd_8.0.33-25
package.
Download the percona-xtradb-cluster-garbd_8.0.32-24-2
package from Percona Software Downloads manually.
Install the percona-xtradb-cluster-garbd_8.0.32-24-2
package manually.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now
"},{"location":"release-notes/8.0.33-25.upd.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.34-26.html","title":"Percona XtraDB Cluster 8.0.34-26 (2023-11-01)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.34-26.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.34-26 (2023-09-26) release notes.
Percona XtraDB Cluster implements telemetry that fills in the gaps in our understanding of how you use Percona XtraDB Cluster to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the Telemetry on Percona XtraDB Cluster document.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.34 and included in Percona XtraDB Cluster are the following:
Adds mysql_binlog_open()
, mysql_binlog_fetch()
, and mysql_binlog_close()
functions to the libmysqlclient.so shared library. These functions enable developers to access a MySQL server binary log.
For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server is updated from OpenSSL 1.1.1 to OpenSSL 3.0.9.
The mysqlpump
client utility program is deprecated. The use of this program causes a warning. The mysqlpump
client may be removed in future releases. Applications that depend on mysqlpump can use mysqldump or MySQL Shell Utilities instead.
The sync_relay_log_info
server system variable is deprecated. Using this variable or its equivalent startup --sync-relay-log-info
option causes a warning. This variable may be removed in future releases. The applications that use this variable should be rewritten not to depend on it before the variable is removed.
The binlog_format
server system variable is deprecated and may be removed in future releases. The functionality associated with this variable, which changes the binary logging format, is also deprecated.
When binlog_format
is removed, MySQL server supports only row-based binary logging. Thus, new installations should use only row-based binary logging. Migrate the existing installations that use the statement-based or mixed logging format to the row-based format.
The system variables log_bin_trust_function_creators
and log_statements_unsafe_for_binlog
used in the context of statement-based logging are also deprecated and may be removed in future releases.
Setting or selecting the values of deprecated variables causes a warning.
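For instance, a minimal sketch of a statement that now triggers such a deprecation warning:
SET GLOBAL binlog_format = 'ROW';  -- still accepted, but deprecated
SHOW WARNINGS;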
The mysql_native_password
authentication plugin is deprecated and may be removed in future releases. CREATE USER, ALTER USER, and SET PASSWORD operations insert a deprecation warning into the server error log if an account attempts to authenticate using mysql_native_password
as an authentication method.
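A minimal sketch of a statement that now records such a deprecation warning; the account name and password are hypothetical:
CREATE USER 'app'@'localhost' IDENTIFIED WITH mysql_native_password BY 'example_password';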
The keyring_file
and keyring_encrypted_file
plugins are deprecated. These keyring plugins are replaced with the component_keyring_file
and component_keyring_encrypted_file
components.
Find the full list of bug fixes and changes in the MySQL 8.0.34 Release Notes.
"},{"location":"release-notes/8.0.34-26.html#bug-fixes","title":"Bug fixes","text":"PXC-4219: Starting a Percona XtraBackup process and issuing a START REPLICA
command simultaneously could deadlock the server.
PXC-4238: Running either the asynchronous_connection_failover_add_source
user-defined function or the asynchronous_connection_failover_delete_source
user-defined function generated an errant transaction, which could prevent a failover in the future.
PXC-4255: Running ALTER USER/SET PASSWORD
and FLUSH PRIVILEGES
simultaneously on different Percona XtraDB Cluster nodes stalled the cluster.
PXC-4284: If a MySQL user was not created before the GRANT option, the Percona XtraDB Cluster node was disconnected and needed a complete state transfer (SST).
PXC-4288: Galera Arbitrator (garbd) used 100% CPU.
PXC-4302: The GRANT statement could be replicated in a wrong way if partial_revokes=1
was enabled.
PXC-4310: A warning message had an incorrect link.
PXC-4296: The garbd 8.0.33 reported a wrong version.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.34-26.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.35-27.html","title":"Percona XtraDB Cluster 8.0.35-27 (2024-01-17)","text":"Get started with Quickstart Guide for Percona XtraDB Cluster.
Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.35-27.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.35-27 (2023-12-27) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.35 and included in Percona XtraDB Cluster are the following:
A future release may remove deprecated variables and options. The usage of these deprecated items may cause a warning. We recommend migrating from deprecated variables and options as soon as possible.
This release deprecates the following variables and options:
The binlog_transaction_dependency_tracking
server system variable
The old
and new
server system variables
The --character-set-client-handshake
server variable
INFORMATION_SCHEMA.PROCESSLIST
The implementation of the SHOW PROCESSLIST
command that uses the INFORMATION_SCHEMA.PROCESSLIST
table
The performance_schema_show_processlist
variable
Find the full list of bug fixes and changes in the MySQL 8.0.35 Release Notes.
"},{"location":"release-notes/8.0.35-27.html#bug-fixes","title":"Bug fixes","text":"PXC-4343: The table spaces were corrupted during SST that caused the Xtrabackup failure with the Header page contains inconsistent data in datafile
error (Thanks to Andrew Garner for his help in fixing this issue.)
PXC-4336: The Percona XtraDB Cluster node disconnected from the cluster due to CHECK CONSTRAINT.
PXC-4332: The Percona XtraDB Cluster node disconnected from the cluster if the local variable was changed at the session level.
PXC-4318: The Percona XtraDB Cluster node can serve as an async replica for another master node. However, when the same row was modified on both the Percona XtraDB Cluster node and the master node, the Percona XtraDB Cluster node got stuck due to replication conflicts.
PXC-4317: On newer platforms like AlmaLinux, adding a new node to an existing cluster was unsuccessful because the readlink command used during the SST process on joiner failed (Thanks to Mikael Gbai for reporting this issue.)
PXC-4315: The logs like MDL conflict ... solved by abort
were printed, but no transaction was aborted (Thanks to Arkadiusz Petruczynik for reporting this issue.)
PXC-4312: When DROP EVENT IF EXISTS was executed for a non-existing event, the event was binlogged with a GTID containing the UUID of the local server instead of the global cluster-wide UUID.
PXC-4298: The node was disconnected when using ALTER TABLE
, including ADD UNIQUE
on a table containing duplicate entries. (Thanks to Vit Novak for reporting this issue.)
PXC-4237: wsrep_sst_xtrabackup-v2
failed when adding a new node.
PXC-4179: The wsrep applier threads and rollbacker threads were not reported by performance_schema.processlist
.
PXC-4034: The usage of sql_log_bin=0
broke GTID consistency.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.35-27.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.36-28.html","title":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)","text":"Get started with Quickstart Guide for Percona XtraDB Cluster.
Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.36-28.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.36-28 (2024-03-04) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.36 and included in Percona XtraDB Cluster are the following:
The hashing algorithm employed yielded poor performance when using a HASH field to check for uniqueness. (Bug #109548, Bug #34959356)
All statement instrument elements that begin with statement/sp/%
, except statement/sp/stmt
, are disabled by default.
Find the complete list of bug fixes and changes in the MySQL 8.0.36 Release Notes.
"},{"location":"release-notes/8.0.36-28.html#bug-fixes","title":"Bug fixes","text":"PXC-4316: If the node shut down while being partitioned from the cluster, started again, and then rejoined the cluster, the other part of the cluster would still wait for the partitioned node.
PXC-4341: When running FLUSH TABLES
after a statement was prepared, the node could exit due to broken consistency.
PXC-4348: The joiner node exited with Metadata Lock BF-BF
conflict during IST
.
PXC-4362: The node could leave the cluster when binary logging was enabled and a function was created without the SUPER privilege.
PXC-4365: The node could leave the cluster when the row size was too large and had more than three nvarchar
columns.
PXC-4340: The server exited when executing a complicated query with 9 CTEs.
PXC-4367: The InnoDB semaphore wait timeout caused a server exit under heavy load.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.36-28.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html","title":"Percona XtraDB Cluster 8.0.18-9.3","text":"Date
April 29, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.18-9.3 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.18-9 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#improvements","title":"Improvements","text":"PXC-2495: Modified documentation for wsrep_sst_donor to include results when IP address is used
PXC-3002: Enhanced service_startup_timeout options to allow it to be disabled
PXC-2331: Modified the SST process to run mysql_upgrade
PXC-2991: Enhanced Strict Mode Processing to handle Group Replication Plugin
PXC-2985: Enabled Service for Automated Startup on Reboot with valid grastate.dat
PXC-2980: Modified Documentation to include AutoStart Up Process after Installation
PXC-2722: Enabled Support for Percona XtraBackup (PXB) 8.0.8 in Percona XtraDB Cluster (PXC) 8.0
PXC-2602: Added Ability to Configure xbstream options with wsrep_sst_xtrabackup
PXC-2455: Implemented the use of Percona XtraBackup (PXB) 8.0.5 in Percona XtraDB Cluster (PXC) 8.0
PXC-2259: Updated wsrep-files-index.html to include new files created by Percona XtraDB Cluster (PXC)
PXC-2197: Modified SST Documentation to Include Package Dependencies for Percona XtraBackup (PXB)
PXC-2194: Improvements to the PXC upgrade guide
PXC-2191: Revised Documentation on innodb_deadlock to Clarify Cluster Level Deadlock Processing
PXC-3017: Remove these SST encryption methods: encrypt=1, encrypt=2, and encrypt=3
PXC-2189: Modified Reference Architecture for Percona XtraDB Cluster (PXC) to include ProxySQL
PXC-2537: Modified mysqladmin password command to prevent node crash
PXC-2958: Modified User Documentation to include wsrep_certification_rules and cert.optimistic_pa
PXC-2045: Removed debian.cnf reference from logrotate/logcheck configuration Installed on Xenial/Stretch
PXC-2292: Modified Processing to determine Type of Key Cert when IST/SST
PXC-2974: Modified Percona XtraDB Cluster (PXC) Dockerfile to Integrate Galera wsrep recovery Process
PXC-3145: When the joiner fails during an SST, the mysqld process stays around (doesn\u2019t exit)
PXC-3128: Removed Prior Commit to Allow High Priority High Transaction Processing
PXC-3076: Modified Galera build to remove python3 components
PXC-2912: Modified netcat Configuration to Include -N Flag on Donor
PXC-2476: Modified process to determine and process IST or SST and with keyring_file processing
PXC-2204: Modified Shutdown using systemd after Bootstrap to provide additional messaging
PXB-2142: Transition key was written to backup / stream
PXC-2969: Modified pxc_maint_transition_period Documentation to Include Criteria for Use
PXC-2978: Certificate Information not Displayed when pxc-encrypt-cluster-traffic=ON
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3043: Update required donor version to PXC 5.7.28
PXC-3063: Data at Rest Encryption not Encrypting Record Set Cache
PXC-3092: Abort startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
PXC-3159: Killing the Donor or Connection lost during SST Process Leaves Joiner Hanging
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html","title":"Percona XtraDB Cluster 8.0.19-10","text":"Date
June 18, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.19-10 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.19-10 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#improvements","title":"Improvements","text":"PXC-2189: Modify Reference Architecture for Percona XtraDB Cluster (PXC) to include ProxySQL
PXC-3182: Modify processing to not allow writes on 8.0 nodes while 5.7 nodes are still on the cluster
PXC-3187: Add dependency package installation note in PXC binary tarball installation doc.
PXC-3138: Document mixed cluster write (PXC8 while PXC5.7 nodes are still part of the cluster) should not be completed.
PXC-3066: Document that pxc-encrypt-cluster-traffic=OFF is not just about traffic encryption
PXC-2993: Document the dangers of running with strict mode disabled and Group Replication at the same time
PXC-2980: Modify Documentation to include AutoStart Up Process after Installation
PXC-2604: Modify garbd processing to support Operator
PXC-3298: Correct galera_var_reject_queries test to remove display value width
PXC-3320: Correction on PXC installation doc
PXC-3270: Modify wsrep_ignore_apply_errors variable default to restore 5.x behavior
PXC-3179: Correct replication of CREATE USER \u2026 RANDOM PASSWORD
PXC-3080: Modify to process the ROTATE_LOG_EVENT synchronously to perform proper cleanup
PXC-2935: Remove incorrect assertion when --thread_handling=pool-of-threads is used
PXC-2500: Modify ALTER USER processing when executing thread is Galera applier thread to correct assertion
PXC-3234: Correct documentation link in spec file
PXC-3204: Modify to set wsrep_protocol_version correctly when wsrep_auto_increment_control is disabled
PXC-3189: Correct SST processing for super_read_only
PXC-3184: Modify startup to correct crash when socat not found and SST Fails
PXC-3169: Modify wsrep_reject_queries to enhance error messaging
PXC-3165: Allow COM_FIELD_LIST to be executed when WSREP is not ready
PXC-3145: Modify to end mysqld process when the joiner fails during an SST
PXC-3043: Update required donor version to PXC 5.7.28 (previously was Known Issue)
PXC-3036: Document correct method for starting, stopping, bootstrapping
PXC-3287: Correct link displayed on help client command
PXC-3031: Modify processing for garbd to prevent issues when multiple requests are started at approximately the same time and request an SST transfers to prevent SST from hanging
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3092: Abort startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
PXC-3159: Killing the Donor or Connection lost during SST Process Leaves Joiner Hanging
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html","title":"Percona XtraDB Cluster 8.0.20-11.2","text":"Date
October 9, 2020
Installation
Installing Percona XtraDB Cluster
This release fixes the security vulnerability CVE-2020-15180
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html","title":"Percona XtraDB Cluster 8.0.20-11.3","text":"Date
October 22, 2020
Installation
Installing Percona XtraDB Cluster
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html","title":"Percona XtraDB Cluster 8.0.20-11","text":"Date
October 1, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.20-11 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.20-11 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#improvements","title":"Improvements","text":"PXC-3159: Modify error handling to close the communication channels and abort the joiner node when donor crashes (previously was Known Issue)
PXC-3352: Modify wsrep_row_upd_check_foreign_constraints() to remove the check for DELETE
PXC-3371: Fix Directory creation in build-binary.sh
PXC-3370: Provide binary tarball with shared libs and glibc suffix & minimal tarballs
PXC-3360: Update sysbench commands in PXC-ProxySQL configuration doc page
PXC-3312: Prevent cleanup of statement diagnostic area in case of transaction replay.
PXC-3167: Correct GCache buffer repossession processing
PXC-3347: Modify PERCONA_SERVER_EXTENSION for bintarball and modify MYSQL_SERVER_SUFFIX
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3092: Log warning at startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html","title":"Percona XtraDB Cluster 8.0.21-12.1","text":"Date
December 28, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.21-12.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.21-12 for more details on these changes.
Implement an inconsistency voting policy. In the best case scenario, the node with the inconsistent data is aborted and the cluster continues to operate.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#improvements","title":"Improvements","text":"PXC-3353: Modify error handling in Garbd when donor crashes during SST or when an invalid donor name is passed to it
PXC-3468: Resolve package conflict when installing PXC 5.7 on RHEL/CentOS8
PXC-3418: Prevent DDL-DML deadlock by making in-place ALTER take shared MDL for the whole duration.
PXC-3416: Fix memory leaks in garbd when started with invalid group name
PXC-3445: Correct MTR test failures
PXC-3442: Fix crash when log_slave_updates=ON and consistency check statement is executed
PXC-3424: Fix error handling when the donor is not able to serve SST
PXC-3404: Fix memory leak in garbd while processing CC actions
PXC-3191: Modify Read-Only checks on wsrep_* tables when in super_read_only
PXC-3039: No useful error messages if an SSL-disabled node tries to join an SSL-enabled cluster
PXC-3092: Log a warning at startup if a keyring is specified but cluster traffic encryption is turned off
PXC-3093: Completed SST Transfer incorrectly logged by garbd (Timing is not correct)
PXC-3159: Modify the error handling to close the communication channels and abort the joiner node when the donor crashes
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html","title":"Percona XtraDB Cluster 8.0.22-13.1","text":"Date
March 22, 2021
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.22-13.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.22-13 for more details on these changes.
This release fixes security vulnerability CVE-2021-27928, a similar issue to CVE-2020-15180
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#improvements","title":"Improvements","text":"PXC-3575: Implement package changes for SELinux and AppArmor
PXC-3115: Create Default SELinux and AppArmor policy
PXC-3536: Modify processing to not allow threads/queries to be killed if the thread is in TOI
PXC-3565: Correct Performance of SELECT in PXC
PXC-3502: Correct condition in thd_binlog_format() function for List Index process (Thanks to user Pawe\u0142 Bromboszcz for reporting this issue)
PXC-3501: Modify wsrep_row_upd_check_foreign_constraints() to include foreign key dependencies in the writesets for DELETE query (Thanks to user Steven Gales for reporting this issue)
PXC-2913: Correct MDL locks assertion when wsrep provider is unloaded
PXC-3475: Adjust mysqld_safe script to parse 8.0 log style properly
PXC-3039: No useful error messages if an SSL-disabled node tries to join an SSL-enabled cluster
PXC-3092: Log a warning at startup if a keyring is specified, but cluster traffic encryption is turned off
PXC-3093: Completed SST Transfer incorrectly logged by garbd (Timing is not correct)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html","title":"Percona XtraDB Cluster 8.0.23-14.1","text":"Date
June 9, 2021
Installation
Installing Percona XtraDB Cluster.
Percona XtraDB Cluster 8.0.23-14.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.23-14 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#improvements","title":"Improvements","text":"PXC-3464: Data is not propagated with SET SESSION sql_log_bin = 0
PXC-3146: Galera/SST is not looking for the default data directory location for SSL certs
PXC-3226: Results from CHECK TABLE from PXC server can cause the client libraries to crash
PXC-3381: Modify GTID functions to use a different char set
PXC-3437: Node fails to join in the endless loop
PXC-3446: Memory leak during server shutdown
PXC-3538: Garbd crashes after successful backup
PXC-3580: Aggressive network outages on one node makes the whole cluster unusable
PXC-3596: Node stuck in aborting SST
PXC-3645: Deadlock during ongoing transaction and RSU
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html","title":"Percona XtraDB Cluster 8.0.25-15.1","text":"Date
November 22, 2021
Installation
Installing Percona XtraDB Cluster.
Percona XtraDB Cluster 8.0.25-15.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.25-15 for more details on these changes.
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#release-highlights","title":"Release Highlights","text":"A Non-Blocking Operation method for online schema changes in Percona XtraDB Cluster. This mode is similar to the Total Order Isolation (TOI) mode, whereas a data definition language (DDL) statement (for example, ALTER
) is executed on all nodes in sync. The difference is that in the NBO mode, the DDL statement acquires a metadata lock that locks the table or schema at a late stage of the operation, which is a more efficient locking strategy.
Note that the NBO mode is a Tech Preview feature. We do not recommend that you use this mode in a production environment. For more information, see Non-Blocking Operations (NBO) method for Online Scheme Upgrades (OSU).
The notable changes and bug fixes introduced by Oracle MySQL include the following:
The sql_slave_skip_counter
variable only counts the events in the uncompressed transaction payloads.
A possible deadlock occurred when system variables, read by different clients, were being updated and the binary log file was rotated.
Sometimes the aggregate function results could return values from a previous statement when using a prepared SELECT
statement with a WHERE
clause that is always false.
For more information, see the MySQL 8.0.24 Release Notes and the MySQL 8.0.25 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#new-features","title":"New Features","text":"PXC-3275: Fix the documented APT package list to match the packages listed in the Repo. (Thanks to user Hubertus Krogmann for reporting this issue)
PXC-3387: Performing an intermediate commit does not call wsrep commit hooks.
PXC-3449: Fix for missing dependencies which were carried out in replication writesets caused Galera to fail.
PXC-3589: Documentation: Updates in Percona XtraDB Cluster Limitations that the LOCK=NONE
clause is no longer allowed in an INPLACE ALTER TABLE statement. (Thanks to user Brendan Byrd for reporting this issue)
PXC-3611: Fix that deletes any keyring.backup file if it exists for SST operation.
PXC-3608: Fix a concurrency issue that caused a server exit when attempting to read a foreign key.
PXC-3637: Changes the service start sequence to allow more time for mounting local or remote directories with large amounts of data. (Thanks to user Eric Gonyea for reporting this issue)
PXC-3679: Fix for SST failures after the update of socat to \u20181.7.4.0\u2019.
PXC-3706: Fix adds a wait to wsrep_after_commit
until the first thread in a group commit queue is available.
PXC-3729: Fix for conflicts when multiple applier threads execute certified transactions and are in High-Priority transaction mode.
PXC-3731: Fix for incorrect writes to the binary log when sql_log_bin=0
.
PXC-3733: Fix to clean the WSREP transaction state if a transaction is requested to be re-prepared.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html","title":"Percona XtraDB Cluster 8.0.26-16.1","text":"Date
January 17, 2022
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#release-highlights","title":"Release Highlights","text":"The following are a number of the notable fixes for MySQL 8.0.26, provided by Oracle, and included in this release:
The TLSv1 and TLSv1.1 connection protocols are deprecated.
Identifiers with specific terms, such as \u201cmaster\u201d or \u201cslave\u201d are deprecated and replaced. See the Functionality Added or Changed section in the 8.0.26 Release Notes for a list of updated identifiers. The following terms have been changed:
The identifier master
is changed to source
The identifier slave
is changed to replica
The identifier multithreaded slave
(mts
) is changed to multithreaded applier
(mta
)
When using semisynchronous replication, either the old version or the new version of system variables and status variables are available. You cannot have both versions installed on an instance. The old system variables are available when you use the old version, but the new ones are not. The new system variables are available when you use the new version, but the old values are not.
In an upgrade from an earlier version to 8.0.26, enable the rpl_semi_sync_source
plugin and the rpl_semi_sync_replica
plugin after the upgrade has been completed. Enabling these plugins before all of the nodes are upgraded may cause data inconsistency between the nodes.
For the source, the rpl_semi_sync_master
plugin (seminsync_master.so
library) is the old version and the rpl_semi_sync_source
plugin(semisync_source.so
library) is the new version.
For the client, the rpl_semi_sync_slave
plugin (semisync_slave.so
library) is the old version and the rpl_semi_sync_replica
plugin (semisync_replica.so
library) is the new version
For more information, see the MySQL 8.0.26 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3824: An incorrect directive in Systemd Unit File (Thanks to Jim Lohiser for reporting this issue)
PXC-3706: A fix for a race condition in group commit queue (Thanks to Kevin Sauter for reporting this issue)
PXC-3739: The FLUSH TABLES FOR EXPORT
lock is released when the session ends.
PXC-3628: The server allowed altering the storage engine to MyISAM
for mysql.wsrep_* tables.
PXC-3731: A fix for when the user deletes data from the source but does not want that data deleted from the replica. The sql_log_bin=0
command had no effect and the deleted rows were replicated and written into the binary log.
PXC-3857: The following system variables are renamed. The old variables are deprecated and may be removed in a future version.
wsrep_slave_threads
renamed as wsrep_applier_threads
wsrep_slave_FK_checks
renamed as wsrep_applier_FK_checks
wsrep_slave_UK_checks
renamed as wsrep_applier_UK_checks
wsrep_restart_slave
renamed as wsrep_restart_replica
PXC-3039: No useful error messages if an SSL-disabled node tried to join an SSL-enabled cluster
PXC-3093: A completed SST Transfer is incorrectly logged by garbd. The timing is incorrect.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html","title":"Percona XtraDB Cluster 8.0.27-18.1","text":"Date: April 11, 2022
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#release-highlights","title":"Release Highlights","text":"The following lists a number of the bug fixes for MySQL 8.0.27, provided by Oracle, and included in Percona Server for MySQL:
The default_authentication_plugin
is deprecated. Support for this plugin may be removed in future versions. Use the authentication_policy
variable.
The binary
operator is deprecated. Support for this operator may be removed in future versions. Use CAST(... AS BINARY)
.
Fix for when a parent table initiates a cascading SET NULL
operation on the child table, the virtual column can be set to NULL instead of the value derived from the parent table.
Find the full list of bug fixes and changes in the MySQL 8.0.27 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3831: Allowed certified high-priority transactions to proceed without lock conflicts.
PXC-3766: Stopped every XtraBackup-based SST operation from executing the version-check procedure.
PXC-3704: Based the maximum writeset size on repl.max_ws_size
when both repl.max_ws_size
and wsrep_max_ws_size
values are passed during startup.
The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html","title":"Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#release-highlights","title":"Release Highlights","text":"Improvements and bug fixes introduced by Oracle for MySQL 8.0.28 and included in Percona Server for MySQL are the following:
The ASCII
shortcut for CHARACTER SET latin1
and UNICODE
shortcut for CHARACTER SET ucs2
are deprecated and raise a warning to use CHARACTER SET
instead. The shortcuts will be removed in a future version.
A stored function and a loadable function with the same name can share the same namespace. Add the schema name when invoking a stored function in the shared namespace. The server generates a warning when function names collide.
InnoDB supports ALTER TABLE ... RENAME COLUMN
operations when using ALGORITHM=INSTANT
.
The limit for innodb_open_files
now includes temporary tablespace files. The temporary tablespace files were not counted in the innodb_open_files
in previous versions.
Find the full list of bug fixes and changes in the MySQL 8.0.28 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3923: When the read_only
or super_read_only
option was set, the ANALYZE TABLE
command removed the node from the cluster.
PXC-3388: Percona XtraDB Cluster stuck in a DESYNCED state after joiner was killed.
PXC-3609: The binary log status variables were updated when the binary log was disabled. Now the status variables are not registered when the binary log is disabled. (Thanks to Stofa Kenida for reporting this issue.)
PXC-3848: The cluster node exited when the CURRENT_USER()
function was used. (Thanks to Steffen B\u00f6hme for reporting this issue.)
PXC-3872: A user without system_user privilege was able to drop system users. (Thanks to user jackc for reporting this issue.)
PXC-3918: Galera Arbitrator (garbd) could not connect if the Percona XtraDB Cluster server used encrypted connections. The issue persisted even when the proper certificates were specified.
PXC-3924: Using TRUNCATE TABLE X
and INSERT INTO X
options when the foreign keys were disabled and violated caused the HA_ERR_FOUND_DUPP_KEY
error on a slave node. (Thanks to Daniel Barton\u00ed\u010dek for reporting this issue.)
PXC-3062: The wsrep_incoming_addresses
status variable did not contain the garbd IP address.
The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/release-notes_index.html","title":"Percona XtraDB Cluster 8.0 release notes index","text":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)
Percona XtraDB Cluster 8.0.35-27 (2024-01-17)
Percona XtraDB Cluster 8.0.34-26 (2023-11-01)
Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)
Percona XtraDB Cluster 8.0.33-25 (2023-08-02)
Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)
Percona XtraDB Cluster 8.0.32-24 (2023-04-18)
Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)
Percona XtraDB Cluster 8.0.31-23 (2023-03-14)
Percona XtraDB Cluster 8.0.30-22 (2022-12-28)
Percona XtraDB Cluster 8.0.29-21 (2022-09-12)
Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)
Percona XtraDB Cluster 8.0.27-18.1 (2022-04-11)
Percona XtraDB Cluster 8.0.26-16.1 (2022-01-17)
Percona XtraDB Cluster 8.0.25-15.1 (2021-11-22)
Percona XtraDB Cluster 8.0.23-14.1 (2021-06-09)
Percona XtraDB Cluster 8.0.22-13.1 (2021-03-22)
Percona XtraDB Cluster 8.0.21-12.1 (2020-12-28)
Percona XtraDB Cluster 8.0.20-11.3 (2020-10-22)
Percona XtraDB Cluster 8.0.20-11.2 (2020-10-09)
Percona XtraDB Cluster 8.0.20-11 (2020-10-01)
Percona XtraDB Cluster 8.0.19-10 (2020-06-18)
Percona XtraDB Cluster 8.0.18-9.3 (2020-04-29)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Percona XtraDB Cluster 8.0 Documentation","text":"This documentation is for the latest release: Percona XtraDB Cluster 8.0.36-28 (Release Notes).
Percona XtraDB Cluster is a database clustering solution for MySQL. It ensures high availability, prevents downtime and data loss, and provides linear scalability for a growing environment.
"},{"location":"index.html#features-of-percona-xtradb-cluster","title":"Features of Percona XtraDB Cluster","text":"Feature Details Synchronous replication Data is written to all nodes simultaneously, or not written at all in case of a failure even on a single node Multi-source replication Any node can trigger a data update. True parallel replication Multiple threads on replica performing replication on row level Automatic node provisioning You simply add a node and it automatically syncs. Data consistency No more unsynchronized nodes. PXC Strict Mode Avoids the use of tech preview features and unsupported features Configuration script for ProxySQL Percona XtraDB Cluster includes theproxysql-admin
tool that automatically configures Percona XtraDB Cluster nodes using ProxySQL. Automatic configuration of SSL encryption Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic
variable that enables automatic configuration of SSL encryption Optimized Performance Percona XtraDB Cluster performance is optimized to scale with a growing production workload Percona XtraDB Cluster 8.0 is fully compatible with MySQL Server Community Edition 8.0 and Percona Server for MySQL 8.0. The cluster has the following compatibilities:
Data - use the data created by any MySQL variant.
Application - no changes or minimal application changes are required for an application to use the cluster.
See also
Overview of changes in the most recent PXC release
Important changes in Percona XtraDB Cluster 8.0
MySQL Community Edition
Percona Server for MySQL
How We Made Percona XtraDB Cluster Scale
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"add-node.html","title":"Add nodes to cluster","text":"New nodes that are properly configured are provisioned automatically. When you start a node with the address of at least one other running node in the wsrep_cluster_address
variable, this node automatically joins and synchronizes with the cluster.
Note
Any existing data and configuration will be overwritten to match the data and configuration of the DONOR node. Do not join several nodes at the same time to avoid overhead due to large amounts of traffic when a new node joins.
Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer and the wsrep_sst_method
variable is always set to xtrabackup-v2
.
Start the second node using the following command:
[root@pxc2 ~]# systemctl start mysql\n
After the server starts, it receives SST automatically.
To check the status of the second node, run the following:
mysql@pxc2> show status like 'wsrep%';\n
Expected output +----------------------------------+--------------------------------------------------+\n| Variable_name | Value |\n+----------------------------------+--------------------------------------------------+\n| wsrep_local_state_uuid | a08247c1-5807-11ea-b285-e3a50c8efb41 |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | |\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_provider_capabilities | :MULTI_MASTER:CERTIFICATION: ... |\n| wsrep_provider_name | Galera |\n| wsrep_provider_vendor | Codership Oy <info@codership.com> |\n| wsrep_provider_version | 4.3(r752664d) |\n| wsrep_ready | ON |\n| ... | ... | \n+----------------------------------+--------------------------------------------------+\n75 rows in set (0.00 sec)\n
The output of SHOW STATUS
shows that the new node has been successfully added to the cluster. The cluster size is now 2 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.
If the state of the second node is Synced
as in the previous example, then the node received full SST is synchronized with the cluster, and you can proceed to add the next node.
Note
If the state of the node is Joiner
, it means that SST hasn\u2019t finished. Do not add new nodes until all others are in Synced
state.
To add the third node, start it as usual:
[root@pxc3 ~]# systemctl start mysql\n
To check the status of the third node, run the following:
mysql@pxc3> show status like 'wsrep%';\n
The output shows that the new node has been successfully added to the cluster. Cluster size is now 3 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | ... |\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"add-node.html#next-steps","title":"Next steps","text":"When you add all nodes to the cluster, you can verify replication by running queries and manipulating data on nodes to see if these changes are synchronized across the cluster.
"},{"location":"add-node.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"apparmor.html","title":"Enable AppArmor","text":"Percona XtraDB Cluster contains several AppArmor profiles. Multiple profiles allow for easier maintenance because the mysqld
profile is decoupled from the SST script profile. This separation allows the introduction of other SST methods or scripts with their own profiles.
The following profiles are available:
An extended version of the Percona Server profile which allows the execution of the SST script.
An xtrabackup-v2 SST script profile located in /etc/apparmor.d/usr.bin.wsrep_sst_xtrabackup-v2
The mysqld
profile allows the execution of the SST script in PUx mode through the /{usr/}bin/wsrep_sst_* PUx rule. The SST script's own profile is applied if the script has one; if it does not, the script runs in unconfined mode. The system administrator can change the execution mode to Pix, which falls back to inherited mode when the SST script profile is absent.
The mysqld
profile and the SST
script profile can be adjusted, such as moving the data directory, in the same way as modifying the mysqld profile in Percona Server.
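For example, assuming the apparmor-utils package is installed, you can confirm that both profiles are loaded after adjusting them (a quick check, not part of the official procedure):
$ sudo aa-status | grep -E 'mysqld|wsrep_sst'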
pxc_encrypt_cluster_traffic
","text":"By default, the pxc_encrypt_cluster_traffic
is ON
, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory since that location is overwritten during the SST process.
Set up the certificates describes the certificate setup.
The following AppArmor profile rule grants access to certificates located in /etc/mysql/certs. You must be root or have sudo
privileges.
# Allow config access\n /etc/mysql/** r,\n
This rule is present in both profiles (usr.sbin.mysqld and usr.bin.wsrep_sst_xtrabackup-v2). The rule allows the administrator to store the certificates anywhere inside of the /etc/mysql/ directory. If the certificates are located outside of the specified directory, you must add an additional rule which allows access to the certificates in both profiles. The rule must have the path to the certificates location, like the following:
# Allow config access\n /path/to/certificates/* r,\n
The server certificates must be accessible to the mysql user and readable only by that user.
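For example, a minimal sketch of restricting a key file stored under /etc/mysql/certs so that only the mysql user can read it (the file name is illustrative):
$ sudo chown mysql:mysql /etc/mysql/certs/server-key.pem
$ sudo chmod 600 /etc/mysql/certs/server-key.pem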
"},{"location":"apparmor.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"apt.html","title":"Install Percona XtraDB Cluster on Debian or Ubuntu","text":"Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.
The packages are available in the official Percona software repository and on the download page. It is recommended to install Percona XtraDB Cluster from the official repository using APT.
We gather Telemetry data in the Percona packages and Docker images.
"},{"location":"apt.html#prerequisites","title":"Prerequisites","text":"See also
For more information, see Enabling AppArmor.
"},{"location":"apt.html#install-from-repository","title":"Install from Repository","text":"Update the sytem:
sudo apt update\n
Install the necessary packages:
sudo apt install -y wget gnupg2 lsb-release curl\n
Download the repository package
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n
Install the package with dpkg
:
sudo dpkg -i percona-release_latest.generic_all.deb\n
Refresh the local cache to update the package information:
sudo apt update\n
Enable the release
repository for Percona XtraDB Cluster:
sudo percona-release setup pxc80\n
Install the cluster:
sudo apt install -y percona-xtradb-cluster\n
During the installation, you are requested to provide a password for the root
user on the database node.
Note
If needed, you could also install the percona-xtradb-cluster-full
meta-package, which includes the following additional packages:
libperconaserverclient21
libperconaserverclient21-dev
percona-xtradb-cluster
percona-xtradb-cluster-client
percona-xtradb-cluster-common
percona-xtradb-cluster-dbg
percona-xtradb-cluster-full
percona-xtradb-cluster-garbd
percona-xtradb-cluster-garbd-debug
percona-xtradb-cluster-server
percona-xtradb-cluster-server-debug
percona-xtradb-cluster-source
percona-xtradb-cluster-test
After you install Percona XtraDB Cluster and stop the mysql
service, configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.
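For example, on Debian or Ubuntu the service can be stopped with:
$ sudo systemctl stop mysql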
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"bootstrap.html","title":"Bootstrap the first node","text":"After you configure all PXC nodes, initialize the cluster by bootstrapping the first node. The initial node must contain all the data that you want to be replicated to other nodes.
Bootstrapping implies starting the first node without any known cluster addresses: if the wsrep_cluster_address
variable is empty, Percona XtraDB Cluster assumes that this is the first node and initializes the cluster.
Instead of changing the configuration, start the first node using the following command:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
When you start the node using the previous command, it runs in bootstrap mode with wsrep_cluster_address=gcomm://
. This tells the node to initialize the cluster with wsrep_cluster_conf_id
variable set to 1
. After you add other nodes to the cluster, you can then restart this node as normal, and it will use standard configuration again.
Note
A service started with mysql@bootstrap
must be stopped using the same command. For example, the systemctl stop mysql
command does not stop an instance started with the mysql@bootstrap
command.
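For example, once the other nodes have joined, a sketch of switching the first node from bootstrap mode back to the normal service:
[root@pxc1 ~]# systemctl stop mysql@bootstrap.service
[root@pxc1 ~]# systemctl start mysql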
To make sure that the cluster has been initialized, run the following:
mysql@pxc1> show status like 'wsrep%';\n
The output shows that the cluster size is 1 node, it is the primary component, the node is in the Synced
state, it is fully connected and ready for write-set replication.
+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ... | ... |\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n| ... | ... |\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n| ... | ... |\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"bootstrap.html#next-steps","title":"Next steps","text":"After initializing the cluster, you can add other nodes.
"},{"location":"bootstrap.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"certification.html","title":"Certification in Percona XtraDB Cluster","text":"Percona XtraDB Cluster replicates actions executed on one node to all other nodes in the cluster, and makes it fast enough to appear as if it is synchronous (virtually synchronous).
The following types of actions exist:
DDL actions are executed using Total Order Isolation (TOI). We can ignore Rolling Schema Upgrades (RSU).
DML actions are executed using normal Galera replication protocol.
Note
This manual page assumes the reader is aware of TOI and MySQL replication protocol.
DML (INSERT
, UPDATE
, and DELETE
) operations effectively change the state of the database, and all such operations are recorded in XtraDB by registering a unique object identifier (key) for each change (an update or a new addition).
append_key
operation. An append_key
operation registers the key of the data object that has undergone change by the transaction. The key for rows can be represented in three parts as db_name
, table_name
, and pk_columns_for_table
(if pk
is absent, a hash of the complete row is calculated). This ensures that there is quick and short meta information about the rows that this transaction has touched or modified. This information is passed on as part of the write-set for certification to all the nodes in the cluster while the transaction is in the commit phase.
For a transaction to commit, it has to pass XtraDB/Galera certification, ensuring that transactions don\u2019t conflict with any other changes posted on the cluster group/channel. Certification will add the keys modified by a given transaction to its own central certification vector (CCV), represented by cert_index_ng
. If the said key is already part of the vector, then conflict resolution checks are triggered.
Conflict resolution traces the reference transaction (that last modified this item in the cluster group). If this reference transaction is from some other node, that suggests the same data was modified by the other node, and changes of that node have been certified by the local node that is executing the check. In such cases, the transaction that arrived later fails to certify.
Changes made to database objects are bin-logged. This is similar to how MySQL does it for replication with its Source-Replica ecosystem, except that a packet of changes from a given transaction is created and named as a write-set.
Once the client/user issues a COMMIT
, Percona XtraDB Cluster will run a commit hook. Commit hooks ensure the following:
Flush the binary logs.
Check if the transaction needs replication (not needed for read-only transactions like SELECT
).
If a transaction needs replication, then it invokes a pre-commit hook in the Galera ecosystem. During this pre-commit hook, a write-set is written in the group channel by a replicate operation. All nodes (including the one that executed the transaction) subscribe to this group-channel and read the write-set.
gcs_recv_thread
is the first to receive the packet, which is then processed through different action handlers.
Each packet read from the group-channel is assigned an id
, which is a counter maintained locally by each node, in sync with the group. When any new node joins the group/cluster, a seed-id for it is initialized to the current active id of the group/cluster.
There is an inherent assumption/protocol enforcement that all nodes read the packet from a channel in the same order, and that way even though each packet doesn\u2019t carry id
information, it is inherently established using the locally maintained id
value.
The following example shows what happens in a common situation. act_id
is incremented and assigned only for totally ordered actions, and only in primary state (skip messages while in state exchange).
$ rcvd->id = ++group->act_id_;\n
Note
This is an amazing way to solve the problem of id coordination in multi-source systems. Otherwise, a node would have to first get an id from a central system or through a separately agreed protocol, and then use it for the packet, thereby doubling the round-trip time.
"},{"location":"certification.html#conflicts","title":"Conflicts","text":"The following happens if two nodes get ready with their packet at same time:
Both nodes will be allowed to put the packet on the channel. That means the channel will see packets from different nodes queued one behind another.
The following example shows what happens if two nodes modify same set of rows. Nodes are in sync until this point:
$ create -> insert (1,2,3,4)\n
Node 1: update i = i + 10;
Node 2: update i = i + 100;
Let\u2019s associate transaction ID (trx-id
) for an update transaction that is executed on Node 1 and Node 2 in parallel. Although the real algorithm is more involved (with uuid
+ seqno
), it is conceptually the same, so we are using trx_id
.
Node 1: update action: trx-id=n1x
Node 2: update action: trx-id=n2x
Both node packets are added to the channel, but the transactions are conflicting. The protocol says: FIRST WRITE WINS.
So in this case, whoever is first to write to the channel will get certified. Let\u2019s say Node 2 is first to write the packet, and then Node 1 makes changes immediately after it.
Note
Each node subscribes to all packages, including its own package.
Node 2 will see its own packet and will process it. Then it will see the packet from Node 1, try to certify it, and fail.
Node 1 will see the packet from Node 2 and will process it.
Note
InnoDB allows isolation, so Node 1 can process packets from Node 2 independent of Node 1 transaction changes
Then Node 1 will see its own packet, try to certify it, and fail.
Note
Even though the packet originated from Node 1, it will undergo certification to catch cases like these.
The certification protocol can be described using the previous example. The central certification vector (CCV) is updated to reflect reference transaction.
n2x
.Node 2 then gets the packet from Node 1 for certification. The packet key is already present in CCV, with the reference transaction set it to n2x
, whereas write-set proposes setting it to n1x
. This causes a conflict, which in turn causes the transaction from Node 1 to fail the certification test.
n2x
.Using the same case as explained above, Node 1 certification also rejects the packet from Node 1.
This suggests that the node doesn\u2019t need to wait for certification to complete, but just needs to ensure that the packet is written to the channel. The applier transaction will always win and the local conflicting transaction will be rolled back.
The following example shows what happens if one of the nodes has local changes that are not synced with the group:
mysql> create (id primary key) -> insert (1), (2), (3), (4);\n
Expected output node-1: wsrep_on=0; insert (5); wsrep_on=1\nnode-2: insert(5).\n
The insert(5)
statement will generate a write-set that will then be replicated to Node 1. Node 1 will try to apply it but will fail with duplicate-key-error
, because 5 already exists.
XtraDB will flag this as an error, which would eventually cause Node 1 to shut down.
"},{"location":"certification.html#increment-gtid","title":"Increment GTID","text":"GTID is incremented only when the transaction passes certification, and is ready for commit. That way errant packets don\u2019t cause GTID to increment.
Also, group packet id
is not confused with GTID. Without errant packets, it may seem that these two counters are the same, but they are not related.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"compile.html","title":"Compile and install from Source Code","text":"If you want to compile Percona XtraDB Cluster, you can find the source code on GitHub. Before you begin, make sure that the following packages are installed:
apt yum Gitgit
git
SCons scons
scons
GCC gcc
gcc
g++ g++
gcc-c++
OpenSSL openssl
openssl
Check check
check
CMake cmake
cmake
Bison bison
bison
Boost libboost-all-dev
boost-devel
Asio libasio-dev
asio-devel
Async I/O libaio-dev
libaio-devel
ncurses libncurses5-dev
ncurses-devel
Readline libreadline-dev
readline-devel
PAM libpam-dev
pam-devel
socat socat
socat
curl libcurl-dev
libcurl-devel
You will likely have all or most of the packages already installed. If you are not sure, run one of the following commands to install any missing dependencies:
For Debian or Ubuntu:
$ sudo apt install -y git scons gcc g++ openssl check cmake bison \\\nlibboost-all-dev libasio-dev libaio-dev libncurses5-dev libreadline-dev \\\nlibpam-dev socat libcurl-dev\n
For Red Hat Enterprise Linux or CentOS:
$ sudo yum install -y git scons gcc gcc-c++ openssl check cmake bison \\\nboost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel \\\nsocat libcurl-devel\n
To compile Percona XtraDB Cluster from source code:
Clone the Percona XtraDB Cluster repository:
$ git clone https://github.com/percona/percona-xtradb-cluster.git\n
Important
Clone the latest repository or update it to the latest state. An old codebase may not be compatible with the build script.
Check out the 8.0
branch and initialize submodules:
$ cd percona-xtradb-cluster\n$ git checkout 8.0\n$ git submodule update --init --recursive\n
Download the matching Percona XtraBackup 8.0 tarball (*.tar.gz) for your operating system from Percona Downloads.
The following example extracts the Percona XtraBackup 8.0.32-25 tar.gz file to the target directory ./pxc-build
:
$ tar -xvf percona-xtrabackup-8.0.32-25-Linux-x86_64.glibc2.17.tar.gz -C ./pxc-build\n
Run the build script ./build-ps/build-binary.sh
. By default, it attempts building into the current directory. Specify the target output directory, such as ./pxc-build
:
$ mkdir ./pxc-build\n$ ./build-ps/build-binary.sh ./pxc-build\n
When the compilation completes, pxc-build
contains a tarball, such as Percona-XtraDB-Cluster-8.0.x86_64.tar.gz
, that you can deploy on your system.
Note
The exact version and release numbers may differ.
"},{"location":"compile.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-cluster-rhel.html","title":"Configure a cluster on Red Hat-based distributions","text":"This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Red Hat or CentOS 7 servers, using the packages from Percona repositories.
Node 1
Host name: percona1
IP address: 192.168.70.71
Node 2
Host name: percona2
IP address: 192.168.70.72
Node 3
Host name: percona3
IP address: 192.168.70.73
The procedure described in this tutorial requires the following:
All three nodes have Red Hat Enterprise Linux or CentOS 7 installed.
The firewall on all nodes is configured to allow connecting to ports 3306, 4444, 4567 and 4568.
SELinux on all nodes is disabled.
Different from previous versions
The variable wsrep_sst_auth
has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session
. During SST, the user mysql.pxc.sst.user
and the role mysql.pxc.sst.role
are created on the donor node.
Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS.
"},{"location":"configure-cluster-rhel.html#step-2-configuring-the-first-node","title":"Step 2. Configuring the first node","text":"Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.
Make sure that the configuration file /etc/my.cnf
on the first node (percona1
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended.\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 1 address\nwsrep_node_address=192.168.70.71\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n
Start the first node with the following command:
[root@percona1 ~] # systemctl start mysql@bootstrap.service\n
The previous command will start the cluster with initial wsrep_cluster_address
variable set to gcomm://
. If the node or MySQL are restarted later, there will be no need to change the configuration file.
After the first node has been started, cluster status can be checked with the following command:
mysql> show status like 'wsrep%';\n
This output shows that the cluster has been successfully bootstrapped.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n
Copy the automatically generated temporary password for the superuser account:
$ sudo grep 'temporary password' /var/log/mysqld.log\n
Use this password to log in as root:
$ mysql -u root -p\n
Change the password for the superuser account and log out. For example:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'r00tP@$$';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
Make sure that the configuration file /etc/my.cnf
on the second node (percona2
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 2 address\nwsrep_node_address=192.168.70.72\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the second node with the following command:
[root@percona2 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on both nodes. The following is an example of status from the second node (percona2
):
mysql> show status like 'wsrep%';\n
The output shows that the new node has been successfully added to the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
Make sure that the MySQL configuration file /etc/my.cnf
on the third node (percona3
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.73\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the third node with the following command:
[root@percona3 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on all three nodes. The following is an example of status from the third node (percona3
):
mysql> show status like 'wsrep%';\n
The output confirms that the third node has joined the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
To test replication, lets create a new database on second node, create a table for that database on the third node, and add some records to the table on the first node.
Create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@percona3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-cluster-ubuntu.html","title":"Configure a cluster on Debian or Ubuntu","text":"This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Ubuntu 14 LTS servers, using the packages from Percona repositories.
Node 1
Host name: pxc1
IP address: 192.168.70.61
Node 2
Host name: pxc2
IP address: 192.168.70.62
Node 3
Host name: pxc3
IP address: 192.168.70.63
The procedure described in this tutorial requires he following:
All three nodes have Ubuntu 14 LTS installed.
Firewall on all nodes is configured to allow connecting to ports 3306, 4444, 4567 and 4568.
AppArmor profile for MySQL is disabled.
Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Debian or Ubuntu.
Note
Debian/Ubuntu installation prompts for root password. For this tutorial, set it to Passw0rd
. After the packages have been installed, mysqld
will start automatically. Stop mysqld
on all three nodes using sudo systemctl stop mysql
.
Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.
Make sure that the configuration file /etc/mysql/my.cnf
for the first node (pxc1
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #1 address\nwsrep_node_address=192.168.70.61\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n
Start the first node with the following command:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
This command will start the first node and bootstrap the cluster.
After the first node has been started, cluster status can be checked with the following command:
mysql> show status like 'wsrep%';\n
The following outut shows the cluste status:
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 1 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n
This output shows that the cluster has been successfully bootstrapped.
To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:
mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';\nmysql@pxc1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';\nmysql@pxc1> FLUSH PRIVILEGES;\n
Note
MySQL root account can also be used for performing SST, but it is more secure to use a different (non-root) user for this.
"},{"location":"configure-cluster-ubuntu.html#step-3-configure-the-second-node","title":"Step 3. Configure the second node","text":"Make sure that the configuration file /etc/mysql/my.cnf
on the second node (pxc2
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #2 address\nwsrep_node_address=192.168.70.62\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the second node with the following command:
[root@pxc2 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of status from the second node (pxc2
):
mysql> show status like 'wsrep%';\n
The following output shows that the new node has been successfully added to the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 2 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
Make sure that the MySQL configuration file /etc/mysql/my.cnf
on the third node (pxc3
) contains the following:
[mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.63\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
Start the third node with the following command:
[root@pxc3 ~]# systemctl start mysql\n
After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of status from the third node (pxc3
):
mysql> show status like 'wsrep%';\n
The following output confirms that the third node has joined the cluster.
Expected output+----------------------------+--------------------------------------+\n| Variable_name | Value |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state | 4 |\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
"},{"location":"configure-cluster-ubuntu.html#test-replication","title":"Test replication","text":"To test replication, lets create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.
Create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@percona3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"configure-nodes.html","title":"Configure nodes for write-set replication","text":"After installing Percona XtraDB Cluster on each node, you need to configure the cluster. In this section, we will demonstrate how to configure a three node cluster:
Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63Stop the Percona XtraDB Cluster server. After the installation completes the server is not started. You need this step if you have started the server manually.
$ sudo service mysql stop\n
Edit the configuration file of the first node to provide the cluster settings.
If you use Debian or Ubuntu, edit /etc/mysql/mysql.conf.d/mysqld.cnf
:
wsrep_provider=/usr/lib/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
If you use Red Hat or CentOS, edit /etc/my.cnf
. Note that on these systems you set the wsrep_provider option to a different value:
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
Configure node 1.
wsrep_node_name=pxc1\nwsrep_node_address=192.168.70.61\npxc_strict_mode=ENFORCING\n
Set up node 2 and node 3 in the same way: Stop the server and update the configuration file applicable to your system. All settings are the same except for wsrep_node_name
and wsrep_node_address
.
For node 2
wsrep_node_name=pxc2\nwsrep_node_address=192.168.70.62\n
For node 3
wsrep_node_name=pxc3\nwsrep_node_address=192.168.70.63\n
Set up the traffic encryption settings. Each node of the cluster must use the same SSL certificates.
[mysqld]\nwsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n
Important
In Percona XtraDB Cluster 8.0, the Encrypting Replication Traffic is enabled by default (via the pxc-encrypt-cluster-traffic
variable).
The replication traffic encryption cannot be enabled on a running cluster. If it was disabled before the cluster was bootstrapped, the cluster must be stopped. Then set up the encryption, and bootstrap (see Bootstrapping the First Node
) again.
See also
More information about the security settings in Percona XtraDB Cluster * Security Basics
* Encrypting PXC Traffic
* SSL Automatic Configuration
Here is an example of a full configuration file installed on CentOS to /etc/my.cnf
.
# Template my.cnf for PXC\n# Edit to your requirements.\n[client]\nsocket=/var/lib/mysql/mysql.sock\n[mysqld]\nserver-id=1\ndatadir=/var/lib/mysql\nsocket=/var/lib/mysql/mysql.sock\nlog-error=/var/log/mysqld.log\npid-file=/var/run/mysqld/mysqld.pid\n# Binary log expiration period is 604800 seconds, which equals 7 days\nbinlog_expire_logs_seconds=604800\n######## wsrep ###############\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n# Cluster connection URL contains IPs of nodes\n#If no IP is found, this implies that a new cluster needs to be created,\n#in order to do that you need to bootstrap this node\nwsrep_cluster_address=gcomm://\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n# Slave thread to use\nwsrep_slave_threads=8\nwsrep_log_conflicts\n# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n# Node IP address\n#wsrep_node_address=192.168.70.63\n# Cluster name\nwsrep_cluster_name=pxc-cluster\n#If wsrep_node_name is not specified, then system hostname will be used\nwsrep_node_name=pxc-cluster-node-1\n#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER\npxc_strict_mode=ENFORCING\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
"},{"location":"configure-nodes.html#next-steps-bootstrap-the-first-node","title":"Next Steps: Bootstrap the first node","text":"After you configure all your nodes, initialize Percona XtraDB Cluster by bootstrapping the first node according to the procedure described in Bootstrapping the First Node.
"},{"location":"configure-nodes.html#essential-configuration-variables","title":"Essential configuration variables","text":"wsrep_provider
Specify the path to the Galera library. The location depends on the distribution:
Debian and Ubuntu: /usr/lib/galera4/libgalera_smm.so
Red Hat and CentOS: /usr/lib64/galera4/libgalera_smm.so
wsrep_cluster_name
Specify the logical name for your cluster. It must be the same for all nodes in your cluster.
wsrep_cluster_address
Specify the IP addresses of nodes in your cluster. At least one is required for a node to join the cluster, but it is recommended to list addresses of all nodes. This way if the first node in the list is not available, the joining node can use other addresses.
Note
No addresses are required for the initial node in the cluster. However, it is recommended to specify them and properly bootstrap the first node. This will ensure that the node is able to rejoin the cluster if it goes down in the future.
wsrep_node_name
Specify the logical name for each individual node. If this variable is not specified, the host name will be used.
wsrep_node_address
Specify the IP address of this particular node.
wsrep_sst_method
By default, Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer. xtrabackup-v2
is the only supported option for this variable. This method requires a user for SST to be set up on the initial node.
pxc_strict_mode
PXC Strict Mode is enabled by default and set to ENFORCING
, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.
binlog_format
Galera supports only row-level replication, so set binlog_format=ROW
.
default_storage_engine
Galera fully supports only the InnoDB storage engine. It will not work correctly with MyISAM or any other non-transactional storage engines. Set this variable to default_storage_engine=InnoDB
.
innodb_autoinc_lock_mode
Galera supports only interleaved (2
) lock mode for InnoDB. Setting the traditional (0
) or consecutive (1
) lock mode can cause replication to fail due to unresolved deadlocks. Set this variable to innodb_autoinc_lock_mode=2
.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"copyright-and-licensing-information.html","title":"Copyright and licensing information","text":""},{"location":"copyright-and-licensing-information.html#documentation-licensing","title":"Documentation licensing","text":"Percona XtraDB Cluster documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.
"},{"location":"copyright-and-licensing-information.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"crash-recovery.html","title":"Crash recovery","text":"Unlike the standard MySQL replication, a PXC cluster acts like one logical entity, which controls the status and consistency of each node as well as the status of the whole cluster. This allows maintaining the data integrity more efficiently than with traditional asynchronous replication without losing safe writes on multiple nodes at the same time.
However, there are scenarios where the database service can stop with no node being able to serve requests.
"},{"location":"crash-recovery.html#scenario-1-node-a-is-gracefully-stopped","title":"Scenario 1: Node A is gracefully stopped","text":"In a three node cluster (node A, Node B, node C), one node (node A, for example) is gracefully stopped: for the purpose of maintenance, configuration change, etc.
In this case, the other nodes receive a \u201cgood bye\u201d message from the stopped node and the cluster size is reduced; some properties like quorum calculation or auto increment are automatically changed. As soon as node A is started again, it joins the cluster based on its wsrep_cluster_address
variable in my.cnf
.
If the writeset cache (gcache.size
) on nodes B and/or C still has all the transactions executed while node A was down, joining is possible via IST. If IST is impossible due to missing transactions in donor\u2019s gcache, the fallback decision is made by the donor and SST is started automatically.
Similar to Scenario 1: Node A is gracefully stopped, the cluster size is reduced to 1 \u2014 even the single remaining node C forms the primary component and is able to serve client requests. To get the nodes back into the cluster, you just need to start them.
However, when a new node joins the cluster, node C will be switched to the \u201cDonor/Desynced\u201d state as it has to provide the state transfer at least to the first joining node. It is still possible to read/write to it during that process, but it may be much slower, which depends on how large amount of data should be sent during the state transfer. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So, it is best to avoid the situation when only one node is up.
If you restart node A and then node B, you may want to make sure note B does not use node A as the state transfer donor: node A may not have all the needed writesets in its gcache. Specify node C node as the donor in your configuration file and start the mysql service:
$ systemctl start mysql\n
See also
Galera Documentation: wsrep_sst_donor option
"},{"location":"crash-recovery.html#scenario-3-all-three-nodes-are-gracefully-stopped","title":"Scenario 3: All three nodes are gracefully stopped","text":"The cluster is completely stopped and the problem is to initialize it again. It is important that a PXC node writes its last executed position to the grastate.dat
file.
By comparing the seqno number in this file, you can see which is the most advanced node (most likely the last stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform the full SST to join the cluster initialized from the less advanced one. As a result, some transactions will be lost). To bootstrap the first node, invoke the startup script like this:
$ systemctl start mysql@bootstrap.service\n
Note
Even though you bootstrap from the most advanced node, the other nodes have a lower sequence number. They will still have to join via the full SST because the Galera Cache is not retained on restart.
For this reason, it is recommended to stop writes to the cluster before its full shutdown, so that all nodes can stop at the same position. See also pc.recovery
.
This is the case when one node becomes unavailable due to power outage, hardware failure, kernel panic, mysqld crash, kill -9 on mysqld pid, etc.
Two remaining nodes notice the connection to node A is down and start trying to re-connect to it. After several timeouts, node A is removed from the cluster. The quorum is saved (2 out of 3 nodes are up), so no service disruption happens. After it is restarted, node A joins automatically (as described in Scenario 1: Node A is gracefully stopped).
"},{"location":"crash-recovery.html#scenario-5-two-nodes-disappear-from-the-cluster","title":"Scenario 5: Two nodes disappear from the cluster","text":"Two nodes are not available and the remaining node (node C) is not able to form the quorum alone. The cluster has to switch to a non-primary mode, where MySQL refuses to serve any SQL queries. In this state, the mysqld process on node C is still running and can be connected to but any statement related to data fails with an error
> SELECT * FROM test.sbtest1;\n
The error message ERROR 1047 (08S01): WSREP has not yet prepared node for application use\n
Reads are possible until node C decides that it cannot access node A and node B. New writes are forbidden.
As soon as the other nodes become available, the cluster is formed again automatically. If node B and node C were just network-severed from node A, but they can still reach each other, they will keep functioning as they still form the quorum.
If node A and node B crashed, you need to enable the primary component on node C manually, before you can bring up node A and node B. The command to do this is:
> SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
This approach only works if the other nodes are down before doing that! Otherwise, you end up with two clusters having different data.
See also
Adding Nodes to Cluster
"},{"location":"crash-recovery.html#scenario-6-all-nodes-went-down-without-a-proper-shutdown-procedure","title":"Scenario 6: All nodes went down without a proper shutdown procedure","text":"This scenario is possible in case of a datacenter power failure or when hitting a MySQL or Galera bug. Also, it may happen as a result of data consistency being compromised where the cluster detects that each node has different data. The grastate.dat
file is not updated and does not contain a valid sequence number (seqno). It may look like this:
$ cat /var/lib/mysql/grastate.dat\n# GALERA saved state\nversion: 2.1\nuuid: 220dcdcb-1629-11e4-add3-aec059ad3734\nseqno: -1\nsafe_to_bootstrap: 0\n
In this case, you cannot be sure that all nodes are consistent with each other. We cannot use safe_to_bootstrap variable to determine the node that has the last transaction committed as it is set to 0 for each node. An attempt to bootstrap from such a node will fail unless you start mysqld
with the --wsrep-recover
parameter:
$ mysqld --wsrep-recover\n
Search the output for the line that reports the recovered position after the node UUID (1122 in this case):
Expected output...\n... [Note] WSREP: Recovered position: 220dcdcb-1629-11e4-add3-aec059ad3734:1122\n...\n
The node where the recovered position is marked by the greatest number is the best bootstrap candidate. In its grastate.dat
file, set the safe_to_bootstrap variable to 1. Then, bootstrap from this node.
Note
After a shutdown, you can boostrap from the node which is marked as safe in the grastate.dat
file.
...\nsafe_to_bootstrap: 1\n...\n
See also
Galera Documentation Introducing the Safe-To-Bootstrap feature in Galera Cluster
In recent Galera versions, the option pc.recovery
(enabled by default) saves the cluster state into a file named gvwstate.dat
on each member node. As the name of this option suggests (pc \u2013 primary component), it saves only a cluster being in the PRIMARY state. An example content of the file may look like this:
cat /var/lib/mysql/gvwstate.dat\nmy_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9\n#vwbeg\nview_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3\nbootstrap: 0\nmember: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0\nmember: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0\nmember: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0\n#vwend\n
We can see a three node cluster with all members being up. Thanks to this new feature, the nodes will try to restore the primary component once all the members start to see each other. This makes the PXC cluster automatically recover from being powered down without any manual intervention! In the logs we will see:
"},{"location":"crash-recovery.html#scenario-7-the-cluster-loses-its-primary-state-due-to-split-brain","title":"Scenario 7: The cluster loses its primary state due to split brain","text":"For the purpose of this example, let\u2019s assume we have a cluster that consists of an even number of nodes: six, for example. Three of them are in one location while the other three are in another location and they lose network connectivity. It is best practice to avoid such topology: if you cannot have an odd number of real nodes, you can use an additional arbitrator (garbd) node or set a higher pc.weight to some nodes. But when the split brain happens any way, none of the separated groups can maintain the quorum: all nodes must stop serving requests and both parts of the cluster will be continuously trying to re-connect.
If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as described in Scenario 5: Two nodes disappear from the cluster
> SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
After this, you are able to work on the manually restored part of the cluster, and the other half should be able to automatically re-join using IST as soon as the network link is restored.
Warning
If you set the bootstrap option on both the separated parts, you will end up with two living cluster instances, with data likely diverging away from each other. Restoring a network link in this case will not make them re-join until the nodes are restarted and members specified in configuration file are connected again.
Then, as the Galera replication model truly cares about data consistency: once the inconsistency is detected, nodes that cannot execute row change statement due to a data difference \u2013 an emergency shutdown will be performed and the only way to bring the nodes back to the cluster is via the full SST
Based on material from Percona Database Performance Blog
This article is based on the blog post Galera replication - how to recover a PXC cluster by Przemys\u0142aw Malkowski: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
"},{"location":"crash-recovery.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"data-at-rest-encryption.html","title":"Data at Rest Encryption","text":""},{"location":"data-at-rest-encryption.html#introduction","title":"Introduction","text":"Data at rest encryption refers to encrypting data stored on a disk on a server. If an unauthorized user accesses the data files from the file system, encryption ensures the user cannot read the file contents. Percona Server allows you to enable, disable, and apply encryptions to the following objects:
File-per-tablespace table
Schema
General tablespace
System tablespace
Temporary table
Binary log files
Redo log files
Undo tablespaces
Doublewrite buffer files
The transit data is defined as data that is transmitted to another node or client. Encrypted transit data uses an SSL connection.
Percona XtraDB Cluster 8.0 supports all data at rest generally-available encryption features available from Percona Server for MySQL 8.0.
"},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_file-plugin","title":"Configure PXC to use keyring_file plugin","text":""},{"location":"data-at-rest-encryption.html#configuration","title":"Configuration","text":"Percona XtraDB Cluster inherits the Percona Server for MySQL behavior to configure the keyring_file
plugin. The following example illustrates using the plugin. Review Use the kerying component or keyring plugin for the latest information on the keyring component and plugin.
Note
The keyring_file plugin should not be used for regulatory compliance.
Install the plugin and add the following options in the configuration file:
[mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=<PATH>/keyring\n
The SHOW PLUGINS
statement checks if the plugin has been successfully loaded.
Note
PXC recommends the same configuration on all cluster nodes, and all nodes should have the keyring configured. A mismatch in the keyring configuration does not allow the JOINER node to join the cluster.
If the user has a bootstrapped node with keyring enabled, then upcoming cluster nodes inherit the keyring (the encrypted key) from the DONOR node.
"},{"location":"data-at-rest-encryption.html#usage","title":"Usage","text":"XtraBackup re-encrypts the data using a transition-key and the JOINER node re-encrypts it using a newly generated master-key.
Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible, as in higher version JOINER can join from lower version DONOR, but not vice-versa.
Percona XtraDB Cluster does not allow the combination of nodes with encryption and nodes without encryption to maintain data consistency. For example, the user creates node-1 with encryption (keyring) enabled and node-2 with encryption (keyring) disabled. If the user attempts to create a table with encryption on node-1, the creation fails on node-2, causing data inconsistency. A node fails to start if it fails to load the keyring plugin.
Note
If the user does not specify the keyring parameters, the node does not know that it must load the keyring. The JOINER node may start, but it eventually shuts down when the DML level inconsistency with encrypted tablespace is detected.
If a node does not have an encrypted tablespace, the keyring is not generated, and the keyring file is empty. Creating an encrypted table on the node generates the keyring.
In an operation that is local to the node, you can rotate the key as needed. The ALTER INSTANCE ROTATE INNODB MASTER KEY
statement is not replicated on cluster.
The JOINER node generates its keyring.
"},{"location":"data-at-rest-encryption.html#compatibility","title":"Compatibility","text":"Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible. A higher version JOINER can join from lower version DONOR, but not vice-versa.
"},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_vault-plugin","title":"Configure PXC to use keyring_vault plugin","text":""},{"location":"data-at-rest-encryption.html#keyring_vault","title":"keyring_vault","text":"The keyring_vault
plugin allows storing the master-key in vault-server (vs. local file as in case of keyring_file
).
Warning
The rsync tool does not support the keyring_vault
. Any rsync-based SST on a JOINER is aborted if the keyring_vault
is configured.
Configuration options are the same as upstream. The my.cnf
configuration file should contain the following options:
[mysqld]\nearly-plugin-load=\"keyring_vault=keyring_vault.so\"\nkeyring_vault_config=\"<PATH>/keyring_vault_n1.conf\"\n
Also, keyring_vault_n1.conf
file should contain the following:
vault_url = http://127.0.0.1:8200\nsecret_mount_point = secret1\ntoken = e0345eb4-35dd-3ddd-3b1e-e42bb9f2525d\nvault_ca = /data/keyring_vault_confs/vault_ca.crt\n
The detailed description of these options can be found in the upstream documentation.
The Vault server is external to the cluster, so make sure the PXC node is able to reach it.
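One quick way to confirm reachability from a node, assuming the vault_url shown in the example configuration above, is to query the Vault health endpoint (an illustrative check):
$ curl -s http://127.0.0.1:8200/v1/sys/health\n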
Note
Percona XtraDB Cluster recommends using the same keyring_plugin type on all cluster nodes. Mixing the keyring plugin types is recommended only while transitioning from keyring_file
-> keyring_vault
or vice-versa.
Nodes do not need to refer to the same Vault server; whichever Vault server is used must be accessible from the respective node. Nodes also do not need to use the same mount point.
If the node is not able to reach or connect to the Vault server, an error is reported during server boot, and the node refuses to start:
The warning message2018-05-29T03:54:33.859613Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:54:33.977145Z 0 [ERROR] Plugin keyring_vault reported:\n'CURL returned this error code: 7 with error message : Failed to connect\nto 127.0.0.1 port 8200: Connection refused'\n
If some nodes of the cluster are unable to connect to vault-server, this relates only to these specific nodes: e.g., if node-1 can connect, and node-2 cannot connect, only node-2 refuses to start. Also, if the server has a pre-existing encrypted object and on reboot, the server fails to connect to the vault-server, the object is not accessible.
If the Vault server is accessible but the authentication credentials are incorrect, the consequences are the same, and the corresponding error looks like the following:
The warning message2018-05-29T03:58:54.461911Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:58:54.577477Z 0 [ERROR] Plugin keyring_vault reported:\n'Could not retrieve list of keys from Vault. Vault has returned the\nfollowing error(s): [\"permission denied\"]'\n
If the Vault server is accessible but the configured mount point is wrong, there is no error during server boot, but creating an encrypted table on the node fails:
mysql> CREATE TABLE t1 (c1 INT, PRIMARY KEY pk(c1)) ENCRYPTION='Y';\n
Expected output ERROR 3185 (HY000): Can't find master key from keyring, please check keyring\nplugin is loaded.\n\n... [ERROR] Plugin keyring_vault reported: 'Could not write key to Vault. ...\n... [ERROR] Plugin keyring_vault reported: 'Could not flush keys to keyring'\n
"},{"location":"data-at-rest-encryption.html#mix-keyring-plugin-types","title":"Mix keyring plugin types","text":"With XtraBackup introducing transition-key logic, it is now possible to mix and match keyring plugins. For example, the user has node-1 configured to use the keyring_file
plugin and node-2 configured to use keyring_vault
.
Note
Percona recommends the same configuration for all the nodes of the cluster. A mix and match of keyring plugin types is recommended only during the transition from one keyring type to another.
"},{"location":"data-at-rest-encryption.html#temporary-file-encryption","title":"Temporary file encryption","text":""},{"location":"data-at-rest-encryption.html#migrate-keys-between-keyring-keystores","title":"Migrate keys between keyring keystores","text":"Percona XtraDB Cluster supports key migration between keystores. The migration can be performed offline or online.
"},{"location":"data-at-rest-encryption.html#offline-migration","title":"Offline migration","text":"In offline migration, the node to migrate is shut down, and the migration server takes care of migrating keys for the said server to a new keystore.
For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file
. To migrate the n2 node to use keyring_vault
, use the following procedure:
Shut down the n2 node.
Start the Migration Server (mysqld
with a special option).
The Migration Server copies the keys from the n2 keyring file and adds them to the vault server.
Start the n2 node with the vault parameter, and the keys are available.
Here is how the migration server output should look:
Expected output/dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node2/keyring \\\n--keyring-migration-destination=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/vault/keyring_vault.cnf &\n\n... [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use\n --explicit_defaults_for_timestamp server option (see documentation for more details).\n... [Note] --secure-file-priv is set to NULL. Operations related to importing and\n exporting data are disabled\n... [Warning] WSREP: Node is not a cluster node. Disabling pxc_strict_mode\n... [Note] /dev/shm/pxc80/bin/mysqld (mysqld 8.0-debug) starting as process 5710 ...\n... [Note] Keyring migration successful.\n
On a successful migration, the destination keystore receives additional migrated keys (pre-existing keys in the destination keystore are not touched or removed). The source keystore retains the keys as the migration performs a copy operation and not a move operation.
If the migration fails, the destination keystore is unchanged.
"},{"location":"data-at-rest-encryption.html#online-migration","title":"Online migration","text":"In online migration, the node to migrate is kept running, and the migration server takes care of migrating keys for the said server to a new keystore by connecting to the node.
For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file
. Migrate the n3 node to use keyring_vault
using the following procedure:
Start the Migration Server (mysqld
with a special option).
The Migration Server copies the keys from the n3 keyring file and adds them to the vault server.
Restart the n3 node with the vault parameter, and the keys are available.
/dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/keyring_vault3.cnf \\\n--keyring-migration-destination=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node3/keyring \\\n--keyring-migration-host=localhost \\\n--keyring-migration-user=root \\\n--keyring-migration-port=16300 \\\n--keyring-migration-password='' &\n
On a successful migration, the destination keystore receives the additional migrated keys. Any pre-existing keys in the destination keystore are unchanged. The source keystore retains the keys as the migration performs a copy operation and not a move operation.
If the migration fails, the destination keystore is not changed.
"},{"location":"data-at-rest-encryption.html#migration-server-options","title":"Migration server options","text":"--keyring-migration-source
: The source keyring plugin that manages the keys to be migrated.
--keyring-migration-destination
: The destination keyring plugin to which the migrated keys are to be copied
Note
For offline migration, no additional key migration options are needed.
--keyring-migration-host
: The host where the running server is located. This host is always the local host.
--keyring-migration-user
, --keyring-migration-password
: The username and password for the account used to connect to the running server.
--keyring-migration-port
: Used for TCP/IP connections, the running server\u2019s port number used to connect.
--keyring-migration-socket
: Used for Unix socket file or Windows named pipe connections, the running server socket or named pipe used to connect.
Prerequisite for migration:
Make sure to pass required keyring options and other configuration parameters for the two keyring plugins. For example, if keyring_file
is one of the plugins, you must explicitly configure the keyring_file_data
system variable in the my.cnf file.
Other non-keyring options may be required as well. One way to specify these options is by using --defaults-file
to name an option file that contains the required options.
[mysqld]\nbasedir=/dev/shm/pxc80\ndatadir=/dev/shm/pxc80/copy_mig\nlog-error=/dev/shm/pxc80/logs/copy_mig.err\nsocket=/tmp/copy_mig.sock\nport=16400\n
See also
Encrypt traffic documentation
Percona Server for MySQL Documentation: Data-at-Rest Encryption https://www.percona.com/doc/percona-server/8.0/security/data-at-rest-encryption.html#data-at-rest-encryption
"},{"location":"data-at-rest-encryption.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"docker.html","title":"Running Percona XtraDB Cluster in a Docker Container","text":"Docker images of Percona XtraDB Cluster are hosted publicly on Docker Hub at https://hub.docker.com/r/percona/percona-xtradb-cluster/.
For more information about using Docker, see the Docker Docs. Make sure that you are using the latest version of Docker. The ones provided via apt
and yum
may be outdated and cause errors.
We gather Telemetry data in the Percona packages and Docker images.
Note
By default, Docker pulls the image from Docker Hub if the image is not available locally.
The image contains only the most essential binaries for Percona XtraDB Cluster to run. Some utilities included in a Percona Server for MySQL or MySQL installation might be missing from the Percona XtraDB Cluster Docker image.
The following procedure describes how to set up a simple 3-node cluster for evaluation and testing purposes. Do not use these instructions in a production environment because the MySQL certificates generated in this procedure are self-signed. For a production environment, you should generate and store the certificates to be used by Docker.
In this procedure, all of the nodes run Percona XtraDB Cluster 8.0 in separate containers on one host:
Create a ~/pxc-docker-test/config directory.
Create a custom.cnf file with the following contents, and place the file in the new directory:
[mysqld]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n\n[client]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/client-cert.pem\nssl-key = /cert/client-key.pem\n\n[sst]\nencrypt = 4\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n
Create a cert directory and generate self-signed SSL certificates on the host node:
$ mkdir -m 777 -p ~/pxc-docker-test/cert\n$ docker run --name pxc-cert --rm -v ~/pxc-docker-test/cert:/cert \\\npercona/percona-xtradb-cluster:8.0 mysql_ssl_rsa_setup -d /cert\n
Create a Docker network:
$ docker network create pxc-network\n
Bootstrap the cluster (create the first node):
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n --name=pxc-node1 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
Join the second node:
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n -e CLUSTER_JOIN=pxc-node1 \\\n --name=pxc-node2 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
Join the third node:
$ docker run -d \\\n -e MYSQL_ROOT_PASSWORD=test1234# \\\n -e CLUSTER_NAME=pxc-cluster1 \\\n -e CLUSTER_JOIN=pxc-node1 \\\n --name=pxc-node3 \\\n --net=pxc-network \\\n -v ~/pxc-docker-test/cert:/cert \\\n -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n percona/percona-xtradb-cluster:8.0\n
To verify the cluster is available, do the following:
Access the MySQL client. For example, on the first node:
$ sudo docker exec -it pxc-node1 /usr/bin/mysql -uroot -ptest1234#\n
Expected output mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 12\n...\nYou are enforcing ssl connection via unix socket. Please consider\nswitching ssl off as it does not make connection via unix socket\nany more secure\n\nmysql>\n
View the wsrep status variables:
mysql> show status like 'wsrep%';\n
Expected output +------------------------------+-------------------------------------------------+\n| Variable_name | Value |\n+------------------------------+-------------------------------------------------+\n| wsrep_local_state_uuid | 625318e2-9e1c-11e7-9d07-aee70d98d8ac |\n...\n| wsrep_local_state_comment | Synced |\n...\n| wsrep_incoming_addresses | 172.18.0.2:3306,172.18.0.3:3306,172.18.0.4:3306 |\n...\n| wsrep_cluster_conf_id | 3 |\n| wsrep_cluster_size | 3 |\n| wsrep_cluster_state_uuid | 625318e2-9e1c-11e7-9d07-aee70d98d8ac |\n| wsrep_cluster_status | Primary |\n| wsrep_connected | ON |\n...\n| wsrep_ready | ON |\n+------------------------------+-------------------------------------------------+\n59 rows in set (0.02 sec)\n
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"encrypt-traffic.html","title":"Encrypt PXC traffic","text":"There are two kinds of traffic in Percona XtraDB Cluster:
Client-server traffic (traffic between client applications and cluster nodes),
Replication traffic, that includes SST, IST, write-set replication, and various service messages.
Percona XtraDB Cluster supports encryption for all types of traffic. Replication traffic encryption can be configured either automatically or manually.
"},{"location":"encrypt-traffic.html#encrypt-client-server-communication","title":"Encrypt client-server communication","text":"Percona XtraDB Cluster uses the underlying MySQL encryption mechanism to secure communication between client applications and cluster nodes.
MySQL generates default key and certificate files and places them in the data directory. You can override auto-generated files with manually created ones, as described in the section Generate keys and certificates manually.
The auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes.
Specify the following settings in the my.cnf
configuration file for each node:
[mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n\n[client]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/client-cert.pem\nssl-key=/etc/mysql/certs/client-key.pem\n
After it is restarted, the node uses these files to encrypt communication with clients. MySQL clients require only the second part of the configuration to communicate with cluster nodes.
MySQL generates the default key and certificate files and places them in the data directory. You can either use them or generate new certificates. To generate new certificates, refer to the Generate keys and certificates manually section.
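To confirm that a client session is actually encrypted, you can check the session SSL status from the client (an illustrative check, not part of the original procedure):
mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';\n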
"},{"location":"encrypt-traffic.html#encrypt-replication-traffic","title":"Encrypt replication traffic","text":"Replication traffic refers to the inter-node traffic which includes the SST traffic, IST traffic, and replication traffic.
The traffic of each type is transferred via a different channel, and so it is important to configure secure channels for all 3 variants to completely secure the replication traffic.
Percona XtraDB Cluster supports a single configuration option which helps to secure the complete replication traffic, and is often referred to as SSL automatic configuration. You can also configure the security of each channel by specifying independent parameters.
"},{"location":"encrypt-traffic.html#ssl-automatic-configuration","title":"SSL automatic configuration","text":"The automatic configuration of the SSL encryption needs a key and certificate files. MySQL generates a default key and certificate files and places them in the data directory.
Important
It is important that your cluster use the same SSL certificates on all nodes.
"},{"location":"encrypt-traffic.html#enable-pxc-encrypt-cluster-traffic","title":"Enablepxc-encrypt-cluster-traffic
","text":"Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic
variable that enables automatic configuration of SSL encryption, thereby encrypting SST, IST, and replication traffic.
By default, pxc-encrypt-cluster-traffic
is enabled, thereby using a secured channel for replication. This variable is not dynamic, so it cannot be changed at runtime.
When enabled, pxc-encrypt-cluster-traffic
has the effect of applying the following settings: encrypt, ssl_key, ssl-ca, ssl-cert.
Setting pxc-encrypt-cluster-traffic=ON
has the effect of applying the following settings in the my.cnf
configuration file:
[mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n
For wsrep_provider_options
, only the mentioned options are affected (socket.ssl_key
, socket.ssl_cert
, and socket.ssl_ca
), the rest is not modified.
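To check whether the option is in effect on a running node, you can query the corresponding system variable (an illustrative check):
mysql> SHOW VARIABLES LIKE 'pxc_encrypt_cluster_traffic';\n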
Important
Disabling pxc-encrypt-cluster-traffic
The default value of pxc-encrypt-cluster-traffic
helps improve the security of your system.
When pxc-encrypt-cluster-traffic
is not enabled, anyone with the access to your network can connect to any PXC node either as a client or as another node joining the cluster. This potentially lets them query your data or get a complete copy of it.
If you must disable pxc-encrypt-cluster-traffic
, you need to stop the cluster and update the [mysqld]
section of the configuration file on each node, setting pxc-encrypt-cluster-traffic=OFF
. Then, restart the cluster.
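A minimal sketch of the change to make in the configuration file on each node:
[mysqld]\npxc-encrypt-cluster-traffic=OFF\n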
The automatic configuration of the SSL encryption needs key and certificate files. MySQL generates default key and certificate files and places them in the data directory. These auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes. You can also override the auto-generated files with manually created ones, as covered in Generate keys and certificates manually.
The necessary key and certificate files are first searched at the ssl-ca
, ssl-cert
, and ssl-key
options under [mysqld]
. If these options are not set, the data directory is searched for ca.pem
, server-cert.pem
, and server-key.pem
files.
Note
The [sst]
section is not searched.
If all three files are found, they are used to configure encryption. If any of the files is missing, a fatal error is generated.
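For example, assuming the default data directory /var/lib/mysql (an assumption for illustration only), you can quickly confirm that the auto-generated files are present:
$ ls /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem\n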
"},{"location":"encrypt-traffic.html#ssl-manual-configuration","title":"SSL manual configuration","text":"If user wants to enable encryption for specific channel only or use different certificates or other mix-match, then user can opt for manual configuration. This helps to provide more flexibility to end-users.
To enable encryption manually, the location of the required key and certificate files should be specified in the Percona XtraDB Cluster configuration. If you do not have the necessary files, see Generate keys and certificates manually.
Note
Encryption settings are not dynamic. To enable it on a running cluster, you need to restart the entire cluster.
There are three aspects of Percona XtraDB Cluster operation where you can enable encryption:
Encrypt SST traffic. This refers to SST traffic during the full data copy from one cluster node (DONOR) to the joining node (JOINER).
Encrypt replication traffic. This refers to all internal Percona XtraDB Cluster communication, such as write-set replication, IST, and various service messages.
Encrypt IST traffic. This refers to copying only the transactions that are missing on the JOINER from the DONOR node.
SST is the full data transfer that usually occurs when a new node (JOINER) joins the cluster and receives data from an existing node (DONOR).
For more information, see State snapshot transfer.
Note
If keyring_file
plugin is used, then SST encryption is mandatory: when copying encrypted data via SST, the keyring must be sent over with the files for decryption. In this case, the following options must be set in my.cnf
on all nodes:
early-plugin-load=keyring_file.so\nkeyring-file-data=/path/to/keyring/file\n
The cluster will not work if keyring configuration across nodes is different.
The only available SST method is xtrabackup-v2
which uses Percona XtraBackup.
This is the only available SST method (the wsrep_sst_method
is always set to xtrabackup-v2
), which uses Percona XtraBackup to perform non-blocking transfer of files. For more information, see Percona XtraBackup SST Configuration.
Encryption mode for this method is selected using the encrypt
option:
encrypt=0
is the default value, meaning that encryption is disabled.
encrypt=4
enables encryption based on key and certificate files generated with OpenSSL. For more information, see Generating Keys and Certificates Manually.
To enable encryption for SST using XtraBackup, specify the location of the keys and certificate files in each node\u2019s configuration under [sst]
:
[sst]\nencrypt=4\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n
Note
SSL clients require DH parameters to be at least 1024 bits, due to the logjam vulnerability. However, versions of socat
earlier than 1.7.3 use 512-bit parameters. If a dhparams.pem
file of required length is not found during SST in the data directory, it is generated with 2048 bits, which can take several minutes. To avoid this delay, create the dhparams.pem
file manually and place it in the data directory before joining the node to the cluster:
$ openssl dhparam -out /path/to/datadir/dhparams.pem 2048\n
For more information, see this blog post.
"},{"location":"encrypt-traffic.html#encrypt-replicationist-traffic","title":"Encrypt replication/IST traffic","text":"Replication traffic refers to the following:
Write-set replication which is the main workload of Percona XtraDB Cluster (replicating transactions that execute on one node to all other nodes).
Incremental State Transfer (IST) which is copying only missing transactions from DONOR to JOINER node.
Service messages which ensure that all nodes are synchronized.
All this traffic is transferred via the same underlying communication channel (gcomm
). Securing this channel will ensure that IST traffic, write-set replication, and service messages are encrypted. (For IST, a separate channel is configured using the same configuration parameters, so 2 sections are described together).
To enable encryption for all these processes, define the paths to the key, certificate and certificate authority files using the following wsrep provider options:
socket.ssl_ca
socket.ssl_cert
socket.ssl_key
To set these options, use the wsrep_provider_options
variable in the configuration file:
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/ca.pem;socket.ssl_cert=/etc/mysql/certs/server-cert.pem;socket.ssl_key=/etc/mysql/certs/server-key.pem\"\n
Note
You must use the same key and certificate files on all nodes, preferably those used for Encrypt client-server communication.
Check upgrade-certificate section on how to upgrade existing certificates.
"},{"location":"encrypt-traffic.html#generate-keys-and-certificates-manually","title":"Generate keys and certificates manually","text":"As mentioned above, MySQL generates default key and certificate files and places them in the data directory. If you want to override these certificates, the following new sets of files can be generated:
Certificate Authority (CA) key and certificate to sign the server and client certificates.
Server key and certificate to secure database server activity and write-set replication traffic.
Client key and certificate to secure client communication traffic.
These files should be generated using OpenSSL.
Note
The Common Name
value used for the server and client keys and certificates must differ from that value used for the CA certificate.
The Certificate Authority is used to verify the signature on certificates.
Generate the CA key file:
$ openssl genrsa 2048 > ca-key.pem\n
Generate the CA certificate file:
$ openssl req -new -x509 -nodes -days 3600\n -key ca-key.pem -out ca.pem\n
Generate the server key file:
$ openssl req -newkey rsa:2048 -days 3600 \\\n -nodes -keyout server-key.pem -out server-req.pem\n
Remove the passphrase:
$ openssl rsa -in server-key.pem -out server-key.pem\n
Generate the server certificate file:
$ openssl x509 -req -in server-req.pem -days 3600 \\\n -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n -out server-cert.pem\n
Generate the client key file:
$ openssl req -newkey rsa:2048 -days 3600 \\\n -nodes -keyout client-key.pem -out client-req.pem\n
Remove the passphrase:
$ openssl rsa -in client-key.pem -out client-key.pem\n
Generate the client certificate file:
$ openssl x509 -req -in client-req.pem -days 3600 \\\n -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n -out client-cert.pem\n
To verify that the server and client certificates are correctly signed by the CA certificate, run the following command:
$ openssl verify -CAfile ca.pem server-cert.pem client-cert.pem\n
If the verification is successful, you should see the following output:
server-cert.pem: OK\nclient-cert.pem: OK\n
"},{"location":"encrypt-traffic.html#failed-validation-caused-by-matching-cn","title":"Failed validation caused by matching CN","text":"Sometimes, an SSL configuration may fail if the certificate and the CA files contain the same .
To check if this is the case, run the openssl
command as follows and verify that the CN field differs for the Subject and Issuer lines.
$ openssl x509 -in server-cert.pem -text -noout\n
Incorrect values
Certificate:\nData:\nVersion: 1 (0x0)\nSerial Number: 1 (0x1)\nSignature Algorithm: sha256WithRSAEncryption\nIssuer: CN=www.percona.com, O=Database Performance., C=US\n...\nSubject: CN=www.percona.com, O=Database Performance., C=AU\n...\n
To obtain a more compact output, run openssl
specifying -subject and -issuer parameters:
$ openssl x509 -in server-cert.pem -subject -issuer -noout\n
Expected output subject= /CN=www.percona.com/O=Database Performance./C=AU\nissuer= /CN=www.percona.com/O=Database Performance./C=US\n
"},{"location":"encrypt-traffic.html#deploy-keys-and-certificates","title":"Deploy keys and certificates","text":"Use a secure method (for example, scp
or sftp
) to send the key and certificate files to each node. Place them under the /etc/mysql/certs/
directory or similar location where you can find them later.
Note
Make sure that this directory is protected with proper permissions. Most likely, you only want to give read permissions to the user running mysqld
.
The following files are required:
ca.pem
)This file is used to verify signatures.
server-key.pem
and server-cert.pem
)These files are used to secure database server activity and write-set replication traffic.
client-key.pem
and client-cert.pem
)These files are required only if the node should act as a MySQL client. For example, if you are planning to perform SST using mysqldump
.
Note
Upgrade certificates subsection covers the details on upgrading certificates, if necessary.
"},{"location":"encrypt-traffic.html#upgrade-certificates","title":"Upgrade certificates","text":"The following procedure shows how to upgrade certificates used for securing replication traffic when there are two nodes in the cluster.
Restart the first node with the socket.ssl_ca
option set to a combination of the old and new certificates in a single file.
For example, you can merge contents of old-ca.pem
and new-ca.pem
into upgrade-ca.pem
as follows:
$ cat old-ca.pem > upgrade-ca.pem && \\\ncat new-ca.pem >> upgrade-ca.pem\n
Set the wsrep_provider_options
variable as follows:
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/upgrade-ca.pem;socket.ssl_cert=/etc/mysql/certs/old-cert.pem;socket.ssl_key=/etc/mysql/certs/old-key.pem\"\n
Restart the second node with the socket.ssl_ca
, socket.ssl_cert
, and socket.ssl_key
options set to the corresponding new certificate files.
$ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/new-ca.pem;socket.ssl_cert=/etc/mysql/certs/new-cert.pem;socket.ssl_key=/etc/mysql/certs/new-key.pem\"\n
Restart the first node with the new certificate files, as in the previous step.
You can remove the old certificate files.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"failover.html","title":"Cluster failover","text":"Cluster membership is determined simply by which nodes are connected to the rest of the cluster; there is no configuration setting explicitly defining the list of all possible cluster nodes. Therefore, every time a node joins the cluster, the total size of the cluster is increased and when a node leaves (gracefully) the size is decreased.
The size of the cluster is used to determine the required votes to achieve quorum. A quorum vote is done when a node or nodes are suspected to no longer be part of the cluster (they do not respond). This no response timeout is the evs.suspect_timeout
setting in the wsrep_provider_options
(default 5 sec), and when a node goes down ungracefully, write operations will be blocked on the cluster for slightly longer than that timeout.
Once a node (or nodes) is determined to be disconnected, the remaining nodes cast a quorum vote, and if the majority of nodes from before the disconnect are still connected, that partition remains up. In the case of a network partition, some nodes will be alive and active on each side of the network disconnect. In this case, only the side with quorum will continue. The partition(s) without quorum will change to non-primary state.
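You can see which state a node ended up in by checking its cluster status; the value reports Primary or non-Primary (an illustrative check):
mysql> SHOW STATUS LIKE 'wsrep_cluster_status';\n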
As a consequence, it is not possible to have safe automatic failover in a two-node cluster, because the failure of one node will cause the remaining node to become non-primary. Moreover, any cluster with an even number of nodes (say two nodes in two different switches) has some possibility of a split brain situation, when neither partition is able to retain quorum if the connection between them is lost, and so both become non-primary.
Therefore, for automatic failover, the rule of 3s is recommended. It applies at various levels of your infrastructure, depending on how far the cluster is spread out to avoid single points of failure. For example:
A cluster on a single switch should have 3 nodes
A cluster spanning switches should be spread evenly across at least 3 switches
A cluster spanning networks should span at least 3 networks
A cluster spanning data centers should span at least 3 data centers
These rules will prevent split brain situations and ensure automatic failover works correctly.
"},{"location":"failover.html#use-an-arbitrator","title":"Use an arbitrator","text":"If it is too expensive to add a third node, switch, network, or datacenter, you should use an arbitrator. An arbitrator is a voting member of the cluster that can receive and relay replication, but it does not persist any data, and runs its own daemon instead of mysqld
. Placing even a single arbitrator in a 3rd location can add split brain protection to a cluster that is spread across only two nodes/locations.
It is important to note that the rule of 3s applies only to automatic failover. In the event of a 2-node cluster (or in the event of some other outage that leaves a minority of nodes active), the failure of one node will cause the other to become non-primary and refuse operations. However, you can recover the node from non-primary state using the following command:
SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n
This will tell the node (and all nodes still connected to its partition) that it can become a primary cluster. However, this is only safe to do when you are sure there is no other partition operating in primary as well, or else Percona XtraDB Cluster will allow those two partitions to diverge (and you will end up with two databases that are impossible to re-merge automatically).
For example, assume there are two data centers, where one is primary and one is for disaster recovery, with an even number of nodes in each. When an extra arbitrator node is run only in the primary data center, the following high availability features will be available:
Auto-failover of any single node or nodes within the primary or secondary data center
Failure of the secondary data center would not cause the primary to go down (because of the arbitrator)
Failure of the primary data center would leave the secondary in a non-primary state.
If a disaster-recovery failover has been executed, you can tell the secondary data center to bootstrap itself with a single command, but disaster-recovery failover remains in your control.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"faq.html","title":"Frequently asked questions","text":""},{"location":"faq.html#how-do-i-report-bugs","title":"How do I report bugs?","text":"All bugs can be reported on JIRA. Please submit error.log
files from all the nodes.
For auto-increment,\u00a0Percona XtraDB Cluster changes auto_increment_offset
for each new node. In a single-node workload, locking is handled in the same way as InnoDB. In case of write load on several nodes, Percona XtraDB Cluster uses optimistic locking and the application may receive lock error in response to COMMIT
query.
When a node crashes, after restarting, it will copy the whole dataset from another\u00a0node (if there were changes to data since the crash).
"},{"location":"faq.html#how-can-i-check-the-galera-node-health","title":"How can I check the Galera node health?","text":"To check the health of a Galera node, use the following query:
mysql> SELECT 1 FROM dual;\n
The following results of the previous query are possible:
You get the row with id=1
(node is healthy)
Unknown error (node is online, but Galera is not connected/synced with the cluster)
Connection error (node is not online)
You can also check a node\u2019s health with the clustercheck
script. First set up the clustercheck
user:
mysql> CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD\n'*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
mysql> GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';\n
You can then check a node\u2019s health by running the clustercheck
script:
$ /usr/bin/clustercheck clustercheck password 0\n
If the node is running, you should get the following status:
HTTP/1.1 200 OK\nContent-Type: text/plain\nConnection: close\nContent-Length: 40\n\nPercona XtraDB Cluster Node is synced.\n
If the node is not synced or is offline, the status will look like this:
HTTP/1.1 503 Service Unavailable\nContent-Type: text/plain\nConnection: close\nContent-Length: 44\n\nPercona XtraDB Cluster Node is not synced.\n
Note
The clustercheck
script has the following syntax:
<user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
Percona XtraDB Cluster populates write set in memory before replication, and this sets the limit for the size of transactions that make sense. There are wsrep variables for maximum row count and maximum size of write set to make sure that the server does not run out of memory.
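Both limits can be inspected together on a running node (an illustrative check):
mysql> SHOW VARIABLES LIKE 'wsrep_max_ws%';\n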
"},{"location":"faq.html#is-it-possible-to-have-different-table-structures-on-the-nodes","title":"Is it possible to have different table structures on the nodes?","text":"For example, if there are four nodes, with four tables: sessions_a
, sessions_b
, sessions_c
, and sessions_d
, and you want each table in a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.
The quorum mechanism in\u00a0Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.
The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the sum weight of the nodes in a new component strictly exceeds half that of the preceding Primary Component, minus the nodes which left gracefully.
The mechanism is described in detail in Galera documentation.
"},{"location":"faq.html#how-would-the-quorum-mechanism-handle-split-brain","title":"How would the quorum mechanism handle split brain?","text":"The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve a split brain. The minimal recommendation is to have 3 nodes. However, it is possibile to allow a node to handle traffic with the following option:
wsrep_provider_options=\"pc.ignore_sb = yes\"\n
"},{"location":"faq.html#why-a-node-stops-accepting-commands-if-the-other-one-fails-in-a-2-node-setup","title":"Why a node stops accepting commands if the other one fails in a 2-node setup?","text":"This is expected behavior to prevent split brain. For more information, see previous question or Galera documentation.
"},{"location":"faq.html#is-it-possible-to-set-up-a-cluster-without-state-transfer","title":"Is it possible to set up a cluster without state transfer?","text":"It is possible in two ways:
By default, Galera reads starting position from a text file <datadir>/grastate.dat
. Make this file identical on all nodes, and there will be no state transfer after starting a node.
Use the wsrep_start_position
variable to start the nodes with the same UUID:seqno
value.
You may need to open up to four ports if you are using a firewall:
Regular MySQL port (default is 3306).
Port for group communication (default is 4567). It can be changed using the following option:
wsrep_provider_options =\"gmcast.listen_addr=tcp://0.0.0.0:4010; \"\n
Port for State Snapshot Transfer (default is 4444). It can be changed using the following option:
wsrep_sst_receive_address=10.11.12.205:5555\n
Port for Incremental State Transfer (default is port for group communication + 1 or 4568). It can be changed using the following option:
wsrep_provider_options = \"ist.recv_addr=10.11.12.206:7777; \"\n
Percona XtraDB Cluster does not support \u201casync\u201d mode, all commits are synchronous on all nodes. To be precise, the commits are \u201cvirtually\u201d synchronous, which means that the transaction should pass certification on nodes, not physical commit. Certification means a guarantee that the transaction does not have conflicts with other transactions on the corresponding node.
"},{"location":"faq.html#does-it-work-with-regular-mysql-replication","title":"Does it work with regular MySQL replication?","text":"Yes. On the node you are going to use as source, you should enable log-bin
 and log-slave-updates
options.
Try to disable SELinux with the following command:
$ echo 0 > /selinux/enforce\n
"},{"location":"faq.html#what-does-nc-invalid-option-d-in-the-ssterr-log-file-mean","title":"What does \u201cnc: invalid option \u2013 \u2018d\u2019\u201d in the sst.err log file mean?","text":"This error is specific to Debian and Ubuntu. Percona XtraDB Cluster uses netcat-openbsd
package. This dependency has been fixed. Future releases of Percona XtraDB Cluster will be compatible with any netcat
(see bug PXC-941).
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"garbd-howto.html","title":"Set up Galera arbitrator","text":"The size of a cluster increases when a node joins the cluster and decreases when a node leaves. A cluster reacts to replication problems with inconsistency voting. The size of the cluster determines the required votes to achieve a quorum. If a node no longer responds and is disconnected from the cluster the remaining nodes vote. The majority of the nodes that vote are considered to be in the cluster.
The arbitrator is important if you have an even number of nodes remaining in the cluster. The arbitrator keeps the number of nodes as an odd number, which avoids the split-brain situation.
A Galera Arbitrator is a lightweight member of a Percona XtraDB Cluster. This member can vote but does not do any replication and is not included in flow control calculations. The Galera Arbitrator is a separate daemon called garbd
. You can start this daemon separately from the cluster and run this daemon either as a service or from the shell. You cannot configure this daemon using the my.cnf
file.
Note
For more information on how to set up a cluster you can read in the Configuring Percona XtraDB Cluster on Ubuntu or Configuring Percona XtraDB Cluster on CentOS manuals.
"},{"location":"garbd-howto.html#installation","title":"Installation","text":"Galera Arbitrator does not need a dedicated server and can be installed on a machine running other applications. The server must have good network connectivity.
Galera Arbitrator can be installed from Percona\u2019s repository on Debian/Ubuntu distributions with the following command:
root@ubuntu:~# apt install percona-xtradb-cluster-garbd\n
Galera Arbitrator can be installed from Percona\u2019s repository on RedHat or derivative distributions with the following command:
[root@centos ~]# yum install percona-xtradb-cluster-garbd\n
"},{"location":"garbd-howto.html#start-garbd-and-configuration","title":"Start garbd
and configuration","text":"Note
On Percona XtraDB Cluster 8.0, SSL is enabled by default. To run the Galera Arbitrator, you must copy the SSL certificates and configure garbd
to use the certificates.
It is necessary to specify the cipher. In this example, it is AES128-SHA256
. If you do not specify the cipher, an error occurs with a \u201cTerminate called after throwing an instance of \u2018gnu::NotSet\u2019\u201d message.
For more information, see socket.ssl_cipher
When starting from the shell, you can set the parameters from the command line or edit the configuration file. This is an example of starting from the command line:
$ garbd --group=my_ubuntu_cluster \\\n--address=\"gcomm://192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\" \\\n--option=\"socket.ssl=YES; socket.ssl_key=/etc/ssl/mysql/server-key.pem; \\\nsocket.ssl_cert=/etc/ssl/mysql/server-cert.pem; \\\nsocket.ssl_ca=/etc/ssl/mysql/ca.pem; \\\nsocket.ssl_cipher=AES128-SHA256\"\n
To avoid entering the options each time you start garbd
, edit the options in the configuration file. To configure Galera Arbitrator on Ubuntu/Debian, edit the /etc/default/garb
file. On RedHat or derivative distributions, the configuration can be found in /etc/sysconfig/garb
file.
The configuration file should look like this after the installation and before you have added your parameters:
# Copyright (C) 2013-2015 Codership Oy\n# This config file is to be sourced by garb service script.\n\n# REMOVE THIS AFTER CONFIGURATION\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\n# GALERA_NODES=\"\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\n# GALERA_GROUP=\"\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"\"\n
Add the parameter information about the cluster. For this document, we use the cluster information from Configuring Percona XtraDB Cluster on Ubuntu.
Note
Please note that you need to remove the # REMOVE THIS AFTER CONFIGURATION
line before you can start the service.
# This config file is to be sourced by garb service script.\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\nGALERA_NODES=\"192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\nGALERA_GROUP=\"my_ubuntu_cluster\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"socket.ssl_cert=/etc/ssl/mysql/server-key.pem;socket./etc/ssl/mysql/server-key.pem\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"/var/log/garbd.log\"\n
You can now start the Galera Arbitrator daemon (garbd
) by running:
root@server:~# service garbd start\n
Expected output [ ok ] Starting /usr/bin/garbd: :.\n
Note
On systems that run systemd
as the default system and service manager, use systemctl
instead of service
to invoke the command. Currently, both are supported.
root@server:~# systemctl start garb\n
root@server:~# service garb start\n
Expected output [ ok ] Starting /usr/bin/garbd: :.\n
Additionally, you can check the arbitrator
status by running:
root@server:~# service garbd status\n
Expected output [ ok ] garb is running.\n
root@server:~# service garb status\n
Expected output [ ok ] garb is running.\n
"},{"location":"garbd-howto.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"gcache-record-set-cache-difference.html","title":"Understand GCache and Record-Set cache","text":"In Percona XtraDB Cluster, there is a concept of GCache and Record-Set cache (which can also be called transaction write-set cache). The use of these two caches is often confusing if you are running long transactions, because both of them result in the creation of disk-level files. This manual describes what their main differences are.
"},{"location":"gcache-record-set-cache-difference.html#record-set-cache","title":"Record-Set cache","text":"When you run a long-running transaction on any particular node, it will try to append a key for each row that it tries to modify (the key is a unique identifier for the row {db,table,pk.columns}
). This information is cached in out-write-set, which is then sent to the group for certification.
Keys are cached in HeapStore (which has page-size=64K
and total-size=4MB
). If the transaction data-size outgrows this limit, then the storage is switched from Heap to Page (which has page-size=64MB
and total-limit=free-space-on-disk
).
All these limits are non-configurable, but having a memory-page size greater than 4MB per transaction can cause things to stall due to memory pressure, so this limit is reasonable. This is another limitation to address when Galera supports large transactions.
The same long-running transaction will also generate binlog data that also appends to out-write-set on commit (HeapStore->FileStore
). This data can be significant, as it is a binlog image of rows inserted/updated/deleted by the transaction. The wsrep_max_ws_size
variable controls the size of this part of the write-set. The threshold doesn\u2019t consider size allocated for caching-keys and the header.
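To see the current limit on a node (an illustrative check):
mysql> SHOW VARIABLES LIKE 'wsrep_max_ws_size';\n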
If FileStore
is used, it creates a file on the disk (with names like xxxx_keys
and xxxx_data
) to store the cache data. These files are kept until the transaction is committed, so their lifetime is tied to the lifetime of the transaction.
When the node is done with the transaction and is about to commit, it will generate the final-write-set using the two files (if the data size grew enough to use FileStore
) plus HEADER
, and will publish it to the cluster for certification.
The native node executing the transaction will also act as subscription node, and will receive its own write-set through the cluster publish mechanism. This time, the native node will try to cache write-set into its GCache. How much data GCache retains is controlled by the GCache configuration.
"},{"location":"gcache-record-set-cache-difference.html#gcache","title":"GCache","text":"GCache holds the write-set published on the cluster for replication. The lifetime of write-set in GCache is not transaction-linked.
When a JOINER
node needs an IST, it will be serviced through this GCache (if possible).
GCache will also create the files to disk. You can read more about it here.
At any given point in time, the native node has two copies of the write-set: one in GCache and another in Record-Set Cache.
For example, lets say you INSERT/UPDATE
2 million rows in a table with the following schema.
(int, char(100), char(100) with pk (int, char(100))\n
It will create write-set key/data files in the background similar to the following:
-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000000\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000001\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000002\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_keys.000000\n
"},{"location":"gcache-record-set-cache-difference.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"gcache-write-set-cache-encryption.html","title":"GCache encryption and Write-Set cache encryption","text":"These features are tech preview. Before using these features in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.
"},{"location":"gcache-write-set-cache-encryption.html#gcache-and-write-set-cache-encryption","title":"GCache and Write-Set cache encryption","text":"Enabling this feature encrypts the Galera GCache and Write-Set cache files with a File Key.
GCache has a RingBuffer on-disk file to manage write-sets. The keyring only stores the Master Key which is used to encrypt the File Key used by the RingBuffer file. The encrypted File Key is stored in the RingBuffer\u2019s preamble. The RingBuffer file of GCache is non-volatile, which means this file survives a restart. The File Key is not stored for GCache off-pages and Write-Set cache files.
See also
For more information, see Understanding GCache and Record-set Cache, and the Percona Database Performance Blog: All you need to know about GCache
Sample preamble key-value pairsVersion: 2\nGID: 3afaa71d-6665-11ed-98de-2aba4aabc65e\nsynced: 0\nenc_version: 1\nenc_encrypted: 1\nenc_mk_id: 3\nenc_mk_const_id: 3ad045a2-6665-11ed-a49d-cb7b9d88753f\nenc_mk_uuid: 3ad04c8e-6665-11ed-a947-c7e346da147f\nenc_fk_id: S4hRiibUje4v5GSQ7a+uuS6NBBX9+230nsPHeAXH43k=\nenc_crc: 279433530\n
"},{"location":"gcache-write-set-cache-encryption.html#key-descriptions","title":"Key descriptions","text":"The following table describes the encryption keys defined in the preamble. All other keys in the preamble are not related to encryption.
Key Descriptionenc_version
The encryption version enc_encrypted
If the GCache is encrypted or not enc_mk_id
A part of the Master Key ID. Rotating the Master Key increments the sequence number. enc_mk_const_id
A part of the Master Key ID, a constant Universally unique identifier (UUID). This option remains constant for the duration of the galera.gcache
file and simplifies matching the Master Key inside the keyring to the instance that generated the keys. Deleting the galera.gcache
changes the value of this key. enc_mk_uuid
The first Master Key or if Galera detects that the preamble is inconsistent, which causes a full GCache reset and a new Master Key is required, generates this UUID. enc_fk_id
The File Key ID encrypted with the Master Key. enc_crc
The cyclic redundancy check (CRC) calculated from all encryption-related keys."},{"location":"gcache-write-set-cache-encryption.html#controlling-encryption","title":"Controlling encryption","text":"Encryption is controlled using the wsrep_provider_options.
Variable name Default value Allowed valuesgcache.encryption
off on/off gcache.encryption_cache_page_size
32KB 2-512 gcache.encryption_cache_size
16MB 2 - 512 allocator.disk_pages_encryption
off on/off allocator.encryption_cache_page_size
32KB allocator.encryption_cache_size
16MB"},{"location":"gcache-write-set-cache-encryption.html#rotate-the-gcache-master-key","title":"Rotate the GCache Master Key","text":"GCache and Write-Set cache encryption uses either a keyring plugin or a keyring component. This plugin or component must be loaded.
Store the keyring file outside the data directory when using a keyring plugin or a keyring component.
mysql> ALTER INSTANCE ROTATE GCACHE MASTER KEY;\n
"},{"location":"gcache-write-set-cache-encryption.html#variable-descriptions","title":"Variable descriptions","text":""},{"location":"gcache-write-set-cache-encryption.html#gcache-encryption","title":"GCache encryption","text":"The following sections describe the variables related to GCache encryption. All variables are read-only.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption","title":"gcache.encryption","text":"Enable or disable GCache cache encryption.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_page_size","title":"gcache.encryption_cache_page_size","text":"The size of the GCache encryption page. The value must be multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
"},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_size","title":"gcache.encryption_cache_size","text":"Every encrypted file has an encryption.cache, which consists of pages. Use gcache.encryption_cache_size
to configure the encryption.cache size.
Configure the page size in the cache with gcache.encryption_cache_page_size
.
The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x gcache.encryption_cache_page_size.
The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.
"},{"location":"gcache-write-set-cache-encryption.html#write-set-cache-encryption","title":"Write-Set cache encryption","text":"The following sections describe the variables related to Write-Set cache encryption. All variables are read-only.
"},{"location":"gcache-write-set-cache-encryption.html#allocatordisk_pages_encryption","title":"allocator.disk_pages_encryption","text":"Enable or disable the Write-Set cache encryption.
"},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_page_size","title":"allocator.encryption_cache_page_size","text":"The size of the encryption cache for Write-Set pages. The value must be multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
"},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_size","title":"allocator.encryption_cache_size","text":"Every Write-Set encrypted file has an encryption.cache, which consists of pages. Use allocator.encryption_cache_size
to configure the size of the encryption.cache
.
Configure the page size in the cache with allocator.encryption_cache_page_size
.
The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x allocator.encryption_cache_page_size.
The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.
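For illustration only, the encryption options above can be combined in the wsrep_provider_options variable in my.cnf; the option names come from the table in the Controlling encryption section, while the exact values depend on your environment:
[mysqld]\nwsrep_provider_options="gcache.encryption=on;allocator.disk_pages_encryption=on"\n
The cache size and page size options (for example, gcache.encryption_cache_size) can be appended to the same semicolon-separated list if the defaults do not fit your workload.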
"},{"location":"gcache-write-set-cache-encryption.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"get-started-cluster.html","title":"Get started with Percona XtraDB Cluster","text":"This guide describes the procedure for setting up Percona XtraDB Cluster.
Examples provided in this guide assume a setup of three Percona XtraDB Cluster nodes, a common choice for evaluation and testing:
Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63Note
Avoid creating a cluster with two or any even number of nodes, because this can lead to split brain.
The following procedure provides an overview with links to details for every step:
It is recommended to install from official Percona repositories:
On Red Hat and CentOS, install using YUM.
On Debian and Ubuntu, install using APT.
Configure all nodes with the settings required for write-set replication.
These settings include the path to the Galera library, the addresses of the other nodes, and so on; a configuration sketch is shown after this overview.
This must be the node with your main database, which will be used as the data source for the cluster.
Data on new nodes joining the cluster is overwritten in order to synchronize it with the cluster.
Although cluster initialization and node provisioning is performed automatically, it is a good idea to ensure that changes on one node actually replicate to other nodes.
To complete the deployment of the cluster, a high-availability proxy is required. We recommend installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries.
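For illustration, a minimal write-set replication section in my.cnf for the three-node example above might look like the following sketch. The library path, cluster name, node names, and addresses are assumptions; adjust them to your environment and repeat the configuration on each node with its own wsrep_node_name and wsrep_node_address:
[mysqld]\nwsrep_provider=/usr/lib/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\nwsrep_node_name=pxc1\nwsrep_node_address=192.168.70.61\nwsrep_sst_method=xtrabackup-v2\nbinlog_format=ROW\ndefault_storage_engine=InnoDB\ninnodb_autoinc_lock_mode=2\npxc_strict_mode=ENFORCING\n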
"},{"location":"get-started-cluster.html#percona-monitoring-and-management","title":"Percona Monitoring and Management","text":"Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.
"},{"location":"get-started-cluster.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#frm","title":".frm","text":"For each table, the server will create a file with the .frm
extension containing the table definition (for all storage engines).
An acronym for Atomicity
, Consistency
, Isolation
, Durability
.
Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.
"},{"location":"glossary.html#atomicity","title":"Atomicity","text":"This property guarantees that all updates of a transaction occur in the database or no updates occur. This guarantee also applies with a server exit. If a transaction fails, the entire operation rolls back.
"},{"location":"glossary.html#cluster-replication","title":"Cluster replication","text":"Normal replication path for cluster members.\u00a0Can be encrypted (not by default) and unicast or multicast (unicast by default). Runs on tcp port 4567 by default.
"},{"location":"glossary.html#consistency","title":"Consistency","text":"This property guarantees that each transaction that modifies the database takes it from one consistent state to another. Consistency is implied with Isolation.
"},{"location":"glossary.html#datadir","title":"datadir","text":"The directory in which the database server stores its databases. Most Linux distribution use /var/lib/mysql
by default.
The node elected to provide a state transfer (SST or IST).
"},{"location":"glossary.html#durability","title":"Durability","text":"Once a transaction is committed, it will remain so and is resistant to a server exit.
"},{"location":"glossary.html#foreign-key","title":"Foreign Key","text":"A referential constraint between two tables. Example: A purchase order in the purchase_orders table must have been made by a customer that exists in the customers table.
"},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.
"},{"location":"glossary.html#gtid","title":"GTID","text":"Global Transaction ID, in Percona XtraDB Cluster it consists of UUID
and an ordinal sequence number which denotes the position of the change in the sequence.
HAProxy
is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites operating under very high loads that need persistence or Layer 7 processing. Supporting tens of thousands of connections is realistic with today's hardware. Its mode of operation makes integration into existing architectures easy and low-risk, while still offering the possibility of not exposing fragile web servers to the net.
Default prefix for tablespace files, e.g., ibdata1
is a 10MB auto-extendable file that MySQL creates for the shared tablespace by default.
The Isolation guarantee means that no transaction can interfere with another. When transactions access data in a session, they also lock that data to prevent other operations on that data by other transactions.
"},{"location":"glossary.html#ist","title":"IST","text":"Incremental State Transfer. Functionality which instead of whole state snapshot can catch up with the group by receiving the missing writesets, but only if the writeset is still in the donor\u2019s writeset cache.
"},{"location":"glossary.html#innodb","title":"InnoDB","text":"Storage Engine
for MySQL and derivatives (Percona Server
, MariaDB
) originally written by Innobase Oy, since acquired by Oracle. It provides an ACID-compliant storage engine with foreign key support. InnoDB is the default storage engine on all platforms.
Jenkins is a continuous integration system that we use to help ensure the continued quality of the software we produce. It helps us achieve the following aims: * no failed tests in trunk on any platform * helping developers ensure that merge requests build and test on all platforms * no known performance regressions without a well-documented explanation
"},{"location":"glossary.html#joiner-node","title":"joiner node","text":"The node joining the cluster, usually a state transfer target.
"},{"location":"glossary.html#lsn","title":"LSN","text":"Log Serial Number. A term used in relation to the InnoDB
or XtraDB
storage engines. There are System-level LSNs and Page-level LSNs. The System LSN represents the most recent LSN value assigned to page changes. Each InnoDB page contains a Page LSN which is the max LSN for that page for changes that reside on the disk. This LSN is updated when the page is flushed to disk.
A fork of MySQL
that is maintained primarily by Monty Program AB. It aims to add features, fix bugs while maintaining 100% backwards compatibility with MySQL.
This file refers to the database server\u2019s main configuration file. Most Linux distributions place it as /etc/mysql/my.cnf
or /etc/my.cnf
, but the location and name depend on the particular installation. Note that this is not the only way of configuring the server; some systems do not have this file at all and rely on command-line options to start the server and on its default values.
A MySQL
Storage Engine
that was the default until MySQL 5.5. It doesn\u2019t fully support transactions but in some scenarios may be faster than InnoDB
. Each table is stored on disk in 3 files: .frm
, .MYD
, .MYI
.
An open source database that has spawned several distributions and forks. MySQL AB was the primary maintainer and distributor until bought by Sun Microsystems, which was then acquired by Oracle. As Oracle owns the MySQL trademark, the term MySQL is often used for the Oracle distribution of MySQL as distinct from the drop-in replacements such as MariaDB
and Percona Server
.
This user is used by the SST process to run the SQL commands needed for SST
, such as creating the mysql.pxc.sst.user
and assigning it the role mysql.pxc.sst.role
.
This role has all the privileges needed to run xtrabackup to create a backup on the donor node.
"},{"location":"glossary.html#mysqlpxcsstuser","title":"mysql.pxc.sst.user","text":"This user (set up on the donor node) is assigned the mysql.pxc.sst.role
and runs XtraBackup to make backups. The password for this user is generated automatically for each SST
.
A cluster node \u2013 a single mysql instance that is in the cluster.
"},{"location":"glossary.html#numa","title":"NUMA","text":"Non-Uniform Memory Access (NUMA
) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The whole system may still operate as one unit, and all memory is basically accessible from everywhere, but at a potentially higher latency and lower performance.
Percona\u2019s branch of MySQL
with performance and management improvements.
Percona XtraDB Cluster (PXC) is a high availability solution for MySQL.
"},{"location":"glossary.html#primary-cluster","title":"primary cluster","text":"A cluster with quorum.\u00a0A non-primary cluster will not allow any operations and will give Unknown command
errors on any clients attempting to read or write from the database.
A majority (> 50%) of nodes.\u00a0In the event of a network partition, only the cluster partition that retains a quorum (if any) will remain Primary by default.
"},{"location":"glossary.html#split-brain","title":"split brain","text":"Split brain occurs when two parts of a computer cluster are disconnected, each part believing that the other is no longer running. This problem can lead to data inconsistency.
"},{"location":"glossary.html#sst","title":"SST","text":"State Snapshot Transfer is the full copy of data from one node to another. It\u2019s used when a new node joins the cluster, it has to transfer data from an existing node. Percona XtraDB Cluster: uses the xtrabackup
program for this purpose. xtrabackup
does not require READ LOCK
for the entire syncing process - only for syncing the MySQL system tables and writing the information about the binlog, galera and replica information (same as the regular Percona XtraBackup backup).
The SST method is configured with the wsrep_sst_method
variable.
In PXC 8.0, the mysql-upgrade command is now run automatically as part of SST
. You do not have to run it manually when upgrading your system from an older version.
A Storage Engine
is a piece of software that implements the details of data storage and retrieval for a database system. This term is primarily used within the MySQL
ecosystem due to it being the first widely used relational database to have an abstraction layer around storage. It is analogous to a Virtual File System layer in an Operating System. A VFS layer allows an operating system to read and write multiple file systems (for example, FAT, NTFS, XFS, ext3) and a Storage Engine layer allows a database server to access tables stored in different engines (e.g. MyISAM
, InnoDB).
A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not included in support by SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability(GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.
"},{"location":"glossary.html#uuid","title":"UUID","text":"Universally Unique IDentifier which uniquely identifies the state and the sequence of changes node undergoes. 128-bit UUID is a classic DCE UUID Version 1 (based on current time and MAC address). Although in theory this UUID could be generated based on the real MAC-address, in the Galera it is always (without exception) based on the generated pseudo-random addresses (\u201clocally administered\u201d bit in the node address (in the UUID structure) is always equal to unity).
"},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"haproxy-config.html","title":"HAProxy configuration file","text":""},{"location":"haproxy-config.html#example-of-haproxy-v1-configuration-file","title":"Example of HAProxy v1 configuration file","text":"HAProxy v1 configuration fileglobal\n log 127.0.0.1 local0\n log 127.0.0.1 local1 notice\n maxconn 4096\n uid 99\n gid 99\n daemon\n #debug\n #quiet\n\ndefaults\n log global\n mode http\n option tcplog\n option dontlognull\n retries 3\n redispatch\n maxconn 2000\n contimeout 5000\n clitimeout 50000\n srvtimeout 50000\n timeout connect 160000\n timeout client 240000\n timeout server 240000\n\nlisten mysql-cluster 0.0.0.0:3306\n mode tcp\n balance roundrobin\n option mysql-check user root\n\n server db01 10.4.29.100:3306 check\n server db02 10.4.29.99:3306 check\n server db03 10.4.29.98:3306 check\n
Options set in the configuration file
"},{"location":"haproxy-config.html#differences-between-version-1-configuration-file-and-version-2-configuration-file","title":"Differences between version 1 configuration file and version 2 configuration file","text":""},{"location":"haproxy-config.html#version-declaration","title":"Version Declaration:","text":"v1: The configuration file typically omits an explicit version declaration.
v2: You must explicitly declare the version using the version keyword followed by the specific version number (e.g., version = 2.0).
"},{"location":"haproxy-config.html#global-parameters","title":"Global Parameters:","text":"v1 and v2: Both versions utilize a global section to define global parameters, but certain parameters might have different names or functionalities across versions. Refer to the official documentation for specific changes.
"},{"location":"haproxy-config.html#configuration-blocks","title":"Configuration Blocks:","text":"v1 and v2: Both versions use a similar indentation-based structure to define configuration blocks like frontend and backend. However, v2 introduces new blocks and keywords not present in v1 (e.g., process, http-errors).
"},{"location":"haproxy-config.html#directives","title":"Directives:","text":"v1 and v2: While many directives remain consistent, some might have renamed keywords, altered syntax, or entirely new functionalities in v2. Consult the official documentation for a comprehensive comparison of directives and their usage between versions.
"},{"location":"haproxy-config.html#comments","title":"Comments:","text":"v1 and v2: Both versions support comments using the # symbol. However, v2 introduces multi-line comments using / \u2026 / syntax, which v1 does not support.
"},{"location":"haproxy-config.html#version-2-configuration-file","title":"Version 2 configuration file","text":"This simplified example is for load balancing. HAProxy offers numerous features for advanced configurations and fine-tuning.
This example demonstrates a basic HAProxy v2 configuration file for load-balancing HTTP traffic across two backend servers.
"},{"location":"haproxy-config.html#global-section","title":"Global Section","text":"The following settings are defined in the Global section:
The maximum number of concurrent connections allowed by HAProxy.
The user and group under which HAProxy should run.
A UNIX socket for accessing HAProxy statistics.
In the defaults
block, we set the operating mode to TCP and define option tcpka
global\n maxconn 4000 # Maximum concurrent connections (adjust as needed)\n user haproxy # User to run HAProxy process\n group haproxy # Group to run HAProxy process\n stats socket /var/run/haproxy.sock mode 666 level admin\n\ndefaults\n mode tcp # Set operating mode to TCP\n #option tcpka\n
"},{"location":"haproxy-config.html#frontend-section","title":"Frontend Section","text":"The following settings are defined in this section:
Create a frontend named \u201cwebserver\u201d that listens on port 80 for incoming HTTP requests.
Enable the httpclose
option to terminate idle client connections efficiently.
Specify the default backend for this frontend.
frontend gr-prod-rw\n bind 0.0.0.0:3307 \n mode tcp\n option contstats\n option dontlognull\n option clitcpka\n default_backend gr-prod-rw\n
You should add the following options:
option Descriptioncontstats
Provides continuous updates to the statistics of your connections. This option ensures that your traffic counters are updated in real-time, rather than only after a connection closes, giving you a more accurate and immediate view of your traffic patterns. dontlognull
Does not log requests that don\u2019t transfer any data, like health check pings. clitcpka
Configures TCP keepalive settings for client connections. This option allows the operating system to detect and terminate inactive connections, even if HAProxy isn\u2019t actively checking them."},{"location":"haproxy-config.html#backend-section","title":"Backend Section","text":"In this section, you specify the backend servers that will handle requests forwarded by the frontend. List each server with their respective IP addresses, ports, and weights.
You set up a health check with check inter 10000
. This option means that HAProxy performs a health check on each server every 10,000 milliseconds or 10 seconds. If a server fails a health check, it is temporarily removed from the pool until it passes subsequent checks, ensuring smooth and reliable client service. This proactive monitoring is crucial for maintaining an efficient and uninterrupted backend service.
Set the number of retries to put the service down and up. For example, you set the rise
parameter to 1
, which means the server only needs to pass one health check before the server is considered healthy again. The fall
parameter is set to 2
, requiring two consecutive failed health checks before the server is marked as unhealthy.
The weight 50 backup
setting is crucial for load balancing; this setting determines that this server only receives traffic if the primary servers are down. The weight of 50 indicates the relative amount of traffic the server will handle compared to other servers in the backup role. This method ensures the server can handle a significant load even in backup mode, but not as much as a primary server.
The following example lists these options. Replace the server details (IP addresses, ports) with your backend server information. Adjust weights and other options according to your specific needs and server capabilities.
backend servers\n server server1 10.0.68.39:3307 check inter 10000 rise 1 fall 2 weight 50\n server server1 10.0.68.74:3307 check inter 10000 rise 1 fall 2 weight 50 backup\n server server1 10.0.68.20:3307 check inter 10000 rise 1 fall 2 weight 1 backup\n
More information about how to configure HAProxy
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"haproxy.html","title":"Load balancing with HAProxy","text":"The free and open source software, HAProxy, provides a high-availability load balancer and reverse proxy for TCP and HTTP-based applications. HAProxy can distribute requests across multiple servers, ensuring optimal performance and security.
Here are the benefits of using HAProxy:
HAProxy supports layer 4 (TCP) and layer 7 (HTTP) load balancing, which means it can handle different network traffic and protocols. HAProxy requires patched backends to tunnel IP traffic in layer 4 load-balancing tunnel mode. This mode also disables some layer 7 advanced features.
HAProxy has rich features, such as URL rewriting, SSL/TLS termination, gzip compression, caching, observability, health checks, retries, circuit breakers, WebSocket, HTTP/2 and HTTP/3 support, and more.
HAProxy has a reputation for being fast and efficient in terms of processor and memory usage. The software is written in C and has an event-driven and multithreaded architecture.
HAProxy has a user-friendly status page that shows detailed information about the load balancer and the backends. The software also integrates well with third-party monitoring tools and services.
HAProxy supports session retention and cookie guidance, which can help with sticky sessions and affinity.
Access the server as a user with administrative privileges, either root
or use sudo.
Create a Dedicated HAProxy user account for HAProxy to interact with your MySQL instance. This account enhances security.
Make the following changes to the example CREATE USER
command to replace the placeholders:
Replace haproxy_user with your preferred username.
Substitute haproxy_server_ip
with the actual IP address of your HAProxy server.
Choose a robust password for the \u2018strong_password\u2019.
Execute the following command:
mysql> CREATE USER 'haproxy_user'@'haproxy_server_ip' IDENTIFIED BY 'strong_password';\n
Grant the minimal set of privileges necessary for HAProxy to perform its health checks and monitoring.
Execute the following:
GRANT SELECT ON `mysql`.* TO 'haproxy_user'@'haproxy_server_ip';\nFLUSH PRIVILEGES;\n
"},{"location":"haproxy.html#important-considerations","title":"Important Considerations","text":"If your MySQL servers are part of a replication cluster, create the user and grant privileges on each node to ensure consistency.
For enhanced security, consider restricting the haproxy_user
to specific databases or tables to monitor rather than granting permissions to the entire mysql
database schema.
Add the HAProxy Enterprise repository to your system by following the instructions for your operating system.
Install HAProxy on the node you intend to use for load balancing. You can install it using the package manager.
On a Debian-derived distributionOn a Red Hat-derived distribution$ sudo apt update\n$ sudo apt install haproxy\n
$ sudo yum update\n$ sudo yum install haproxy\n
To start HAProxy, use the haproxy
command. You may pass any number of configuration parameters on the command line. To use a configuration file, add the -f
option.
$ # Passing one configuration file\n$ sudo haproxy -f haproxy-1.cfg\n\n$ # Passing multiple configuration files\n$ sudo haproxy -f haproxy-1.cfg haproxy-2.cfg\n\n$ # Passing a directory\n$ sudo haproxy -f conf-dir\n
You can pass the name of an existing configuration file or a directory. HAProxy includes all files with the .cfg extension in the supplied directory. Another way to pass multiple files is to use -f
multiple times.
For more information, see HAProxy Management Guide
For information, see HAProxy configuration file
Important
In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password
. HAProxy does not support this authentication plugin. Create a mysql user using the mysql_native_password
authentication plugin.
mysql> CREATE USER 'haproxy_user'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\n
See also
MySQL Documentation: CREATE USER statement
"},{"location":"haproxy.html#uninstall","title":"Uninstall","text":"To uninstall haproxy version 2 from a Linux system, follow the latest instructions.
"},{"location":"haproxy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"high-availability.html","title":"High availability","text":"In a basic setup with three nodes, if you take any of the nodes down, Percona XtraDB Cluster continues to function. At any point, you can shut down any node to perform maintenance or configuration changes.
Even in unplanned situations (like a node crashing or if it becomes unavailable over the network), you can run queries on working nodes. If a node is down and the data has changed, there are two methods that the node may use when it joins the cluster again:
Method What happens Description SST The joiner node receives a full copy of the database state from the donor node. You initiate a Solid State Transfer (SST) when adding a new node to a Galera cluster or when a node has fallen too far out of sync IST Only incremental changes are copied from one node to another. This operation can be used when a node is down for a short period."},{"location":"high-availability.html#sst","title":"SST","text":"The primary benefit of SST is that it ensures data consistency across the cluster by providing a complete snapshot of the database at a point in time. However, SST can be resource-intensive and time-consuming if the operation transfers significant data. The donor node is locked during this transfer, impacting cluster performance.
You initiate a state snapshot transfer (SST) when a node joins a cluster without the complete data set. This process involves transferring a full data copy from one node to another, ensuring that the joining node has an exact replica of the cluster\u2019s current state. Technically, SST is performed by halting the donor node\u2019s database operations momentarily to create a consistent snapshot of its data. The snapshot is then transferred over the network to the joining node, which applies it to its database system.
Even without locking your cluster in a read-only state, SST may be intrusive and disrupt the regular operation of your services. IST avoids disruption. A node fetches only the changes that happened while that node was unavailable. IST uses a caching mechanism on nodes.
"},{"location":"high-availability.html#ist","title":"IST","text":"Incremental State Transfer (IST) is a method that allows a node to request only the missing transactions from another node in the cluster. This process is beneficial because it reduces the amount of data that must be transferred, leading to faster recovery times for nodes that are out of sync. Additionally, IST minimizes the network bandwidth required for state transfer, which is particularly advantageous in environments with limited resources.
However, there are drawbacks to consider. Reliance on another node\u2019s state means that an SST operation is necessary if no node in the cluster has the required information.
When a node joins the cluster with a state slightly behind the current cluster state, IST does not require the joining node to copy the entire database state. Technically, IST transfers only the missing write-sets that the joining node needs to catch up with the cluster. The donor node, the node with the most recent state, sends the write-sets to the joining node through a dedicated channel. The joining node then applies these write-sets to its database state incrementally until it synchronizes with the cluster\u2019s current state. The donor node can experience a performance impact during an IST operation, typically less severe than during SST.
"},{"location":"high-availability.html#monitor-the-node-state","title":"Monitor the node state","text":"The wsrep_state_comment
variable returns the current state of a Galera node in the cluster, providing information about the node\u2019s role and status. The value can vary depending on the specific state of the Galera node, such as the following:
\u201cSynced\u201d
\u201cDonor/Desynced\u201d
\u201cDonor/Joining\u201d
\u201cJoined\u201d
You can monitor the current state of a node using the following command:
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';\n
If the node is in Synced (6)
state, that node is part of the cluster and can handle the traffic.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"install-index.html","title":"Install Percona XtraDB Cluster","text":"Install Percona XtraDB Cluster on all hosts that you are planning to use as cluster nodes and ensure that you have root access to the MySQL server on each one.
We gather Telemetry data in the Percona packages and Docker images.
"},{"location":"install-index.html#ports-required","title":"Ports required","text":"Open specific ports for the Percona XtraDB Cluster to function correctly.
Port 3306 is the default port for MySQL. This port facilitates communication and data transfer between nodes and applications.
Port 4567 is used for Galera replication traffic, which is vital for synchronizing data across the cluster nodes.
Port 4568 is used for Incremental State Transfer (IST), allowing nodes to transfer only the missing blocks of data.
Port 4444 is for State Snapshot Transfer (SST), which involves a complete data snapshot transfer from one node to another.
Port 9200 if you use Percona Monitoring and Management (PMM) for cluster monitoring.
We recommend installing Percona XtraDB Cluster from official Percona software repositories using the corresponding package manager for your system:
Debian or Ubuntu
Red Hat or CentOS
Important
After installing Percona XtraDB Cluster, the mysql
service is stopped but enabled so that it may start the next time you restart the system. The service starts if the the grastate.dat
file exists and the value of seqno
is not -1.
See also
More information about Galera state information in Index of files created by PXC grastat.dat
"},{"location":"install-index.html#installation-alternatives","title":"Installation alternatives","text":"Percona also provides a generic tarball with all required files and binaries for manual installation:
If you want to build Percona XtraDB Cluster from source, see Compiling and Installing from Source Code.
If you want to run Percona XtraDB Cluster using Docker, see Running Percona XtraDB Cluster in a Docker Container.
"},{"location":"install-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"intro.html","title":"About Percona XtraDB Cluster","text":"Percona XtraDB Cluster is a fully open-source high-availability solution for MySQL. It integrates Percona Server for MySQL and Percona XtraBackup with the Galera library to enable synchronous multi-source replication.
A cluster consists of nodes, where each node contains the same set of data synchronized accross nodes. The recommended configuration is to have at least 3 nodes, but you can have 2 nodes as well. Each node is a regular MySQL Server instance (for example, Percona Server). You can convert an existing MySQL Server instance to a node and run the cluster using this node as a base. You can also detach any node from the cluster and use it as a regular MySQL Server instance.
"},{"location":"intro.html#benefits","title":"Benefits","text":"When you execute a query, it is executed locally on the node. All data is available locally, no need for remote access.
No central management. You can loose any node at any point of time, and the cluster will continue to function without any data loss.
Good solution for scaling a read workload. You can put read queries to any of the nodes.
Overhead of provisioning new node. When you add a new node, it has to copy the full data set from one of existing nodes. If it is 100 GB, it copies 100 GB.
This can\u2019t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes versus all traffic to 1 node, but you can\u2019t expect a lot. All writes still have to go on all nodes.
You have several duplicates of data: for 3 nodes you have 3 duplicates.
Percona XtraDB Cluster https://www.percona.com/software/mysql-database/percona-xtradb-cluster is based on Percona Server for MySQL running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup .
"},{"location":"intro.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"limitation.html","title":"Percona XtraDB Cluster limitations","text":"The following limitations apply to Percona XtraDB Cluster:
Replication works only with InnoDB storage engine.
Any writes to tables of other types are not replicated.
Unsupported queries:
LOCK TABLES
and UNLOCK TABLES
is not supported in multi-source setups
Lock functions, such as GET_LOCK()
, RELEASE_LOCK()
, and so on
Query log cannot be directed to table.
If you enable query logging, you must forward the log to a file:
log_output = FILE\n
Use general_log
and general_log_file
to choose query logging and the log file name.
Maximum allowed transaction size is defined by the wsrep_max_ws_rows
and wsrep_max_ws_size
variables.
LOAD DATA INFILE
processing will commit every 10 000 rows. So large transactions due to LOAD DATA
will be split to series of small transactions.
Transaction issuing COMMIT
may still be aborted at that stage.
Due to cluster-level optimistic concurrency control, there can be two transactions writing to the same rows and committing in separate Percona XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted. For cluster-level aborts, Percona XtraDB Cluster gives back deadlock error code:
Error message(Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).\n
XA transactions are not supported
Due to possible rollback on commit.
Write throughput of the whole cluster is limited by the weakest node.
If one node becomes slow, the whole cluster slows down. If you have requirements for stable high performance, then it should be supported by corresponding hardware.
Minimal recommended size of cluster is 3 nodes.
The 3rd node can be an arbitrator.
enforce_storage_engine=InnoDB
is not compatible with wsrep_replicate_myisam=OFF
wsrep_replicate_myisam
is set to OFF
by default.
Avoid ALTER TABLE ... IMPORT/EXPORT
workloads when running Percona XtraDB Cluster in cluster mode.
It can lead to node inconsistency if not executed in sync on all nodes.
All tables must have a primary key.
This ensures that the same rows appear in the same order on different nodes. The DELETE
statement is not supported on tables without a primary key.
See also
Galera Documentation: Tables without Primary Keys
Avoid reusing the names of persistent tables for temporary tables
Although MySQL does allow having temporary tables named the same as persistent tables, this approach is not recommended.
Galera Cluster blocks the replication of those persistent tables the names of which match the names of temporary tables.
With wsrep_debug set to 1, the error log may contain the following message:
Error message... [Note] WSREP: TO BEGIN: -1, 0 : create table t (i int) engine=innodb\n... [Note] WSREP: TO isolation skipped for: 1, sql: create table t (i int) engine=innodb.Only temporary tables affected.\n
See also
MySQL Documentation: Problems with temporary tables
As of version 8.0.21, an INPLACE ALTER TABLE query takes an internal shared lock on the table during the execution of the query. The LOCK=NONE
clause is no longer allowed for all of the INPLACE ALTER TABLE queries due to this change.
This change addresses a deadlock, which could cause a cluster node to hang in the following scenario:
An INPLACE ALTER TABLE
query in one session or being applied as Total Order Isolation (TOI)
A DML on the same table from another session
Do not use one or more dot characters (.) when defining the values for the following variables:
log_bin
log_bin_index
MySQL and XtraBackup handles the value in different ways and this difference causes unpredictable behavior.
"},{"location":"limitation.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"load-balance-proxysql.html","title":"Load balance with ProxySQL","text":"ProxySQL is a high-performance SQL proxy. ProxySQL runs as a daemon watched by a monitoring process. The process monitors the daemon and restarts it in case of a crash to minimize downtime.
The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.
The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.
See also
ProxySQL Documentation
ProxySQL v2 natively supports Percona XtraDB Cluster. With this version, proxysql-admin
tool does not require any custom scripts to keep track of Percona XtraDB Cluster status.
Important
In version 8.0, Percona XtraDB Cluster does not support ProxySQL v1.
"},{"location":"load-balance-proxysql.html#manual-configuration","title":"Manual configuration","text":"This section describes how to configure ProxySQL with three Percona XtraDB Cluster nodes.
Node Host Name IP address Node 1 pxc1 192.168.70.71 Node 2 pxc2 192.168.70.72 Node 3 pxc3 192.168.70.73 Node 4 proxysql 192.168.70.74ProxySQL can be configured either using the /etc/proxysql.cnf
file or through the admin interface. The admin interface is recommended because this interface can dynamically change the configuration without restarting the proxy.
To connect to the ProxySQL admin interface, you need a mysql
client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql
client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally. For this tutorial, install Percona XtraDB Cluster on Node 4:
Changes in the installation procedure
In Percona XtraDB Cluster 8.0, ProxySQL is not installed automatically as a dependency of the percona-xtradb-cluster-client-8.0
package. You should install the proxysql
package separately.
Note
ProxySQL has multiple versions in the version 2 series.
root@proxysql:~# apt install percona-xtradb-cluster-client\nroot@proxysql:~# apt install proxysql2\n
$ sudo yum install Percona-XtraDB-Cluster-client-80\n$ sudo yum install proxysql2\n
To connect to the admin interface, use the credentials, host name and port specified in the global variables.
Warning
Do not use default credentials in production!
The following example shows how to connect to the ProxySQL admin interface with default credentials:
root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql@proxysql>\n
To see the ProxySQL databases and tables use the following commands:
mysql@proxysql> SHOW DATABASES;\n
The following output shows the ProxySQL databases:
Expected output+-----+---------+-------------------------------+\n| seq | name | file |\n+-----+---------+-------------------------------+\n| 0 | main | |\n| 2 | disk | /var/lib/proxysql/proxysql.db |\n| 3 | stats | |\n| 4 | monitor | |\n+-----+---------+-------------------------------+\n4 rows in set (0.00 sec)\n
mysql@proxysql> SHOW TABLES;\n
The following output shows the ProxySQL tables:
Expected output+--------------------------------------+\n| tables |\n+--------------------------------------+\n| global_variables |\n| mysql_collations |\n| mysql_query_rules |\n| mysql_replication_hostgroups |\n| mysql_servers |\n| mysql_users |\n| runtime_global_variables |\n| runtime_mysql_query_rules |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers |\n| runtime_scheduler |\n| scheduler |\n+--------------------------------------+\n12 rows in set (0.00 sec)\n
For more information about admin databases and tables, see Admin Tables
Note
The ProxySQL configuration can reside in the following areas:
MEMORY (your current working place)
RUNTIME (the production settings)
DISK (durable configuration, saved inside an SQLITE database)
When you change a parameter, you change it in MEMORY area. This ability is by design and lets you test the changes before pushing the change to production (RUNTIME), or save the change to disk.
"},{"location":"load-balance-proxysql.html#add-cluster-nodes-to-proxysql","title":"Add cluster nodes to ProxySQL","text":"To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers
table.
Note
ProxySQL uses the concept of hostgroups to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. There are many ways you can configure hostgroups (for example, source and replicas, read and write load, etc.) and a every node can be a member of multiple hostgroups.
This example adds three Percona XtraDB Cluster nodes to the default hostgroup (0
), which receives both write and read traffic:
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.71',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.72',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.73',3306);\n
To see the nodes:
mysql@proxysql> SELECT * FROM mysql_servers;\n
The following output shows the list of nodes:
Expected output+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| 0 | 192.168.70.71 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n| 0 | 192.168.70.72 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n| 0 | 192.168.70.73 | 3306 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n3 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#create-proxysql-monitoring-user","title":"Create ProxySQL monitoring user","text":"To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE
privilege on any node in the cluster and configure the user in ProxySQL.
The following example shows how to add a monitoring user on Node 2:
mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\nmysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%';\n
The following example shows how to configure this user on the ProxySQL node:
mysql@proxysql> UPDATE global_variables SET variable_value='proxysql'\n WHERE variable_name='mysql-monitor_username';\nmysql@proxysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\n WHERE variable_name='mysql-monitor_password';\n
To load this configuration at runtime, issue a LOAD
command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue a SAVE
command.
mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql@proxysql> SAVE MYSQL VARIABLES TO DISK;\n
To ensure that monitoring is enabled, check the monitoring logs:
mysql@proxysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+----------------------+---------------+\n| hostname | port | time_start_us | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695 | NULL |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779 | NULL |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627 | NULL |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557 | NULL |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737 | NULL |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447 | NULL |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+-------------------+------------+\n| hostname | port | time_start_us | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948 | NULL |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803 | NULL |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711 | NULL |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783 | NULL |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631 | NULL |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542 | NULL |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n
The previous examples show that ProxySQL is able to connect and ping the nodes you have added.
To enable monitoring of these nodes, load them at runtime:
mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\n
"},{"location":"load-balance-proxysql.html#create-proxysql-client-user","title":"Create ProxySQL client user","text":"ProxySQL must have users that can access backend nodes to manage connections.
To add a user, insert credentials into mysql_users
table:
mysql@proxysql> INSERT INTO mysql_users (username,password) VALUES ('sbuser','sbpass');\n
Expected output Query OK, 1 row affected (0.00 sec)\n
Note
ProxySQL currently doesn\u2019t encrypt passwords.
Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):
mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL USERS TO DISK;\n
To confirm that the user has been set up correctly, you can try to log in as root:
root@proxysql:~# mysql -u sbuser -psbpass -h 127.0.0.1 -P 6033\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n
To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:
mysql@pxc3> CREATE USER 'sbuser'@'192.168.70.74' IDENTIFIED BY 'sbpass';\n
Expected output Query OK, 0 rows affected (0.01 sec)\n
mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.74';\n
Expected output Query OK, 0 rows affected (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#test-cluster-with-sysbench","title":"Test cluster with sysbench","text":"You can install sysbench
from Percona software repositories:
root@proxysql:~# apt install sysbench\n
root@proxysql:~# yum install sysbench\n
Note
sysbench
requires ProxySQL client user credentials that you created in Creating ProxySQL Client User.
Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes:
mysql@pxc1> CREATE DATABASE sbtest;\n
Populate the table with data for the benchmark on the ProxySQL node:
root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nprepare\n
Run the benchmark on the ProxySQL node:
root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nrun\n
ProxySQL stores collected data in the stats
schema:
mysql@proxysql> SHOW TABLES FROM stats;\n
Expected output +--------------------------------+\n| tables |\n+--------------------------------+\n| stats_mysql_query_rules |\n| stats_mysql_commands_counters |\n| stats_mysql_processlist |\n| stats_mysql_connection_pool |\n| stats_mysql_query_digest |\n| stats_mysql_query_digest_reset |\n| stats_mysql_global |\n+--------------------------------+\n
For example, to see the number of commands that run on the cluster:
mysql@proxysql> SELECT * FROM stats_mysql_commands_counters;\n
Expected output +---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| Command | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| ALTER_TABLE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| ANALYZE_TABLE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| BEGIN | 2212625 | 3686 | 55 | 2162 | 899 | 569 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| CHANGE_REPLICATION_SOURCE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| COMMIT | 21522591 | 3628 | 0 | 0 | 0 | 1765 | 1590 | 272 | 1 | 0 | 0 | 0 | 0 | 0 |\n| CREATE_DATABASE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| CREATE_INDEX | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| DELETE | 2904130 | 3670 | 35 | 1546 | 1346 | 723 | 19 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |\n| DESCRIBE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| INSERT | 19531649 | 3660 | 39 | 1588 | 1292 | 723 | 12 | 2 | 0 | 1 | 0 | 1 | 2 | 0 |\n...\n| SELECT | 35049794 | 51605 | 501 | 26180 | 16606 | 8241 | 70 | 3 | 4 | 0 | 0 | 0 | 0 | 0 |\n| SELECT_FOR_UPDATE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n...\n| UPDATE | 6402302 | 7367 | 75 | 2503 | 3020 | 1743 | 23 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |\n| USE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n| SHOW | 19691 | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |\n| UNKNOWN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n45 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#automatic-failover","title":"Automatic failover","text":"ProxySQL will automatically detect if a node is not available or not synced with the cluster.
You can check the status of all available nodes by running:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
The following output shows the status of all available nodes:
Expected output+--------------+---------------+------+--------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
To test problem detection and fail-over mechanism, shut down Node 3:
root@pxc3:~# service mysql stop\n
ProxySQL will detect that the node is down and update its status to OFFLINE_SOFT
:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
Expected output +--------------+---------------+------+--------------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | OFFLINE_SOFT |\n+--------------+---------------+------+--------------+\n3 rows in set (0.00 sec)\n
Now start Node 3 again:
root@pxc3:~# service mysql start\n
The script will detect the change and mark the node as ONLINE
:
mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
Expected output +--------------+---------------+------+--------+\n| hostgroup_id | hostname | port | status |\n+--------------+---------------+------+--------+\n| 0 | 192.168.70.71 | 3306 | ONLINE |\n| 0 | 192.168.70.72 | 3306 | ONLINE |\n| 0 | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
"},{"location":"load-balance-proxysql.html#assisted-maintenance-mode","title":"Assisted maintenance mode","text":"Usually, to take a node down for maintenance, you need to identify that node, update its status in ProxySQL to OFFLINE_SOFT
, wait for ProxySQL to divert traffic from this node, and then initiate the shutdown or perform maintenance tasks. Percona XtraDB Cluster includes a special maintenance mode for nodes that enables you to take a node down without adjusting ProxySQL manually.
Initiating pxc_maint_mode=MAINTENANCE
does not disconnect existing connections. You must terminate these connections by either running your application code or forcing a re-connection. With a re-connection, the new connections are re-routed around the PXC node in MAINTENANCE
mode.
Assisted maintenance mode is controlled via the pxc_maint_mode
variable, which is monitored by ProxySQL and can be set to one of the following values:
DISABLED
: This value is the default state that tells ProxySQL to route traffic to the node as usual.
SHUTDOWN
: This state is set automatically when you initiate node shutdown.
You may need to shut down a node when upgrading the OS, adding resources, changing hardware parts, relocating the server, etc.
When you initiate node shutdown, Percona XtraDB Cluster does not send the signal immediately. Intead, it changes the state to pxc_maint_mode=SHUTDOWN
and waits for a predefined period (10 seconds by default). When ProxySQL detects that the mode is set to SHUTDOWN
, it changes the status of this node to OFFLINE_SOFT
. This status stops creating new node connections. After the transition period, long-running active transactions are aborted.
MAINTENANCE
: You can change to this state if you need to perform maintenance on a node without shutting it down.
You may need to isolate the node for a specific time so that it does not receive traffic from ProxySQL while you resize the buffer pool, truncate the undo log, defragment, or check disks, etc.
To do this, manually set pxc_maint_mode=MAINTENANCE
. Control is not returned to the user for a predefined period (10 seconds by default). You can increase the transition period using the pxc_maint_transition_period
variable to accommodate long-running transactions. If the period is long enough for all transactions to finish, there should be little disruption in the cluster workload. If you increase the transition period, the packaging script may determine the wait as a server stall.
When ProxySQL detects that the mode is set to MAINTENANCE
, it stops routing traffic to the node. During the transition period, any existing connections continue, but ProxySQL avoids opening new connections and starting transactions. Still, the user can open connections to monitor status.
Once control is returned, you can perform maintenance activity.
Note
Data changes continue to be replicated across the cluster.
After you finish maintenance, set the mode back to DISABLED
. When ProxySQL detects this, it starts routing traffic to the node again.
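Putting the steps together, a typical maintenance session on the node might look like the following sketch (the waiting and maintenance steps are placeholders):
mysql> SET GLOBAL pxc_maint_mode=MAINTENANCE;\n-- wait for ProxySQL to mark the node OFFLINE_SOFT, then perform maintenance\nmysql> SET GLOBAL pxc_maint_mode=DISABLED;\n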
Related sections
Setting up a testing environment with ProxySQL
"},{"location":"load-balance-proxysql.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"monitoring.html","title":"Monitor the cluster","text":"Each node can have a different view of the cluster. There is no centralized node to monitor. To track down the source of issues, you have to monitor each node independently.
The values of many variables depend on the node from which you are querying. For example, replication traffic is sent from one node and the corresponding writes are received by all other nodes.
Having data from all nodes can help you understand where flow messages are coming from, which node sends excessively large transactions, and so on.
"},{"location":"monitoring.html#manual-monitoring","title":"Manual monitoring","text":"Manual cluster monitoring can be performed using myq-tools.
"},{"location":"monitoring.html#alerting","title":"Alerting","text":"Besides standard MySQL alerting, you should use at least the following triggers specific to Percona XtraDB Cluster:
wsrep_cluster_status
!= Primary
wsrep_connected
!= ON
wsrep_ready
!= ON
For additional alerting, consider the following:
Excessive replication conflicts can be identified using the wsrep_local_cert_failures
and wsrep_local_bf_aborts
variables
Excessive flow control messages can be identified using the wsrep_flow_control_sent
and wsrep_flow_control_recv
variables
Large replication queues can be identified using the wsrep_local_recv_queue
variable.
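A quick way to check these status variables on any node is a single query; the following is a minimal sketch that you can adapt to your alerting tool:
mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_connected','wsrep_ready','wsrep_local_cert_failures','wsrep_local_bf_aborts','wsrep_flow_control_sent','wsrep_flow_control_recv','wsrep_local_recv_queue');\n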
Cluster metrics collection for long-term graphing should be done at least for the following:
wsrep_local_recv_queue
and wsrep_local_send_queue
wsrep_flow_control_sent
and wsrep_flow_control_recv
wsrep_replicated
and wsrep_received
wsrep_replicated_bytes
and wsrep_received_bytes
wsrep_local_cert_failures
and wsrep_local_bf_aborts
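For periodic collection, these counters can be scraped on every node with a single SELECT; the following is a sketch that assumes you store the results with your own tooling:
mysql> SELECT VARIABLE_NAME, VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME LIKE 'wsrep_%';\n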
Percona Monitoring and Management includes two dashboards to monitor PXC:
PXC/Galera Cluster Overview:
PXC/Galera Graphs:
These dashboards are available from the menu:
Please refer to the official documentation for details on Percona Monitoring and Management installation and setup.
"},{"location":"monitoring.html#other-reading","title":"Other reading","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"nbo.html","title":"Non-Blocking Operations (NBO) method for Online Scheme Upgrades (OSU)","text":"An Online Schema Upgrade can be a daily issue in an environment with accelerated development and deployment. The task becomes more difficult as the data grows. An ALTER TABLE
statement is a multi-step operation and must run until it is complete. Aborting the statement may be more expensive than letting it complete.
The Non-Blocking Operations (NBO) method is similar to the TOI
method (see Online Schema Upgrade for more information on the available types of online schema upgrades). Every replica processes the DDL statement at the same point in the cluster transaction stream, and other transactions cannot commit during the operation. The NBO
method provides a more efficient locking strategy and avoids the TOI
issue of long-running DDL statements blocking cluster updates.
In the NBO method, the supported DDL statement acquires a metadata lock on the table or schema at a late stage of the operation. The lock_wait_timeout
system variable defines the timeout, measured in seconds, to acquire metadata locks. The default value, 3153600, could cause infinite
waits and should not be used with the NBO
method.
Attempting a State Snapshot Transfer (SST) fails during the NBO operation.
To dynamically set the NBO
mode in the client, run the following statement:
SET SESSION wsrep_OSU_method='NBO';\n
"},{"location":"nbo.html#supported-ddl-statements","title":"Supported DDL statements","text":"The NBO method supports the following DDL statements:
ALTER TABLE
ALTER INDEX
CREATE INDEX
DROP INDEX
The NBO
method does not support the following:
Running two DDL statements with conflicting locks on the same table. For example, you cannot run two ALTER TABLE
statements for an employees
table.
Modifying a table changed during the NBO operation. However, you can modify other tables and execute NBO queries on other tables.
See the Percona XtraDB Cluster 8.0.25-15.1 Release notes for the latest information.
"},{"location":"nbo.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"online-schema-upgrade.html","title":"Online schema upgrade","text":"Database schemas must change as applications change. For a cluster, the schema upgrade must occur while the system is online. A synchronous cluster requires all active nodes have the same data. Schema updates are performed using Data Definition Language (DDL) statements, such as ALTER TABLE <table_name> DROP COLUMN <column_name>
.
The DDL statements are non-transactional, so these statements use up-front locking to avoid the chance of deadlocks and cannot be rolled back. We recommend that you test your schema changes, especially if you must run an ALTER
statement on large tables. Verify the backups before updating the schemas in the production environment. A failure in a schema change can cause your cluster to drop nodes and lose data.
Percona XtraDB Cluster supports the following methods for making online schema changes:
Method Name Reason for use Description TOI or Total Order Isolation Consistency is important. Other transactions are blocked while the cluster processes the DDL statements. This is the default method for the wsrep-OSU-method variable. The isolation of the DDL statement guarantees consistency. The DDL replication uses a Statement format. Each node processes the replicated DDL statement at same position in the replication stream. All other writes must wait until the DDL statement is executed. While a DDL statement is running, any long-running transactions in progress and using the same resource receive a deadlock error at commit and are rolled back. The pt-online-schema-change in the Percona Toolkit can alter the table without using locks. There are limitations: only InnoDB tables can be altered, and thewsrep_OSU_method
must be TOI
. RSU or Rolling Schema Upgrade This method guarantees high availability during the schema upgrades. The node desynchronizes with the cluster and disables flow control during the execution of the DDL statement. The rest of the cluster is not affected. After the statement execution, the node applies delayed events and synchronizes with the cluster. Although the cluster is active, during the process some nodes have the newer schema and some nodes have the older schema. The RSU method is a manual operation. For this method, the gcache
must be large enough to store the data for the duration of the DDL change. NBO or Non-Blocking Operation This method is used when consistency is important and uses a more efficient locking strategy. This method is similar to TOI
. DDL operations acquire an exclusive metadata lock on the table or schema at a late stage of the operation when updating the table or schema definition. Attempting a State Snapshot Transfer (SST) fails during the NBO operation. This mode uses a more efficient locking strategy and avoids the TOI
issue of long-running DDL statements blocking other updates in the cluster."},{"location":"online-schema-upgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"performance-schema-instrumentation.html","title":"Perfomance Schema instrumentation","text":"To improve monitoring Percona XtraDB Cluster has implemented an infrastructure to expose Galera instruments (mutexes, cond-variables, files, threads) as a part of PERFORMANCE_SCHEMA
.
Although mutexes and condition variables from wsrep
were already part of PERFORMANCE_SCHEMA
threads were not.
Mutexes, condition variables, threads, and files from Galera library also were not part of the PERFORMANCE_SCHEMA
.
You can see the complete list of available instruments by running:
mysql> SELECT * FROM performance_schema.setup_instruments WHERE name LIKE '%galera%' OR name LIKE '%wsrep%';\n
Expected output +----------------------------------------------------------+---------+-------+\n| NAME | ENABLED | TIMED |\n+----------------------------------------------------------+---------+-------+\n| wait/synch/mutex/sql/LOCK_wsrep_ready | NO | NO |\n| wait/synch/mutex/sql/LOCK_wsrep_sst | NO | NO |\n| wait/synch/mutex/sql/LOCK_wsrep_sst_init | NO | NO |\n...\n| stage/wsrep/wsrep: in rollback thread | NO | NO |\n| stage/wsrep/wsrep: aborter idle | NO | NO |\n| stage/wsrep/wsrep: aborter active | NO | NO |\n+----------------------------------------------------------+---------+-------+\n73 rows in set (0.00 sec)\n
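As the output shows, these instruments are disabled by default. The following is a minimal sketch of enabling them at runtime; for a permanent change, use the performance_schema_instrument option in the configuration file instead:
mysql> UPDATE performance_schema.setup_instruments SET ENABLED='YES', TIMED='YES' WHERE NAME LIKE '%wsrep%' OR NAME LIKE '%galera%';\n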
Some of the most important are:
Two main actions that Galera does are REPLICATION
and ROLLBACK
. Mutexes, condition variables, and threads related to this are part of PERFORMANCE_SCHEMA
.
Galera internally uses monitor mechanism to enforce ordering of events. These monitor control events apply and are mainly responsible for the wait between different action. All such monitor mutexes and condition variables are covered as part of this implementation.
There are lot of other miscellaneous action related to receiving of package and servicing messages. Mutexes and condition variables needed for them are now visible too. Threads that manage receiving and servicing are also being instrumented.
This feature has exposed all the important mutexes, condition variables that lead to lock/threads/files as part of this process.
Besides exposing file it also tracks write/read bytes like stats for file. These stats are not exposed for Galera files as Galera uses mmap
.
Also, there are some threads that are short-lived and created only when needed especially for SST/IST purpose. They are also tracked but come into PERFORMANCE_SCHEMA
tables only if/when they are created.
Stage Info
from Galera specific function which server updates to track state of running thread is also visible in PERFORMANCE_SCHEMA
.
Galera uses customer data-structure in some cases (like STL structures). Mutexes used for protecting these structures which are not part of mainline Galera logic or doesn\u2019t fall in big-picture are not tracked. Same goes with threads that are gcomm
library specific.
Galera maintains a process vector inside each monitor for its internal graph creation. This process vector is 65K in size and there are two such vectors per monitor. That is 128K * 3 = 384K condition variables. These are not tracked to avoid hogging PERFORMANCE_SCHEMA
limits and sidelining of the main crucial information.
pxc_cluster_view
","text":"The pxc_cluster_view
- provides a unified view of the cluster. The table is in the Performance_Schema database.
DESCRIBE pxc_cluster_view;\n
This table has the following definition:
Expected output+-------------+--------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------------+--------------+------+-----+---------+-------+\n| HOST_NAME | char(64) | NO | | NULL | |\n| UUID | char(36) | NO | | NULL | |\n| STATUS | char(64) | NO | | NULL | |\n| LOCAL_INDEX | int unsigned | NO | | NULL | |\n| SEGMENT | int unsigned | NO | | NULL | |\n+-------------+--------------+------+-----+---------+-------+\n5 rows in set (0.00 sec)\n
To view the table, run the following query:
SELECT * FROM pxc_cluster_view;\n
Expected output +-----------+--------------------------------------+--------+-------------+---------+\n| HOST_NAME | UUID | STATUS | LOCAL_INDEX | SEGMENT |\n+-----------+--------------------------------------+--------+-------------+---------+\n| node1 | 22b9d47e-c215-11eb-81f7-7ed65a9d253b | SYNCED | 0 | 0 |\n| node3 | 29c51cf5-c216-11eb-9101-1ba3a28e377a | SYNCED | 1 | 0 |\n| node2 | 982cdb03-c215-11eb-9865-0ae076a59c5c | SYNCED | 2 | 0 |\n+-----------+--------------------------------------+--------+-------------+---------+\n3 rows in set (0.00 sec)\n
"},{"location":"performance-schema-instrumentation.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"proxysql-v2.html","title":"ProxySQL admin utilities","text":"The ProxySQL and ProxySQL admin utilities documentation provides information on installing and running ProxySQL 1.x.x or ProxySQL 2.x.x with the following ProxySQL admin utilities:
The ProxySQL Admin simplifies the configuration of Percona XtraDB Cluster nodes with ProxySQL.
The Percona Scheduler Admin tool can automatically perform a failover due to node failures, service degradation, or maintenance.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"quickstart-overview.html","title":"Quickstart Guide for Percona XtraDB Cluster","text":"Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve performance and scalability of your database environments supporting your critical business applications in the most demanding public, private, and hybrid cloud environments.
"},{"location":"quickstart-overview.html#install-percona-xtradb-cluster","title":"Install Percona XtraDB Cluster","text":"You can install Percona XtraDB Cluster using different methods.
Percona Server for MySQL (PS) is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior and optimized performance, greater scalability and availability, enhanced backups, increased visibility, and instrumentation. Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads.
Install Percona Server for MySQL.
"},{"location":"quickstart-overview.html#for-backups-and-restores","title":"For backups and restores","text":"Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL\u00ae that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.
Install Percona XtraBackup
"},{"location":"quickstart-overview.html#for-monitoring-and-management","title":"For Monitoring and Management","text":"Percona Monitoring and Management (PMM )monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.
Install PMM and connect your MySQL instances to it.
"},{"location":"quickstart-overview.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"restarting-nodes.html","title":"Restart the cluster nodes","text":"To restart a cluster node, shut down MySQL and restarting it. The node should leave the cluster (and the total vote count for quorum should decrement).
When it rejoins, the node should synchronize using IST. If the set of changes needed for IST are not found in the gcache
file on any other node in the entire cluster, then SST will be performed instead. Therefore, restarting cluster nodes for rolling configuration changes or software upgrades is rather simple from the cluster\u2019s perspective.
Note
If you restart a node with an invalid configuration change that prevents MySQL from loading, Galera will drop the node\u2019s state and force an SST for that node.
Note
If MySQL fails for any reason, it will not remove its PID file (which is by design deleted only on clean shutdown). Obviously server will not restart if existing PID file is present. So in case of encountered MySQL failure for any reason with the relevant records in log, PID file should be removed manually.
"},{"location":"restarting-nodes.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"secure-network.html","title":"Secure the network","text":"By default, anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. This could potentially let them query your data or get a complete copy of it.
In general, it is a good idea to disable all remote connections to Percona XtraDB Cluster nodes. If you require clients or nodes from outside of your network to connect, you can set up a VPN (virtual private network) for this purpose.
"},{"location":"secure-network.html#firewall-configuration","title":"Firewall configuration","text":"A firewall can let you filter Percona XtraDB Cluster traffic based on the clients and nodes that you trust.
By default, Percona XtraDB Cluster nodes use the following ports:
3306 is used for MySQL client connections and SST (State Snapshot Transfer) via mysqldump
.
4444 is used for SST via Percona XtraBackup.
4567 is used for write-set replication traffic (over TCP) and multicast replication (over TCP and UDP).
4568 is used for IST (Incremental State Transfer).
Ideally you want to make sure that these ports on each node are accessed only from trusted IP addresses. You can implement packet filtering using iptables
, firewalld
, pf
, or any other firewall of your choice.
To restrict access to Percona XtraDB Cluster ports using iptables
, you need to append new rules to the INPUT
chain on the filter table. In the following example, the trusted range of IP addresses is 192.168.0.1/24. It is assumed that only Percona XtraDB Cluster nodes and clients will connect from these IPs. To enable packet filtering, run the commands as root on each Percona XtraDB Cluster node.
# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 3306 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4444 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4567 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol tcp --match tcp --dport 4568 \\\n --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n --protocol udp --match udp --dport 4567 \\\n --source 192.168.0.1/24 --jump ACCEPT\n
Note
The last one opens port 4567 for multicast replication over UDP.
If the trusted IPs are not in sequence, you will need to run these commands for each address on each node. In this case, you can consider to open all ports between trusted hosts. This is a little bit less secure, but reduces the amount of commands. For example, if you have three Percona XtraDB Cluster nodes, you can run the following commands on each one:
# iptables --append INPUT --protocol tcp \\\n --source 64.57.102.34 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n --source 193.166.3.20 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n --source 193.125.4.10 --jump ACCEPT\n
Running the previous commands will allow TCP connections from the IP addresses of the other Percona XtraDB Cluster nodes.
Note
The changes that you make in iptables
are not persistent unless you save the packet filtering state:
# service iptables save\n
For distributions that use systemd
, you need to save the current packet filtering rules to the path where iptables
reads from when it starts. This path can vary by distribution, but it is usually in the /etc
directory. For example:
/etc/sysconfig/iptables
/etc/iptables/iptables.rules
Use iptables-save
to update the file:
# iptables-save > /etc/sysconfig/iptables\n
"},{"location":"secure-network.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"security-index.html","title":"Security basics","text":"By default, Percona XtraDB Cluster does not provide any protection for stored data. There are several considerations to take into account for securing Percona XtraDB Cluster:
Anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. You should consider restricting access using VPN and filter traffic on ports used by Percona XtraDB Cluster.
Unencrypted traffic can potentially be viewed by anyone monitoring your network. In Percona XtraDB Cluster 8.0 traffic encryption is enabled by default.
Percona XtraDB Cluster supports tablespace encryption to provide at-rest encryption for physical tablespace data files.
For more information, see the following blog post:
* [MySQL Data at Rest Encryption](https://www.percona.com/blog/2016/04/08/mysql-data-at-rest-encryption/)\n
"},{"location":"security-index.html#security-modules","title":"Security modules","text":"Most modern distributions include special security modules that control access to resources for users and applications. By default, these modules will most likely constrain communication between Percona XtraDB Cluster nodes.
The easiest solution is to disable or remove such programs, however, this is not recommended for production environments. You should instead create necessary security policies for Percona XtraDB Cluster.
"},{"location":"security-index.html#selinux","title":"SELinux","text":"SELinux is usually enabled by default in Red Hat Enterprise Linux and derivatives (including CentOS). SELinux helps protects the user\u2019s home directory data and provides the following:
Prevents unauthorized users from exploiting the system
Allows authorized users to access files
Used as a role-based access control system
To help with troubleshooting, during installation and configuration, you can set the mode to permissive
:
$ setenforce 0\n
Note
This action changes the mode only at runtime.
See also
For more information, see Enabling AppArmor
"},{"location":"security-index.html#apparmor","title":"AppArmor","text":"AppArmor is included in Debian and Ubuntu. Percona XtraDB Cluster contains several AppArmor profiles which allows for easier maintenance. To help with troubleshooting, during the installation and configuration, you can set the mode to complain
for mysqld
.
See also
For more information, see Enabling AppArmor
"},{"location":"security-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"selinux.html","title":"Enable SELinux","text":"SELinux helps protects the user\u2019s home directory data. SELinux provides the following:
Prevents unauthorized users from exploiting the system
Allows authorized users to access files
Used as a role-based access control system
For more information, see Percona Server and SELinux
Red Hat and CentOS distributes a policy module to extend the SELinux policy module for mysqld. We provide the following:
Extended module for pxc - an extension of the default module for mysqld distributed by the operating system.
wsrep-sst-xtrabackup-v2 - allows execution of the xtrabackup-v2 SST script
Modifications described in Percona Server and SELinux can also be applied for Percona XtraDB Cluster.
To adjust PXC-specific configurations, especially SST/IST ports, use the following procedures as root
:
To enable port 14567
instead of the default port 4567
:
Find the tag associated with the 4567
port:
$ semanage port -l | grep 4567\ntram_port_t tcp 4567\n
Run a command to find which rules grant mysqld access to the port:
$ sesearch -A -s mysqld_t -t tram_port_t -c tcp_socket\nFound 5 semantic av rules:\n allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n allow mysqld_t tram_port_t : tcp_socket { name_bind name_connect } ;\n allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n allow mysqld_t port_type : tcp_socket name_connect ;\n allow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n
You could tag port 14567 with the tramp_port_t
tag, but this tag may cause issues because port 14567 is not a TRAM port. Use the general mysqld_port_t
tag to add ports. For example, the following command adds port 14567 to the policy module with the mysqld_port_t
tag.
$ semanage port -a -t mysqld_port_t -p tcp 14567\n
You can verify the addition with the following command:
$ semanage port -l | grep 14567\nmysqld_port_t tcp 4568, 14567, 1186, 3306, 63132-63164\n
To see the tag associated with the 4444 port, run the following command:
$ semanage port -l | grep 4444\nkerberos_port_t tcp 88, 750, 4444\nkerberos_port_t udp 88, 750, 4444\n
To find the rules associated with kerberos_port_t
, run the following:
$ sesearch -A -s mysqld_t -t kerberos_port_t -c tcp_socket\nFound 9 semantic av rules:\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t rpc_port_type : tcp_socket name_bind ;\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t port_type : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket { recv_msg send_msg } ;\nallow nsswitch_domain reserved_port_type : tcp_socket name_connect ;\nallow mysqld_t reserved_port_type : tcp_socket name_connect ;\nallow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n
If you require port 14444 added, use the same method used to add port 14567.
If you must use a port that is already tagged, you can use either of the following ways:
Change the port tag to mysqld_port_t
Adjust the mysqld/sst script policy module to allow access to the given port. This method is better since all PXC-related adjustments are within the PXC-related policy modules.
pxc_encrypt_cluster_traffic
","text":"By default, the pxc_encrypt_cluster_traffic
is ON
, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory since that location is overwritten during the SST process.
Review How to set up the certificates. When SELinux is enabled, mysqld must have access to these certificates. The following items must be checked or considered:
Certificates inside /etc/mysql/certs/
directory must use the mysqld_etc_t
tag. This tag is applied automatically when the files are copied into the directory. When they are moved, the files retain their original context.
Certificates are accessible to the mysql user. The server certificates should be readable only by this user.
Certificates without the proper SELinux context can be restored with the following command:
$ restorecon -v /etc/mysql/certs/*\n
"},{"location":"selinux.html#enable-enforcing-mode-for-pxc","title":"Enable enforcing mode for PXC","text":"The process, mysqld, runs in permissive mode, by default, even if SELinux runs in enforcing mode:
$ semodule -l | grep permissive\npermissive_mysqld_t\npermissivedomains\n
After ensuring that the system journal does not list any issues, the administrator can remove the permissive mode for mysqld_t:
$ semanage permissive -d mysqld_t\n
See also
MariaDB 10.2 Galera Cluster with SELinux-enabled on CentOS 7
"},{"location":"selinux.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"set-up-3nodes-ec2.html","title":"How to set up a three-node cluster in EC2 environment","text":"This manual assumes you are running three EC2 instances with Red Hat Enterprise Linux 7 64-bit.
node1
: 10.93.46.58
node2
: 10.93.46.59
node3
: 10.93.46.60
Select instance types that support Enhanced Networking functionality. Good network performance critical for synchronous replication used in Percona XtraDB Cluster.
When adding instance storage volumes, choose the ones with good I/O performance:
instances with NVMe are preferred
GP2 SSD are preferred to GP3 SSD volume types due to I/O latency
over sized GP2 SSD are preferred to IO1 volume types due to cost
Attach Elastic network interfaces with static IPs or assign Elastic IP addresses to your instances. Thereby IP addresses are preserved on instances in case of reboot or restart. This is required as each Percona XtraDB Cluster member includes the wsrep_cluster_address
option in its configuration which points to other cluster members.
Launch instances in different availability zones to avoid cluster downtime in case one of the zones experiences power loss or network connectivity issues.
See also
Amazon EC2 Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
To set up Percona XtraDB Cluster:
Remove Percona XtraDB Cluster and Percona Server for MySQL packages for older versions:
Percona XtraDB Cluster 5.6, 5.7
Percona Server for MySQL 5.6, 5.7
Install Percona XtraDB Cluster as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS.
Create data directories:
$ mkdir -p /mnt/data\n$ mysql_install_db --datadir=/mnt/data --user=mysql\n
Stop the firewall service:
$ service iptables stop\n
Note
Alternatively, you can keep the firewall running, but open ports 3306, 4444, 4567, 4568. For example to open port 4567 on 192.168.0.1:
$ iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT\n
Create /etc/my.cnf
files:
Contents of the configuration file on the first node:
[mysqld]\ndatadir=/mnt/data\nuser=mysql\n\nbinlog_format=ROW\n\nwsrep_provider=/usr/lib64/libgalera_smm.so\nwsrep_cluster_address=gcomm://10.93.46.58,10.93.46.59,10.93.46.60\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node1\n\ninnodb_autoinc_lock_mode=2\n
For the second and third nodes change the following lines:
wsrep_node_name=node2\n\nwsrep_node_name=node3\n
Start and bootstrap Percona XtraDB Cluster on the first node:
[root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
Expected output 2014-01-30 11:52:35 23280 [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n
Start the second and third nodes:
$ sudo systemctl start mysql\n
Expected output ... [Note] WSREP: Flow-control interval: [28, 28]\n... [Note] WSREP: Restored state OPEN -> JOINED (2)\n... [Note] WSREP: Member 2 (percona1) synced with group.\n... [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n... [Note] WSREP: New cluster view: global state: 4827a206-876b-11e3-911c-3e6a77d54953:2, view# 7: Primary, number of nodes: 3, my index: 2, protocol version 2\n... [Note] WSREP: SST complete, seqno: 2\n... [Note] Plugin 'FEDERATED' is disabled.\n... [Note] InnoDB: The InnoDB memory heap is disabled\n... [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins\n... [Note] InnoDB: Compressed tables use zlib 1.2.3\n... [Note] InnoDB: Using Linux native AIO\n... [Note] InnoDB: Not using CPU crc32 instructions\n... [Note] InnoDB: Initializing buffer pool, size = 128.0M\n... [Note] InnoDB: Completed initialization of buffer pool\n... [Note] InnoDB: Highest supported file format is Barracuda.\n... [Note] InnoDB: 128 rollback segment(s) are active.\n... [Note] InnoDB: Waiting for purge to start\n... [Note] InnoDB: Percona XtraDB (http://www.percona.com) ... started; log sequence number 1626341\n... [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.\n... [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.\n... [Note] Server hostname (bind-address): '*'; port: 3306\n... [Note] IPv6 is available.\n... [Note] - '::' resolves to '::';\n... [Note] Server socket created on IP: '::'.\n... [Note] Event Scheduler: Loaded 0 events\n... [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n... [Note] WSREP: inited wsrep sidno 1\n... [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.\n... [Note] WSREP: REPL Protocols: 5 (3, 1)\n... [Note] WSREP: Assign initial position for certification: 2, protocol version: 3\n... [Note] WSREP: Service thread queue flushed.\n... [Note] WSREP: Synchronized with group, ready for connections\n
When all nodes are in SYNCED state, your cluster is ready.
You can try connecting to MySQL on any node and create a database:
$ mysql -uroot\n> CREATE DATABASE hello_tom;\n
The new database will be propagated to all nodes. If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"singlebox.html","title":"How to set up a three-node cluster on a single box","text":"This tutorial describes how to set up a 3-node cluster on a single physical box.
For the purposes of this tutorial, assume the following:
The local IP address is 192.168.2.21
.
Percona XtraDB Cluster is extracted from binary tarball into /usr/local/Percona-XtraDB-Cluster-8.0.x86_64
To set up the cluster:
Create three MySQL configuration files for the corresponding nodes:
/etc/my.4000.cnf
[mysqld]\nport = 4000\nsocket=/tmp/mysql.4000.sock\ndatadir=/data/bench/d1\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:5030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:4020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:4030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node4000\ninnodb_autoinc_lock_mode=2\n
/etc/my.5000.cnf
[mysqld]\nport = 5000\nsocket=/tmp/mysql.5000.sock\ndatadir=/data/bench/d2\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:5020\nwsrep_node_incoming_address=192.168.2.21\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:5030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node5000\ninnodb_autoinc_lock_mode=2\n
/etc/my.6000.cnf
[mysqld]\nport = 6000\nsocket=/tmp/mysql.6000.sock\ndatadir=/data/bench/d3\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:5030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:6020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:6030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node6000\ninnodb_autoinc_lock_mode=2\n
Create three data directories for the nodes:
/data/bench/d1
/data/bench/d2
/data/bench/d3
Start the first node using the following command (from the Percona XtraDB Cluster install directory):
$ bin/mysqld_safe --defaults-file=/etc/my.4000.cnf --wsrep-new-cluster\n
If the node starts correctly, you should see the following output:
Expected output111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)\n111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1\n
To check the ports, run the following command:
$ netstat -anp | grep mysqld\ntcp 0 0 192.168.2.21:4030 0.0.0.0:* LISTEN 21895/mysqld\ntcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 21895/mysqld\n
Start the second and third nodes:
$ bin/mysqld_safe --defaults-file=/etc/my.5000.cnf\n$ bin/mysqld_safe --defaults-file=/etc/my.6000.cnf\n
If the nodes start and join the cluster successful, you should see the following output:
111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)\n111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections\n
To check the cluster size, run the following command:
$ mysql -h127.0.0.1 -P6000 -e \"show global status like 'wsrep_cluster_size';\"\n
Expected output +--------------------+-------+\n| Variable_name | Value |\n+--------------------+-------+\n| wsrep_cluster_size | 3 |\n+--------------------+-------+\n
After that you can connect to any node and perform queries, which will be automatically synchronized with other nodes. For example, to create a database on the second node, you can run the following command:
$ mysql -h127.0.0.1 -P5000 -e \"CREATE DATABASE hello_peter\"\n
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"state-snapshot-transfer.html","title":"State snapshot transfer","text":"State Snapshot Transfer (SST) is a full data copy from one node (donor) to the joining node (joiner). It\u2019s used when a new node joins the cluster. In order to be synchronized with the cluster, the new node has to receive data from a node that is already part of the cluster.
Percona XtraDB Cluster enables via xtrabackup.
Xtrabackup SST uses backup locks, which means the Galera provider is not paused at all as with earlier. The SST method can be configured using the wsrep_sst_method
variable.
Note
If the gcs.sync_donor
variable is set to Yes
(default is No
), the whole cluster will get blocked if the donor is blocked by SST.
If there are no nodes available that can safely perform incremental state transfer (IST), the cluster defaults to SST.
If there are nodes available that can perform IST, the cluster prefers a local node over remote nodes to serve as the donor.
If there are no local nodes available that can perform IST, the cluster chooses a remote node to serve as the donor.
If there are several local and remote nodes that can perform IST, the cluster chooses the node with the highest seqno
to serve as the donor.
The default SST method is xtrabackup-v2
which uses Percona XtraBackup. This is the least blocking method that leverages backup locks. XtraBackup is run locally on the donor node.
The datadir needs to be specified in the server configuration file my.cnf
, otherwise the transfer process will fail.
Detailed information on this method is provided in Percona XtraBackup SST Configuration documentation.
"},{"location":"state-snapshot-transfer.html#sst-for-tables-with-tablespaces-that-are-not-in-the-data-directory","title":"SST for tables with tablespaces that are not in the data directory","text":"For example:
CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';\n
"},{"location":"state-snapshot-transfer.html#sst-using-percona-xtrabackup","title":"SST using Percona XtraBackup","text":"XtraBackup will restore the table to the same location on the joiner node. If the target directory does not exist, it will be created. If the target file already exists, an error will be returned, because XtraBackup cannot clear tablespaces not in the data directory.
"},{"location":"state-snapshot-transfer.html#other-reading","title":"Other reading","text":"State Snapshot Transfer Methods for MySQL
Xtrabackup SST configuration
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"strict-mode.html","title":"Percona XtraDB Cluster strict mode","text":"The Percona XtraDB Cluster (PXC) Strict Mode is designed to avoid the use of tech preview features and unsupported features in PXC. This mode performs a number of validations at startup and during runtime.
Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:
DISABLED
: Do not perform strict mode validations and run as normal.
PERMISSIVE
: If a validation fails, log a warning and continue running as normal.
ENFORCING
: If a validation fails during startup, halt the server and throw an error. If a validation fails during runtime, deny the operation and throw an error.
MASTER
: The same as ENFORCING
except that the validation of explicit table locking is not performed. This mode can be used with clusters in which write operations are isolated to a single node.
By default, PXC Strict Mode is set to ENFORCING
, except if the node is acting as a standalone server or the node is bootstrapping, then PXC Strict Mode defaults to DISABLED
.
It is recommended to keep PXC Strict Mode set to ENFORCING
, because in this case whenever Percona XtraDB Cluster encounters a tech preview feature or an unsupported operation, the server will deny it. This will force you to re-evaluate your Percona XtraDB Cluster configuration without risking the consistency of your data.
If you are planning to set PXC Strict Mode to anything else than ENFORCING
, you should be aware of the limitations and effects that this may have on data integrity. For more information, see Validations.
To set the mode, use the pxc_strict_mode
variable in the configuration file or the --pxc-strict-mode
option during mysqld
startup.
Note
It is better to start the server with the necessary mode (the default ENFORCING
is highly recommended). However, you can dynamically change it during runtime. For example, to set PXC Strict Mode to PERMISSIVE
, run the following command:
mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE;\n
Note
To further ensure data consistency, it is important to have all nodes in the cluster running with the same configuration, including the value of pxc_strict_mode
variable.
PXC Strict Mode validations are designed to ensure optimal operation for common cluster setups that do not require tech preview features and do not rely on operations not supported by Percona XtraDB Cluster.
Warning
If an unsupported operation is performed on a node with pxc_strict_mode
set to DISABLED
or PERMISSIVE
, it will not be validated on nodes where it is replicated to, even if the destination node has pxc_strict_mode
set to ENFORCING
.
This section describes the purpose and consequences of each validation.
"},{"location":"strict-mode.html#group-replication","title":"Group replication","text":"Group replication is a feature of MySQL that provides distributed state machine replication with strong coordination between servers. It is implemented as a plugin which, if activated, may conflict with PXC. Group replication cannot be activated to run alongside PXC. However, you can migrate to PXC from the environment that uses group replication.
For the strict mode to work correctly, make sure that the group replication plugin is not active. In fact, if pxc_strict_mode
is set to ENFORCING or MASTER, the server will stop with an error:
Error message with pxc_strict_mode
set to ENFORCING
or MASTER
Group replication cannot be used with PXC in strict mode.\n
If pxc_strict_mode
is set to DISABLED
you can use group replication at your own risk. Setting pxc_strict_mode
to PERMISSIVE
will result in a warning.
Warning message with pxc_strict_mode
set to PERMISSIVE
Using group replication with PXC is only supported for migration. Please\nmake sure that group replication is turned off once all data is migrated to PXC.\n
"},{"location":"strict-mode.html#storage-engine","title":"Storage engine","text":"Percona XtraDB Cluster currently supports replication only for tables that use a transactional storage engine (XtraDB or InnoDB). To ensure data consistency, the following statements should not be allowed for tables that use a non-transactional storage engine (MyISAM, MEMORY, CSV, and others):
Data manipulation statements that perform writing to table (for example, INSERT
, UPDATE
, DELETE
, etc.)
The following administrative statements: CHECK
, OPTIMIZE
, REPAIR
, and ANALYZE
TRUNCATE TABLE
and ALTER TABLE
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on an unsupported table.
ENFORCING
or MASTER
At startup, no validation is performed.
At runtime, any undesirable operation performed on an unsupported table is denied and an error is logged.
Note
Unsupported tables can be converted to use a supported storage engine.
"},{"location":"strict-mode.html#myisam-replication","title":"MyISAM replication","text":"Percona XtraDB Cluster provides support for replication of tables that use the MyISAM storage engine. The use of the MyISAM storage engine in a cluster is not recommended and if you use the storage engine, this is your own risk. Due to the non-transactional nature of MyISAM, the storage engine is not fully-supported in Percona XtraDB Cluster.
MyISAM replication is controlled using the wsrep_replicate_myisam
variable, which is set to OFF
by default. Due to its unreliability, MyISAM replication should not be enabled if you want to ensure data consistency.
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, you can set wsrep_replicate_myisam
to any value.
PERMISSIVE
At startup, if wsrep_replicate_myisam
is set to ON
, a warning is logged and startup continues.
At runtime, it is permitted to change wsrep_replicate_myisam
to any value, but if you set it to ON
, a warning is logged.
ENFORCING
or MASTER
At startup, if wsrep_replicate_myisam
is set to ON
, an error is logged and startup is aborted.
At runtime, any attempt to change wsrep_replicate_myisam
to ON
fails and an error is logged.
Note
The wsrep_replicate_myisam
variable controls replication for MyISAM tables, and this validation only checks whether it is allowed. Undesirable operations for MyISAM tables are restricted using the Storage engine validation.
Percona XtraDB Cluster supports only the default row-based binary logging format. In 8.0, setting the binlog_format variable to anything but ROW
at startup or runtime is not allowed regardless of the value of the pxc_strict_mode
variable.
Percona XtraDB Cluster cannot properly propagate certain write operations to tables that do not have primary keys defined. Undesirable operations include data manipulation statements that perform writing to table (especially DELETE
).
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on a table without an explicit primary key defined.
ENFORCING
or MASTER
At startup, no validation is performed.
At runtime, any undesirable operation performed on a table without an explicit primary key is denied and an error is logged.
"},{"location":"strict-mode.html#log-output","title":"Log output","text":"Percona XtraDB Cluster does not support tables in the MySQL database as the destination for log output. By default, log entries are written to file. This validation checks the value of the log_output variable.
Depending on the selected mode, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, you can set log_output
to any value.
PERMISSIVE
At startup, if log_output
is set only to TABLE
, a warning is logged and startup continues.
At runtime, it is permitted to change log_output
to any value, but if you set it only to TABLE
, a warning is logged.
ENFORCING
or MASTER
At startup, if log_output
is set only to TABLE
, an error is logged and startup is aborted.
At runtime, any attempt to change log_output
only to TABLE
fails and an error is logged.
Percona XtraDB Cluster provides only the tech-preview-level of support for explicit table locking operations, The following undesirable operations lead to explicit table locking and are covered by this validation:
LOCK TABLES
GET_LOCK()
and RELEASE_LOCK()
FLUSH TABLES <tables> WITH READ LOCK
Setting the SERIALIZABLE
transaction level
Depending on the selected mode, the following happens:
DISABLED
or MASTER
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed.
ENFORCING
At startup, no validation is performed.
At runtime, any undesirable operation is denied and an error is logged.
"},{"location":"strict-mode.html#auto-increment-lock-mode","title":"Auto-increment lock mode","text":"The lock mode for generating auto-increment values must be interleaved to ensure that each node generates a unique (but non-sequential) identifier.
This validation checks the value of the innodb_autoinc_lock_mode variable. By default, the variable is set to 1
(consecutive lock mode), but it should be set to 2
(interleaved lock mode).
Depending on the strict mode selected, the following happens:
DISABLED
At startup, no validation is performed.
PERMISSIVE
At startup, if innodb_autoinc_lock_mode
is not set to 2
, a warning is logged and startup continues.
ENFORCING
or MASTER
At startup, if innodb_autoinc_lock_mode
is not set to 2
, an error is logged and startup is aborted.
Note
This validation is not performed during runtime, because the innodb_autoinc_lock_mode
variable cannot be set dynamically.
With strict mode set to ENFORCING
, Percona XtraDB Cluster does not support statements, because they combine both schema and data changes. Note that tables in the SELECT clause should be present on all replication nodes.
With strict mode set to PERMISSIVE
or DISABLED
, CREATE TABLE \u2026 AS SELECT (CTAS) statements are replicated using the method to ensure consistency.
In Percona XtraDB Cluster 5.7, CREATE TABLE \u2026 AS SELECT (CTAS) statements were replicated using DML write-sets when strict mode was set to PERMISSIVE
or DISABLED
.
Important
MyISAM tables are created and loaded even if wsrep_replicate_myisam
equals to 1. Percona XtraDB Cluster does not recommend using the MyISAM storage engine. The support for MyISAM may be removed in a future release.
See also
MySQL Bug System: XID inconsistency on master-slave with CTAS https://bugs.mysql.com/bug.php?id=93948
Depending on the strict mode selected, the following happens:
Mode Behavior DISABLED At startup, no validation is performed. At runtime, all operations are permitted. PERMISSIVE At startup, no validation is performed. At runtime, all operations are permitted, but a warning is logged when a CREATE TABLE \u2026 AS SELECT (CTAS) operation is performed. ENFORCING At startup, no validation is performed. At runtime, any CTAS operation is denied and an error is logged.Important
Although CREATE TABLE \u2026 AS SELECT (CTAS) operations for temporary tables are permitted even in STRICT
mode, temporary tables should not be used as source tables in CREATE TABLE \u2026 AS SELECT (CTAS) operations due to the fact that temporary tables are not present on all nodes.
If node-1
has a temporary and a non-temporary table with the same name, CREATE TABLE \u2026 AS SELECT (CTAS) on node-1
will use temporary and CREATE TABLE \u2026 AS SELECT (CTAS) on node-2
will use the non-temporary table resulting in a data level inconsistency.
DISCARD TABLESPACE
and IMPORT TABLESPACE
are not replicated using TOI. This can lead to data inconsistency if executed on only one node.
Depending on the strict mode selected, the following happens:
DISABLED
At startup, no validation is performed.
At runtime, all operations are permitted.
PERMISSIVE
At startup, no validation is performed.
At runtime, all operations are permitted, but a warning is logged when you discard or import a tablespace.
ENFORCING
At startup, no validation is performed.
At runtime, discarding or importing a tablespace is denied and an error is logged.
"},{"location":"strict-mode.html#major-version-check","title":"Major version check","text":"This validation checks that the protocol version is the same as the server major version. This validation protects the cluster against writes attempted on already upgraded nodes.
Expected outputERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of multiple major versions while accepting write workload with pxc_strict_mode = ENFORCING or MASTER\n
"},{"location":"strict-mode.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"tarball.html","title":"Install Percona XtraDB Cluster from Binary Tarball","text":"Percona provides generic tarballs with all required files and binaries for manual installation.
You can download the appropriate tarball package from https://www.percona.com/downloads/Percona-XtraDB-Cluster-80
"},{"location":"tarball.html#version-updates","title":"Version updates","text":"Starting with Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section lists only full or minimal tar files. Each tarball file replaces the multiple tar file listing used in earlier versions and supports all distributions.
Important
Starting with Percona XtraDB Cluster 8.0.21, Percona does not provide a tarball for RHEL 6/CentOS 6 (glibc2.12).
The version number in the tarball name must be substituted with the appropriate version number for your system. To indicate that such a substitution is needed in statements, we use <version-number>
.
For installations before Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section contains multiple tarballs based on the operating system names:
Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.bionic.tar.gz\nPercona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.buster.tar.gz\n...\n
For example, you can use curl
as follows:
$ curl -O https://downloads.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-8.0.27/binary/tarball/Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz\n
Check your system to make sure the packages that the PXC version requires are installed.
"},{"location":"tarball.html#for-debian-or-ubuntu","title":"For Debian or Ubuntu:","text":"$ sudo apt-get install -y \\\nsocat libdbd-mysql-perl \\\nlibaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \\\nlibgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1\n
"},{"location":"tarball.html#for-red-hat-enterprise-linux-or-centos","title":"For Red Hat Enterprise Linux or CentOS:","text":"$ sudo yum install -y openssl socat \\\nprocps-ng chkconfig procps-ng coreutils shadow-utils \\\n
"},{"location":"tarball.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"telemetry.html","title":"Telemetry on Percona XtraDB Cluster","text":"Percona telemetry fills in the gaps in our understanding of how you use Percona XtraDB Cluster to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer to not share this information.
"},{"location":"telemetry.html#what-information-is-collected","title":"What information is collected","text":"At this time, telemetry is added only to the Percona packages and Docker images. Percona XtraDB Cluster collects only information about the installation environment. Future releases may add additional metrics.
Be assured that access to this raw data is rigorously controlled. Percona does not collect personal data. All data is anonymous and cannot be traced to a specific user. To learn more about our privacy practices, read our Percona Privacy statement.
An example of the data collected is the following:
[{\"id\" : \"c416c3ee-48cd-471c-9733-37c2886f8231\",\n\"product_family\" : \"PRODUCT_FAMILY_PXC\",\n\"instanceId\" : \"6aef422e-56a7-4530-af9d-94cc02198343\",\n\"createTime\" : \"2023-10-16T10:46:23Z\",\n\"metrics\":\n[{\"key\" : \"deployment\",\"value\" : \"PACKAGE\"},\n{\"key\" : \"pillar_version\",\"value\" : \"8.0.34-26\"},\n{\"key\" : \"OS\",\"value\" : \"Oracle Linux Server 8.8\"},\n{\"key\" : \"hardware_arch\",\"value\" : \"x86_64 x86_64\"}]}]\n
"},{"location":"telemetry.html#disable-telemetry","title":"Disable telemetry","text":"Starting with Percona XtraDB Cluster 8.0.34-26-1, telemetry is enabled by default. If you decide not to send usage data to Percona, you can set the PERCONA_TELEMETRY_DISABLE=1
environment variable, either for the root user or system-wide in the operating system, before the installation process.
Add the environment variable before the install process.
$ sudo PERCONA_TELEMETRY_DISABLE=1 apt install percona-xtradb-cluster\n
Add the environment variable before the install process.
$ sudo PERCONA_TELEMETRY_DISABLE=1 yum install percona-xtradb-cluster\n
Add the environment variable when running a command in a new container.
$ docker run -d -e MYSQL_ROOT_PASSWORD=test1234# -e PERCONA_TELEMETRY_DISABLE=1 -e CLUSTER_NAME=pxc-cluster1 --name=pxc-node1 percona/percona-xtradb-cluster:8.0\n
"},{"location":"telemetry.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"threading-model.html","title":"Percona XtraDB Cluster threading model","text":"Percona XtraDB Cluster creates a set of threads to service its operations, which are not related to existing MySQL threads. There are three main groups of threads:
"},{"location":"threading-model.html#applier-threads","title":"Applier threads","text":"Applier threads apply write-sets that the node receives from other nodes. Write messages are directed through gcv_recv_thread
.
The number of applier threads is controlled using the wsrep_slave_threads
variable or the wsrep_applier_threads
variable. The wsrep_slave_threads
variable was deprecated in the Percona XtraDB Cluster 8.0.26-16 release. The default value is 1
, which means at least one wsrep applier thread exists to process the request.
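For example, a minimal sketch of raising the applier thread count to 4 (the value is an assumption; size it to your workload):
mysql> SET GLOBAL wsrep_applier_threads=4;\n
The same setting can also be placed in the [mysqld] section of the MySQL configuration file so that it persists across restarts.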
Applier threads wait for an event; once an event arrives, the thread applies it using the normal replica apply routine path and the relay log info apply path, with wsrep customizations. These threads are similar to replica worker threads (but not exactly the same).
Coordination is achieved using Apply and Commit Monitor. A transaction passes through two important states: APPLY
and COMMIT
. Every transaction registers itself with an apply monitor, where its apply order is defined. So all transactions with apply order sequence number (seqno
) less than this transaction\u2019s sequence number are applied before this transaction is applied. The same is done for commit as well (last_left >= trx_.depends_seqno()
).
There is only one rollback thread to perform rollbacks in case of conflicts.
Transactions executed in parallel can conflict and may need to roll back.
Applier transactions always take priority over local transactions. This is natural, as applier transactions have been accepted by the cluster, and some of the nodes may have already applied them. Local conflicting transactions still have a window to rollback.
All the transactions that need to be rolled back are added to the rollback queue, and the rollback thread is notified. The rollback thread then iterates over the queue and performs rollback operations.
If a transaction is active on a node and the node receives a write-set from the cluster group that conflicts with that local active transaction, the local transaction is always treated as the victim transaction and rolled back.
Transactions can be in a commit state or an execution stage when the conflict arises. Local transactions in the execution stage are forcibly killed so that the waiting applier transaction is allowed to proceed. Local transactions in the commit stage fail with a certification error.
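From the client's point of view, such a brute-force abort of the local transaction typically surfaces as a standard deadlock error, for example:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction\n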
"},{"location":"threading-model.html#other-threads","title":"Other threads","text":""},{"location":"threading-model.html#service-thread","title":"Service thread","text":"This thread is created during boot-up and used to perform auxiliary services. It has two main functions:
It releases the GCache buffer after the cached write-set is purged up to the said level.
It notifies the cluster group that the respective node has committed a transaction up to this level. Each node maintains some basic status info about other nodes in the cluster. On receiving the message, the information is updated in this local metadata.
The gcs_recv_thread
thread is the first one to see all the messages received in a group.
It will try to assign actions against each message it receives. It adds these messages to a central FIFO queue, which is then processed by the Applier threads. Messages can include different operations like state change, configuration update, flow-control, and so on.
One important action is processing a write-set, which actually is applying transactions to database objects.
"},{"location":"threading-model.html#gcomm-connection-thread","title":"Gcomm connection thread","text":"The gcomm connection thread GCommConn::run_fn
is used to co-ordinate the low-level group communication activity. Think of it as a black box meant for communication.
Besides the above, some threads are created on an as-needed basis. SST creates threads for the donor and the joiner (which eventually forks out a child process to host the needed SST script), IST creates receiver and async sender threads, and PageStore creates a background thread for removing the files that were created.
If the checksum is enabled and the replicated write-set is big enough, the checksum is done as part of a separate thread.
"},{"location":"threading-model.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"trademark-policy.html","title":"Trademark policy","text":"This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.
Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.
Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.
First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.
Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.
Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.
Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.
Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.
In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.
In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.
"},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"upgrade-from-backup.html","title":"Restore a 5.7 backup to an 8.0 cluster","text":"Use Percona XtraBackup to back up the source server data and restore the data to a target server, and then upgrade the server to a different version of Percona XtraDB Cluster.
Downgrading is not supported.
"},{"location":"upgrade-from-backup.html#restore-a-database-with-a-different-server-version","title":"Restore a database with a different server version","text":"Review Upgrade Percona XtraDB cluster.
Upgrade the nodes one at a time. The primary node should be the last node to be upgraded. The following steps are required on each node.
Back up the data on the source server.
Install the same database version as the source server on the target server.
Restore with a copy-back
operation on the target server.
Start the database server on the target server.
Do a slow shutdown of the database server with the SET GLOBAL innodb_fast_shutdown=0
statement. This shutdown type flushes InnoDB operations before completing and may take longer; see the example after this procedure.
Install the new database server version on the target server.
Start the new database server version on the restored data directory.
Perform any other upgrade steps as necessary.
To ensure the upgrade was successful, check the data.
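A minimal sketch of the slow-shutdown step from the procedure above (the mysql client session and the systemd service name are assumptions; adjust them to your environment):
mysql> SET GLOBAL innodb_fast_shutdown=0;\n
$ sudo systemctl stop mysql\n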
"},{"location":"upgrade-from-backup.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"upgrade-guide.html","title":"Upgrade Percona XtraDB Cluster","text":"The following documents contain details about relevant changes in the 8.0 series of MySQL and Percona Server for MySQL. Make sure you deal with any incompatible features and variables mentioned in these documents when upgrading to Percona XtraDB Cluster 8.0.
Upgrading MySQL
Upgrading from MySQL 5.7 to 8.0
The pxc_encrypt_cluster_traffic
variable, which enables traffic encryption, is set to ON
by default in Percona XtraDB Cluster 8.0.
If you do not configure a node accordingly (each node in your cluster must use the same SSL certificates), or if you try to join a cluster running PXC 5.7 with unencrypted cluster traffic, the node will not be able to join, resulting in an error.
The error message... [ERROR] ... [Galera] handshake with remote endpoint ...\nThis error is often caused by SSL issues. ...\n
See also
sections Encrypting PXC Traffic, Configuring Nodes for Write-Set Replication
"},{"location":"upgrade-guide.html#not-recommended-to-mix-pxc-57-nodes-with-pxc-80-nodes","title":"Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes","text":"Shut down the cluster and upgrade each node to PXC 8.0. It is important that you make backups before attempting an upgrade.
"},{"location":"upgrade-guide.html#pxc-strict-mode-is-enabled-by-default","title":"PXC strict mode is enabled by default","text":"Percona XtraDB Cluster in 8.0 runs with PXC Strict Mode enabled by default. This will deny any unsupported operations and may halt the server if a strict mode validation fails. It is recommended to first start the node with the pxc_strict_mode
variable set to PERMISSIVE
in the MySQL configuration file.
All configuration settings are stored in the default MySQL configuration file:
Path on Debian and Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf
Path on Red Hat and CentOS: /etc/my.cnf
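For example, a minimal sketch of the relevant entry in that file (the [mysqld] group is an assumption based on the standard MySQL configuration layout):
[mysqld]\npxc_strict_mode=PERMISSIVE\n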
After you check the log for any tech preview features or unsupported features and you have fixed any of the encountered incompatibilities, set the variable back to ENFORCING
at run time:
mysql> SET pxc_strict_mode=ENFORCING;\n
Restarting the node with the updated configuration file also sets the variable to ENFORCING
.
All configuration settings are stored in the default MySQL configuration file:
Path on Debian and Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf
Path on Red Hat and CentOS: /etc/my.cnf
Before you start the upgrade, move your custom settings from /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf
(on Debian and Ubuntu) or from /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
(on Red Hat and CentOS) to the new location accordingly.
Note
If you have moved your my.cnf file to a different location and added a symlink to /etc/my.cnf
, the RPM package manager, when upgrading, can delete the symlink and put a default my.cnf file in /etc/.
In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password
. The ProxySQL option \u2013syncusers will not work if the Percona XtraDB Cluster user is created using caching_sha2_password
. Use the mysql_native_password
authentication plugin in these cases.
Be sure you are running on the latest 5.7 version before you upgrade to 8.0.
"},{"location":"upgrade-guide.html#mysql_upgrade-is-part-of-sst","title":"mysql_upgrade is part of SST","text":"mysql_upgrade is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.
"},{"location":"upgrade-guide.html#major-upgrade-scenarios","title":"Major upgrade scenarios","text":"Upgrading PXC from 5.7 to 8.0 may have slightly different strategies depending on the configuration and workload on your PXC cluster.
Note that the new default value of pxc-encrypt-cluster-traffic
(set to ON versus OFF in PXC 5.7) requires additional care. You cannot join a 5.7 node to a PXC 8.0 cluster unless the node has traffic encryption enabled, because the cluster cannot have some nodes with traffic encryption enabled and other nodes with traffic encryption disabled. For more information, see Traffic encryption is enabled by default.
If there is no active parallel workload or the cluster has read-only workload while upgrading the nodes, complete the following procedure for each node in the cluster:
Shut down one of the 5.7 cluster nodes.
Remove 5.7 PXC packages without removing the data-directory.
Install PXC 8.0 packages.
Restart the mysqld service.
Important
Before upgrading, make sure your application can work with a reduced cluster size. If the cluster operates with an even number of nodes, the cluster may have split-brain.
This upgrade flow auto-detects the presence of the 5.7 data directory and triggers the upgrade as part of the node bootup process. The data directory is upgraded to be compatible with PXC 8.0. Then the node joins the cluster and enters the Synced state. The 3-node cluster is restored with 2 nodes running PXC 5.7 and 1 node running PXC 8.0.
Note
Since SST is not involved, SST based auto-upgrade flow is not started.
PXC 8.0 uses Galera 4 while PXC 5.7 uses Galera 3. The cluster will continue to use protocol version 3, as used in Galera 3, effectively limiting some of the functionality. With all nodes upgraded to version 8.0, protocol version 4 is applied.
Tip
The protocol version is stored in the protocol_version
column of the wsrep_cluster
table.
mysql> USE mysql;\n
mysql> SELECT protocol_version from wsrep_cluster;\n
The example of the output is the following:
+------------------+\n| protocol_version |\n+------------------+\n| 4 |\n+------------------+\n1 row in set (0.00 sec)\n
As soon as the last 5.7 node shuts down, the configuration of the remaining two nodes is updated to use protocol version 4. A new upgraded node will then join using protocol version 4 and the whole cluster will maintain protocol version 4 enabling the support for additional Galera 4 facilities.
It may take longer to join the last upgraded node since it will involve IST to obtain the configuration changes.
Note
Starting from Galera 4, the configuration changes are cached to gcache
and the configuration changes are donated as part of IST or SST to help build the certification queue on the JOINING node. Other nodes (say n2 and n3), already using protocol version 4, donate the configuration changes when the JOINER node is booted.
The situation was different for the previous and penultimate nodes since the donation of the configuration changes is not supported by protocol version 3 that they used.
With IST involved on joining the last node, the smart IST flow is triggered to take care of the upgrade even before MySQL starts to look at the data directory.
Important
It is not recommended to restart the last node without upgrading it.
"},{"location":"upgrade-guide.html#scenario-upgrade-from-pxc-56-to-pxc-80","title":"Scenario: Upgrade from PXC 5.6 to PXC 8.0","text":"First, upgrade PXC from 5.6 to the latest version of PXC 5.7. Then proceed with the upgrade using the procedure described in Scenario: No active parallel workload or with read-only workload.
"},{"location":"upgrade-guide.html#minor-upgrade","title":"Minor upgrade","text":"To upgrade the cluster, follow these steps for each node:
Make sure that all nodes are synchronized.
Stop the mysql
service:
$ sudo service mysql stop\n
Upgrade Percona XtraDB Cluster and Percona XtraBackup packages. For more information, see Installing Percona XtraDB Cluster.
Back up grastate.dat
, so that you can restore it if it is corrupted or zeroed out due to network issue.
Now, start the cluster node with the 8.0 packages installed. PXC will upgrade the data directory as needed, either as part of the startup process or a state transfer (IST/SST).
In most cases, starting the mysql
service should run the node with your previous configuration. For more information, see Adding Nodes to Cluster.
$ sudo service mysql start\n
Note
On CentOS, the /etc/my.cnf configuration file is renamed to my.cnf.rpmsave
. Make sure to rename it back before joining the upgraded node back to the cluster.
PXC Strict Mode is enabled by default, which may result in denying any unsupported operations and may halt the server. For more information, see pxc-strict-mode is enabled by default.
pxc-encrypt-cluster-traffic
is enabled by default. You need to configure each node accordingly and avoid joining a cluster with unencrypted cluster traffic. For more information, see Traffic encryption is enabled by default.
Repeat this procedure for the next node in the cluster until you upgrade all nodes.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"verify-replication.html","title":"Verify replication","text":"Use the following procedure to verify replication by creating a new database on the second node, creating a table for that database on the third node, and adding some records to the table on the first node.
Create a new database on the second node:
mysql@pxc2> CREATE DATABASE percona;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Switch to a newly created database:
mysql@pxc3> USE percona;\n
The following output confirms that a database has been changed:
Expected outputDatabase changed\n
Create a table on the third node:
mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n
The following output confirms that a table has been created:
Expected outputQuery OK, 0 rows affected (0.05 sec)\n
Insert records on the first node:
mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');\n
The following output confirms that the records have been inserted:
Expected outputQuery OK, 1 row affected (0.02 sec)\n
Retrieve rows from that table on the second node:
mysql@pxc2> SELECT * FROM percona.example;\n
The following output confirms that all the rows have been retrieved:
Expected output+---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n| 1 | percona1 |\n+---------+-----------+\n1 row in set (0.00 sec)\n
Consider installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries. This is the recommended high-availability solution for Percona XtraDB Cluster. For more information, see Load balancing with ProxySQL.
Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"virtual-sandbox.html","title":"Set up a testing environment with ProxySQL","text":"This section describes how to set up Percona XtraDB Cluster in a virtualized testing environment based on ProxySQL. To test the cluster, we will use the sysbench benchmark tool.
It is assumed that each PXC node is installed on an Amazon EC2 micro instance running CentOS 7. However, the information in this section should apply if you use another virtualization technology (for example, VirtualBox) with any Linux distribution.
Each of the three Percona XtraDB Cluster nodes is installed on a separate virtual machine. One more virtual machine has ProxySQL, which redirects requests to the nodes.
Tip
Running ProxySQL on an application server, instead of having it as a dedicated entity, removes the unnecessary extra network roundtrip, because the load balancing layer in Percona XtraDB Cluster scales well with application servers.
Install Percona XtraDB Cluster on three cluster nodes, as described in Configuring Percona XtraDB Cluster on CentOS.
On the client node, install ProxySQL and sysbench
:
$ yum -y install proxysql2 sysbench\n
When all cluster nodes are started, configure ProxySQL using the admin interface.
Tip
To connect to the ProxySQL admin interface, you need a mysql
client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql
client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally.
To connect to the admin interface, use the credentials, host name and port specified in the global variables.
Warning
Do not use default credentials in production!
The following example shows how to connect to the ProxySQL admin interface with default credentials (assuming that ProxySQL IP is 192.168.70.74):
root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
To see the ProxySQL databases and tables use the SHOW DATABASES
and SHOW TABLES
commands:
mysql> SHOW DATABASES;\n
The following output shows the list of the ProxySQL databases:
Expected output+-----+---------------+-------------------------------------+\n| seq | name | file |\n+-----+---------------+-------------------------------------+\n| 0 | main | |\n| 2 | disk | /var/lib/proxysql/proxysql.db |\n| 3 | stats | |\n| 4 | monitor | |\n| 5 | stats_monitor | /var/lib/proxysql/proxysql_stats.db |\n+-----+---------------+-------------------------------------+\n5 rows in set (0.00 sec)\n
mysql> SHOW TABLES;\n
The following output shows the list of tables:
Expected output+----------------------------------------------------+\n| tables |\n+----------------------------------------------------+\n| global_variables |\n| mysql_aws_aurora_hostgroups |\n| mysql_collations |\n| mysql_firewall_whitelist_rules |\n| mysql_firewall_whitelist_sqli_fingerprints |\n| mysql_firewall_whitelist_users |\n| mysql_galera_hostgroups |\n| mysql_group_replication_hostgroups |\n| mysql_query_rules |\n| mysql_query_rules_fast_routing |\n| mysql_replication_hostgroups |\n| mysql_servers |\n| mysql_users |\n| proxysql_servers |\n| restapi_routes |\n| runtime_checksums_values |\n| runtime_global_variables |\n| runtime_mysql_aws_aurora_hostgroups |\n| runtime_mysql_firewall_whitelist_rules |\n| runtime_mysql_firewall_whitelist_sqli_fingerprints |\n| runtime_mysql_firewall_whitelist_users |\n| runtime_mysql_galera_hostgroups |\n| runtime_mysql_group_replication_hostgroups |\n| runtime_mysql_query_rules |\n| runtime_mysql_query_rules_fast_routing |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers |\n| runtime_mysql_users |\n| runtime_proxysql_servers |\n| runtime_restapi_routes |\n| runtime_scheduler |\n| scheduler |\n+----------------------------------------------------+\n32 rows in set (0.00 sec)\n
For more information about admin databases and tables, see Admin Tables
Note
ProxySQL has 3 areas where the configuration can reside:
MEMORY (your current working place)
RUNTIME (the production settings)
DISK (durable configuration, saved inside an SQLITE database)
When you change a parameter, you change it in MEMORY area. That is done by design to allow you to test the changes before pushing to production (RUNTIME), or saving them to disk.
To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.71',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.72',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.73',10,3306,1000);\n
ProxySQL v2.0 supports PXC natively. It uses the concept of hostgroups (see the value of hostgroup_id in the mysql_servers table) to group cluster nodes to balance the load in a cluster by routing different types of traffic to different groups.
This information is stored in the [runtime_]mysql_galera_hostgroups table.
Columns of the [runtime_]mysql_galera_hostgroups
table
1
(Yes) to indicate that this configuration should be used; 0
(No) - otherwise max_writers The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is 1
. The value in this column may not exceed the total number of nodes. writer_is_also_reader 1
(Yes) to keep the given node in both reader_hostgroup
and writer_hostgroup
. 0
(No) to remove the given node from reader_hostgroup
if it already belongs to writer_hostgroup
. max_transactions_behind As soon as the value of wsrep_local_recv_queue
exceeds the number stored in this column the given node is set to OFFLINE
. Set the value carefully based on the behaviour of the node. comment Helpful extra information about the given node Make sure that the variable mysql-server_version refers to the correct version. For Percona XtraDB Cluster 8.0, set it to 8.0 accordingly:
mysql> UPDATE GLOBAL_VARIABLES\nSET variable_value='8.0'\nWHERE variable_name='mysql-server_version';\n\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n
See also
Percona Blogpost: ProxySQL Native Support for Percona XtraDB Cluster (PXC) https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/
Given the nodes from the mysql_servers table, you may set up the hostgroups as follows:
mysql> INSERT INTO mysql_galera_hostgroups (\nwriter_hostgroup, backup_writer_hostgroup, reader_hostgroup,\noffline_hostgroup, active, max_writers, writer_is_also_reader,\nmax_transactions_behind)\nVALUES (10, 12, 11, 13, 1, 1, 2, 100);\n
This command configures ProxySQL as follows:
WRITER hostgroup
hostgroup `10`\n
READER hostgroup
hostgroup `11`\n
BACKUP WRITER hostgroup
hostgroup `12`\n
OFFLINE hostgroup
hostgroup `13`\n
Set up ProxySQL query rules for read/write split using the mysql_query_rules table:
mysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',10,1,'^SELECT.*FOR UPDATE',1);\n\nmysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',11,1,'^SELECT ',1);\n\nmysql> LOAD MYSQL QUERY RULES TO RUNTIME;\nmysql> SAVE MYSQL QUERY RULES TO DISK;\n\nmysql> select hostgroup_id,hostname,port,status,weight from runtime_mysql_servers;\n
Expected output +--------------+----------------+------+--------+--------+\n| hostgroup_id | hostname | port | status | weight |\n+--------------+----------------+------+--------+--------+\n| 10 | 192.168.70.73 | 3306 | ONLINE | 1000 |\n| 11 | 192.168.70.72 | 3306 | ONLINE | 1000 |\n| 11 | 192.168.70.71 | 3306 | ONLINE | 1000 |\n| 12 | 192.168.70.72 | 3306 | ONLINE | 1000 |\n| 12 | 192.168.70.71 | 3306 | ONLINE | 1000 |\n+--------------+----------------+------+--------+--------+\n5 rows in set (0.00 sec)\n
See also
ProxySQL Blog: MySQL read/write split with ProxySQL https://proxysql.com/blog/configure-read-write-split/ ProxySQL Documentation: mysql_query_rules
table https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules
Notice that all servers were inserted into the mysql_servers table with hostgroup_id set to 10, which is the WRITER hostgroup (see the value of the hostgroup_id column):
mysql> SELECT * FROM mysql_servers;\n
Expected output +--------------+---------------+------+--------+ +---------+\n| hostgroup_id | hostname | port | weight | ... | comment |\n+--------------+---------------+------+--------+ +---------+\n| 10 | 192.168.70.71 | 3306 | 1000 | | |\n| 10 | 192.168.70.72 | 3306 | 1000 | | |\n| 10 | 192.168.70.73 | 3306 | 1000 | | |\n+--------------+---------------+------+--------+ +---------+\n3 rows in set (0.00 sec)\n
This configuration implies that ProxySQL elects the writer automatically. If the elected writer goes offline, ProxySQL assigns another (failover). You might tweak this mechanism by assigning a higher weight to a selected node. ProxySQL directs all write requests to this node. However, it also becomes the most utilized node for read requests. In case of a failback (a node is put back online), the node with the highest weight is automatically elected for write requests.
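A minimal sketch of assigning a higher weight to a preferred writer (the IP address and the weight value are assumptions), followed by loading and persisting the change:
mysql> UPDATE mysql_servers SET weight=10000 WHERE hostname='192.168.70.71';\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n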
"},{"location":"virtual-sandbox.html#creating-a-proxysql-monitoring-user","title":"Creating a ProxySQL monitoring user","text":"To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE
privilege on any node in the cluster and configure the user in ProxySQL.
The following example shows how to add a monitoring user on Node 2:
mysql> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'ProxySQLPa55';\nmysql> GRANT USAGE ON *.* TO 'proxysql'@'%';\n
The following example shows how to configure this user on the ProxySQL node:
mysql> UPDATE global_variables SET variable_value='proxysql'\nWHERE variable_name='mysql-monitor_username';\n\nmysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\nWHERE variable_name='mysql-monitor_password';\n
"},{"location":"virtual-sandbox.html#saving-and-loading-the-configuration","title":"Saving and loading the configuration","text":"To load this configuration at runtime, issue the LOAD
command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue the SAVE
command.
mysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql> SAVE MYSQL VARIABLES TO DISK;\n
To ensure that monitoring is enabled, check the monitoring logs:
mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+----------------------+---------------+\n| hostname | port | time_start_us | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695 | NULL |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779 | NULL |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627 | NULL |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557 | NULL |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737 | NULL |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447 | NULL |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
Expected output +---------------+------+------------------+-------------------+------------+\n| hostname | port | time_start_us | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948 | NULL |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803 | NULL |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711 | NULL |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783 | NULL |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631 | NULL |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542 | NULL |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n
The previous examples show that ProxySQL is able to connect and ping the nodes you added.
To enable monitoring of these nodes, load them at runtime:
mysql> LOAD MYSQL SERVERS TO RUNTIME;\n
"},{"location":"virtual-sandbox.html#creating-proxysql-client-user","title":"Creating ProxySQL Client User","text":"ProxySQL must have users that can access backend nodes to manage connections.
To add a user, insert credentials into mysql_users
table:
mysql> INSERT INTO mysql_users (username,password) VALUES ('appuser','$3kRetp@$sW0rd');\n
The example of the output is the following:
Expected outputQuery OK, 1 row affected (0.00 sec)\n
Note
ProxySQL currently doesn\u2019t encrypt passwords.
See also
More information about password encryption in ProxySQL
Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):
mysql> LOAD MYSQL USERS TO RUNTIME;\nmysql> SAVE MYSQL USERS TO DISK;\n
To confirm that the user has been set up correctly, you can try to log in:
root@proxysql:~# mysql -u appuser -p$3kRetp@$sW0rd -h 127.0.0.1 -P 6033\n
Expected output Welcome to the MySQL monitor. Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n
To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:
mysql> CREATE USER 'appuser'@'192.168.70.74'\nIDENTIFIED WITH mysql_native_password by '$3kRetp@$sW0rd';\n\nmysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74';\n
"},{"location":"virtual-sandbox.html#testing-the-cluster-with-the-sysbench-benchmark-tool","title":"Testing the cluster with the sysbench benchmark tool","text":"After you set up Percona XtraDB Cluster in your testing environment, you can test it using the sysbench
benchmarking tool.
Create a database (sysbenchdb in this example; you can use a different name):
mysql> CREATE DATABASE sysbenchdb;\n
The following output confirms that a new database has been created:
Expected outputQuery OK, 1 row affected (0.01 sec)\n
Populate the table with data for the benchmark. Note that you should pass the database you have created as the value of the --mysql-db
parameter, and the name of the user who has full access to this database as the value of the --mysql-user
parameter:
$ sysbench /usr/share/sysbench/oltp_insert.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--table-size=1000 prepare\n
Run the benchmark on port 6033:
$ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--skip-trx=true --table-size=1000 --time=100 --report-interval=10 run\n
Related sections and additional reading
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-files-index.html","title":"Index of files created by PXC","text":"GRA_\\*.log
These files contain binlog events in ROW format representing the failed transaction. That means that the replica thread was not able to apply one of the transactions. For each of those files, a corresponding warning or error message is present in the mysql error log file. Those errors can also be false positives like a bad DDL
statement (dropping a table that doesn\u2019t exist, for example) and therefore nothing to worry about. However, it\u2019s always recommended to check these logs to understand what is happening.
To be able to analyze these files, the binlog header needs to be added to the log file. To create the GRA_HEADER
file you need an instance running with binlog_checksum
set to NONE
and extract the first 123 bytes (the binlog header) from the binlog file:
$ head -c 123 mysqld-bin.000001 > GRA_HEADER\n$ cat GRA_HEADER > /var/lib/mysql/GRA_1_2-bin.log\n$ cat /var/lib/mysql/GRA_1_2.log >> /var/lib/mysql/GRA_1_2-bin.log\n$ mysqlbinlog -vvv /var/lib/mysql/GRA_1_2-bin.log\n\n/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;\n/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;\nDELIMITER /*!*/;\n# at 4\n#160809 16:04:05 server id 3 end_log_pos 123 Start: binlog v 4, server v 8.0-log created 160809 16:04:05 at startup\n# Warning: this binlog is either in use or was not closed properly.\nROLLBACK/*!*/;\nBINLOG '\nnbGpVw8DAAAAdwAAAHsAAAABAAQANS43LjEyLTVyYzEtbG9nAAAAAAAAAAAAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAACdsalXEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA\nALfQ8hw=\n'/*!*/;\n# at 123\n#160809 16:05:49 server id 2 end_log_pos 75 Query thread_id=11 exec_time=0 error_code=0\nuse `test`/*!*/;\nSET TIMESTAMP=1470738949/*!*/;\nSET @@session.pseudo_thread_id=11/*!*/;\nSET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;\nSET @@session.sql_mode=1436549152/*!*/;\nSET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;\n/*!\\C utf8 *//*!*/;\nSET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;\nSET @@session.lc_time_names=0/*!*/;\nSET @@session.collation_database=DEFAULT/*!*/;\ndrop table t\n/*!*/;\nSET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;\nDELIMITER ;\n# End of log file\n/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;\n/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;\n
This information can be used for checking the MySQL error log for the corresponding error message.
Error message160805 9:33:37 8:52:21 [ERROR] Slave SQL: Error 'Unknown table 'test'' on query. Default database: 'test'. Query: 'drop table test', Error_code: 1051\n160805 9:33:37 8:52:21 [Warning] WSREP: RBR event 1 Query apply warning: 1, 3\n
In this example DROP TABLE
statement was executed on a table that doesn\u2019t exist.
gcache.page
See gcache.page_size
See also
Percona Database Performance Blog: All You Need to Know About GCache (Galera-Cache) https://www.percona.com/blog/2016/11/16/all-you-need-to-know-about-gcache-galera-cache/
galera.cache
This file is used as a main writeset store. It\u2019s implemented as a permanent ring-buffer file that is preallocated on disk when the node is initialized. File size can be controlled with the variable gcache.size
. If this value is bigger, more writesets are cached and chances are better that the re-joining node will get IST instead of SST. Filename can be changed with the gcache.name
variable.
grastate.dat
This file contains the Galera state information.
version
- grastate version
uuid
- a unique identifier for the state and the sequence of changes it undergoes.For more information on how UUID is generated see UUID.
seqno
- Ordinal Sequence Number, a 64-bit signed integer used to denote the position of the change in the sequence. seqno
is 0
when no writesets have been generated or applied on that node, i.e., not applied/generated across the lifetime of a grastate
file. -1
is a special value for the seqno
that is kept in the grastate.dat
while the server is running to allow Galera to distinguish between a clean and an unclean shutdown. Upon a clean shutdown, the correct seqno
value is written to the file. So, when the server is brought back up, if the value is still -1
, this means that the server did not shut down cleanly. If the value is greater than 0
, this means that the shutdown was clean. -1
is then written again to the file in order to allow the server to correctly detect if the next shutdown was clean in the same manner.
cert_index
- cert index restore through grastate is not implemented yet
Examples of this file look like this:
In case server node has this state when not running it means that that node crashed during the transaction processing.
# GALERA saved state\nversion: 2.1\nuuid: 1917033b-7081-11e2-0800-707f5d3b106b\nseqno: -1\ncert_index:\n
In case server node has this state when not running it means that the node was gracefully shut down.
# GALERA saved state\nversion: 2.1\nuuid: 1917033b-7081-11e2-0800-707f5d3b106b\nseqno: 5192193423942\ncert_index:\n
In case server node has this state when not running it means that the node crashed during the DDL.
# GALERA saved state\nversion: 2.1\nuuid: 00000000-0000-0000-0000-000000000000\nseqno: -1\ncert_index:\n
gvwstate.dat
This file is used for Primary Component recovery feature. This file is created once primary component is formed or changed, so you can get the latest primary component this node was in. And this file is deleted when the node is shutdown gracefully.
First part contains the node UUID information. Second part contains the view information. View information is written between #vwbeg
and #vwend
. View information consists of:
* view_id: [view_type] [view_uuid] [view_seq]. - `view_type` is always `3` which means primary view. `view_uuid` and `view_seq` identifies a unique view, which could be perceived as identifier of this primary component.\n\n* bootstrap: [bootstarp_or_not]. - it could be `0` or `1`, but it does not affect primary component recovery process now.\n\n* member: [node\u2019s uuid] [node\u2019s segment]. - it represents all nodes in this primary component.\n\n??? example \"Example of the file\"\n\n ```{.text .no-copy}\n my_uuid: c5d5d990-30ee-11e4-aab1-46d0ed84b408\n #vwbeg\n view_id: 3 bc85bd53-31ac-11e4-9895-1f2ce13f2542 2 \n bootstrap: 0\n member: bc85bd53-31ac-11e4-9895-1f2ce13f2542 0\n member: c5d5d990-30ee-11e4-aab1-46d0ed84b408 0\n #vwend\n ```\n
"},{"location":"wsrep-files-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-provider-index.html","title":"Index of wsrep_provider options","text":"The following variables can be set and checked in the wsrep_provider_options
variable. The value of the variable can be changed in the MySQL configuration file, my.cnf
, or by setting the variable value in the MySQL client.
To change the value in my.cnf
, the following syntax should be used:
$ wsrep_provider_options=\"variable1=value1;[variable2=value2]\"\n
For example to set the size of the Galera buffer storage to 512 MB, specify the following in my.cnf
:
$ wsrep_provider_options=\"gcache.size=512M\"\n
Dynamic variables can be changed from the MySQL client using the SET GLOBAL
command. For example, to change the value of the pc.ignore_sb
, use the following command:
mysql> SET GLOBAL wsrep_provider_options=\"pc.ignore_sb=true\";\n
"},{"location":"wsrep-provider-index.html#index","title":"Index","text":""},{"location":"wsrep-provider-index.html#base_dir","title":"base_dir
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of datadir
This variable specifies the data directory.
"},{"location":"wsrep-provider-index.html#base_host","title":"base_host
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address
This variable sets the value of the node\u2019s base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
"},{"location":"wsrep-provider-index.html#base_port","title":"base_port
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 4567 This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.
"},{"location":"wsrep-provider-index.html#certlog_conflicts","title":"cert.log_conflicts
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: no This variable is used to specify if the details of the certification failures should be logged.
"},{"location":"wsrep-provider-index.html#certoptimistic_pa","title":"cert.optimistic_pa
","text":"Enabled
Allows the full range of parallelization as determined by the certification\nalgorithm.\n
Disabled
Limits the parallel applying window so that it does not exceed the parallel\napplying window seen on the source. In this case, the action starts applying\nno sooner than all actions on the source are committed.\n
Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: No See also
Galera Cluster Documentation: * Parameter: cert.optimistic_pa * Setting parallel slave threads
"},{"location":"wsrep-provider-index.html#debug","title":"debug
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: no When this variable is set to yes
, it will enable debugging.
evs.auto_evict
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 Number of entries allowed on delayed list until auto eviction takes place. Setting value to 0
disables auto eviction protocol on the node, though node response times will still be monitored. EVS protocol version (evs.version
) 1
is required to enable auto eviction.
evs.causal_keepalive_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of evs.keepalive_period
This variable is used for development purposes and shouldn\u2019t be used by regular users.
"},{"location":"wsrep-provider-index.html#evsdebug_log_mask","title":"evs.debug_log_mask
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0x1 This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when wsrep_debug
is set to ON
.
evs.delay_margin
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT1S Time period that a node can delay its response from expected until it is added to delayed list. The value must be higher than the highest RTT between nodes.
"},{"location":"wsrep-provider-index.html#evsdelayed_keep_period","title":"evs.delayed_keep_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S Time period that node is required to remain responsive until one entry is removed from delayed list.
"},{"location":"wsrep-provider-index.html#evsevict","title":"evs.evict
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Manual eviction can be triggered by setting the evs.evict
to a certain node value. Setting the evs.evict
to an empty string will clear the evict list on the node where it was set.
evs.inactive_check_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0.5S This variable defines how often to check for peer inactivity.
"},{"location":"wsrep-provider-index.html#evsinactive_timeout","title":"evs.inactive_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT15S This variable defines the inactivity limit, once this limit is reached the node will be considered dead.
"},{"location":"wsrep-provider-index.html#evsinfo_log_mask","title":"evs.info_log_mask
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable is used for controlling the extra EVS info logging.
"},{"location":"wsrep-provider-index.html#evsinstall_timeout","title":"evs.install_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT7.5S This variable defines the timeout on waiting for install message acknowledgments.
"},{"location":"wsrep-provider-index.html#evsjoin_retrans_period","title":"evs.join_retrans_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S This variable defines how often to retransmit EVS join messages when forming cluster membership.
"},{"location":"wsrep-provider-index.html#evskeepalive_period","title":"evs.keepalive_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S This variable defines how often to emit keepalive beacons (in the absence of any other traffic).
"},{"location":"wsrep-provider-index.html#evsmax_install_timeouts","title":"evs.max_install_timeouts
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1 This variable defines how many membership install rounds to try before giving up (total rounds will be evs.max_install_timeouts
+ 2).
evs.send_window
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 10 This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example,512). The value must not be less than evs.user_send_window
.
evs.stats_report_period
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1M This variable defines the control period of EVS statistics reporting.
"},{"location":"wsrep-provider-index.html#evssuspect_timeout","title":"evs.suspect_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S This variable defines the inactivity period after which the node is \u201csuspected\u201d to be dead. If all remaining nodes agree on that, the node will be dropped out of cluster even before evs.inactive_timeout
is reached.
evs.use_aggregate
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When this variable is enabled, smaller packets will be aggregated into one.
"},{"location":"wsrep-provider-index.html#evsuser_send_window","title":"evs.user_send_window
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 4 This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example, 512).
"},{"location":"wsrep-provider-index.html#evsversion","title":"evs.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable defines the EVS protocol version. Auto eviction is enabled when this variable is set to 1
. Default 0
is set for backwards compatibility.
evs.view_forget_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: P1D This variable defines the timeout after which past views will be dropped from history.
"},{"location":"wsrep-provider-index.html#gcachedir","title":"gcache.dir
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: datadir
This variable can be used to define the location of the galera.cache
file.
gcache.freeze_purge_at_seqno
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0 This variable controls the purging of the gcache and enables retaining more data in it. This variable makes it possible to use IST (Incremental State Transfer) when the node rejoins instead of SST (State Snapshot Transfer).
Set this variable on an existing node of the cluster (that will continue to be part of the cluster and can act as a potential donor node). This node continues to retain the write-sets and allows restarting the node to rejoin by using IST.
See also
Percona Database Performance Blog:
All You Need to Know About GCache (Galera-Cache)
Want IST Not SST for Node Rejoins? We Have a Solution!
The gcache.freeze_purge_at_seqno
variable takes three values:
-1 (default)
No freezing of gcache, the purge operates as normal.
A valid seqno in gcache
The purge of write-sets is frozen at the selected seqno, so write-sets from this seqno onward are retained. The best way to select an optimal value is to use the value of the wsrep_last_applied variable
from the node that you plan to shut down.
now The purge is frozen at the smallest seqno currently in gcache. Using this value freezes the gcache purge instantly. Use this value if selecting a valid seqno in gcache is difficult.
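Because the variable is dynamic, it can be changed at runtime through wsrep_provider_options. A hedged example that freezes the purge on a potential donor node before restarting another node:
mysql> SET GLOBAL wsrep_provider_options='gcache.freeze_purge_at_seqno=now';\n
Setting the value back to -1 resumes normal purging.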
"},{"location":"wsrep-provider-index.html#gcachekeep_pages_count","title":"gcache.keep_pages_count
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0 This variable is used to limit the number of overflow pages rather than the total memory occupied by all overflow pages. Whenever gcache.keep_pages_count
is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest).
Whenever either the gcache.keep_pages_count
or the gcache.keep_pages_size
variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.
gcache.keep_pages_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: No Default Value: 0 This variable is used to limit the total size of overflow pages rather than the count of all overflow pages. Whenever gcache.keep_pages_size
is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest) until the total size is below the specified value.
Whenever either the gcache.keep_pages_count
or the gcache.keep_pages_size
variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.
gcache.mem_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable has been deprecated in 5.6.22-25.8
and shouldn\u2019t be used as it could cause a node to crash.
This variable was used to define how much RAM is available for the system.
"},{"location":"wsrep-provider-index.html#gcachename","title":"gcache.name
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql/galera.cache This variable can be used to specify the name of the Galera cache file.
"},{"location":"wsrep-provider-index.html#gcachepage_size","title":"gcache.page_size
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: 128M Size of the page files in page storage. The limit on overall page storage is the size of the disk. Pages are prefixed by gcache.page.
See also
Galera Documentation: gcache.page_size
Percona Database Performance Blog: All You Need to Know About GCache
gcache.recover
","text":"Option Description Command line: No Configuration file: Yes Scope: Global Dynamic: No Default value: No Attempts to recover a node\u2019s gcache file to a usable state on startup. If the node can successfully recover the gcache file, the node can provide IST to the remaining nodes. This ability can reduce the time needed to bring up the cluster.
An example of enabling the variable in the configuration file:
wsrep_provider_options=\"gcache.recover=yes\"\n
"},{"location":"wsrep-provider-index.html#gcachesize","title":"gcache.size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 128M Size of the transaction cache for Galera replication. This defines the size of the galera.cache
file, which is used as the source for IST. The bigger the value of this variable, the better the chances that the re-joining node will get IST instead of SST.
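Because the option is not dynamic, it is typically set in the configuration file through wsrep_provider_options. A minimal sketch, where 2G is only an illustrative size:
wsrep_provider_options=\"gcache.size=2G\"\n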
gcomm.thread_prio
","text":"Using this option, you can raise the priority of the gcomm thread to a higher level than it normally uses.
The format for this variable is: <policy>:<priority>. The priority value is an integer.
other
Default time-sharing scheduling in Linux. The threads can run\nuntil blocked by an I/O request or preempted by higher priorities or\nsuperior scheduling designations.\n
fifo
First-in First-out (FIFO) scheduling. These threads always immediately\npreempt any currently running other, batch or idle threads. They can run\nuntil they are either blocked by an I/O request or preempted by a FIFO thread\nof a higher priority.\n
rr
Round-robin scheduling. These threads always preempt any currently running\nother, batch or idle threads. The scheduler allows these threads to run for a\nfixed period of a time. If the thread is still running when this time period is\nexceeded, they are stopped and moved to the end of the list, allowing another\nround-robin thread of the same priority to run in their place. They can\notherwise continue to run until they are blocked by an I/O request or are\npreempted by threads of a higher priority.\n
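For example, assuming you want round-robin scheduling with priority 2 (both values are illustrative), the option could be set as follows:
wsrep_provider_options=\"gcomm.thread_prio=rr:2\"\n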
See also
For information, see the Galera Cluster documentation
"},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_threshold","title":"gcs.fc_auto_evict_threshold
","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0.75 Implemented in Percona XtraDB Cluster 8.0.33-25.
Defines the threshold that must be reached or crossed before a node is evicted from the cluster. This variable is a ratio of the gcs.fc_auto_evict_window
variable. The default value is 0.75
, but the value can be set to any value between 0.0 and 1.0.
gcs.fc_auto_evict_window
","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0 Implemented in Percona XtraDB Cluster 8.0.33-25.
The variable defines the width of the time window within which flow control events are observed. The time span of the window is [now - gcs.fc_auto_evict_window, now], and the window constantly moves ahead as time passes. If, within this window, the flow control summary time >= (gcs.fc_auto_evict_window * gcs.fc_auto_evict_threshold), the node leaves the cluster on its own.
The default value is 0, which means that the feature is disabled.
The maximum value is DBL_MAX
.
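A configuration sketch that enables the feature with a 30-second observation window and the default threshold (the window value is illustrative):
wsrep_provider_options=\"gcs.fc_auto_evict_window=30;gcs.fc_auto_evict_threshold=0.75\"\n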
gcs.fc_debug
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable specifies after how many writesets the debug statistics about SST flow control will be posted.
"},{"location":"wsrep-provider-index.html#gcsfc_factor","title":"gcs.fc_factor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1 This variable is used for replication flow control. Replication is resumed when the replica queue drops below gcs.fc_factor
* gcs.fc_limit
.
gcs.fc_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 100 This variable is used for replication flow control. Replication is paused when the replica queue exceeds this limit. In the default operation mode, flow control limit is dynamically recalculated based on the amount of nodes in the cluster, but this recalculation can be turned off with use of the gcs.fc_master_slave
variable to make manual setting of the gcs.fc_limit
having an effect (e.g., for configurations when writing is done to a single node in Percona XtraDB Cluster).
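For example, a cluster that writes to a single node might fix the limit and disable dynamic recalculation; the values below are illustrative only:
wsrep_provider_options=\"gcs.fc_limit=160;gcs.fc_master_slave=YES\"\n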
gcs.fc_master_slave
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: NO Default Value: NO This variable is used to specify if there is only one source node in the cluster. It affects whether flow control limit is recalculated dynamically (when NO
) or not (when YES
).
gcs.max_packet_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 64500 This variable is used to specify the writeset size after which they will be fragmented.
"},{"location":"wsrep-provider-index.html#gcsmax_throttle","title":"gcs.max_throttle
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25 This variable specifies how much the replication can be throttled during the state transfer in order to avoid running out of memory. Value can be set to 0.0
if stopping replication is acceptable in order to finish state transfer.
gcs.recv_q_hard_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 9223372036854775807 This variable specifies the maximum allowed size of the receive queue. This should normally be (RAM + swap) / 2
. If this limit is exceeded, Galera will abort the server.
gcs.recv_q_soft_limit
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25 This variable specifies the fraction of the gcs.recv_q_hard_limit
after which replication rate will be throttled.
gcs.sync_donor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No This variable controls if the rest of the cluster should be in sync with the donor node. When this variable is set to YES
, the whole cluster will be blocked if the donor node is blocked with SST.
gmcast.listen_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: tcp://0.0.0.0:4567 This variable defines the address on which the node listens to connections from other nodes in the cluster.
"},{"location":"wsrep-provider-index.html#gmcastmcast_addr","title":"gmcast.mcast_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: None This variable should be set up if UDP multicast should be used for replication.
"},{"location":"wsrep-provider-index.html#gmcastmcast_ttl","title":"gmcast.mcast_ttl
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1 This variable can be used to define TTL for multicast packets.
"},{"location":"wsrep-provider-index.html#gmcastpeer_timeout","title":"gmcast.peer_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S This variable specifies the connection timeout to initiate message relaying.
"},{"location":"wsrep-provider-index.html#gmcastsegment","title":"gmcast.segment
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable specifies the group segment this member should be a part of. Same segment members are treated as equally physically close.
"},{"location":"wsrep-provider-index.html#gmcasttime_wait","title":"gmcast.time_wait
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S This variable specifies the time to wait until allowing peer declared outside of stable view to reconnect.
"},{"location":"wsrep-provider-index.html#gmcastversion","title":"gmcast.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This variable shows which gmcast protocol version is being used.
"},{"location":"wsrep-provider-index.html#istrecv_addr","title":"ist.recv_addr
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address
This variable specifies the address on which the node listens for Incremental State Transfer (IST).
"},{"location":"wsrep-provider-index.html#pcannounce_timeout","title":"pc.announce_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S Cluster joining announcements are sent every \u00bd second for this period of time or less if other nodes are discovered.
"},{"location":"wsrep-provider-index.html#pcchecksum","title":"pc.checksum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true This variable controls whether replicated messages should be checksummed or not.
"},{"location":"wsrep-provider-index.html#pcignore_quorum","title":"pc.ignore_quorum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false When this variable is set to TRUE
, the node will completely ignore quorum calculations. This should be used with extreme caution even in source-replica setups, because replicas won\u2019t automatically reconnect to the source in this case.
pc.ignore_sb
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: false When this variable is set to TRUE
, the node will process updates even in the case of a split brain. This should be used with extreme caution in a multi-source setup, but it should simplify things in a source-replica cluster (especially if only two nodes are used).
pc.linger
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT20S This variable specifies the period for which the PC protocol waits for EVS termination.
"},{"location":"wsrep-provider-index.html#pcnpvo","title":"pc.npvo
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false When this variable is set to TRUE
, more recent primary components override older ones in case of conflicting primaries.
pc.recovery
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When this variable is set to true
, the node stores the Primary Component state to disk. The Primary Component can then recover automatically when all nodes that were part of the last saved state re-establish communication with each other. This feature allows automatic recovery from full cluster crashes, such as in the case of a data center power outage. A subsequent graceful full cluster restart will require explicit bootstrapping for a new Primary Component.
pc.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This status variable is used to check which PC protocol version is used.
"},{"location":"wsrep-provider-index.html#pcwait_prim","title":"pc.wait_prim
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true When set to TRUE
, the node waits for a primary component for the period of time specified in pc.wait_prim_timeout
. This is useful to bring up a non-primary component and make it primary with pc.bootstrap
.
pc.wait_prim_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT30S This variable is used to specify the period of time to wait for a primary component.
"},{"location":"wsrep-provider-index.html#pcwait_restored_prim_timeout","title":"pc.wait_restored_prim_timeout
","text":"Introduced in Percona XtraDB Cluster 8.0.33-25.
Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0S This variable specifies the wait period for a primary component when the cluster restores the primary component from the gvwstate.dat
file after an outage.
The default value is PT0S
(zero seconds). With the default value, the node waits for an infinite time, which preserves the previous behavior.
You can define a wait time with PTNS
, replacing the N
value with the number of seconds. For example, to wait for 90 seconds, set the value to PT90S
.
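Because the option is not dynamic, it is set in the configuration file through wsrep_provider_options, for example (90 seconds is illustrative):
wsrep_provider_options=\"pc.wait_restored_prim_timeout=PT90S\"\n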
pc.weight
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1 This variable specifies the node weight that\u2019s going to be used for Weighted Quorum calculations.
"},{"location":"wsrep-provider-index.html#protonetbackend","title":"protonet.backend
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: asio This variable is used to define which transport backend should be used. Currently only ASIO
is supported.
protonet.version
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0 This status variable is used to check which transport backend protocol version is used.
"},{"location":"wsrep-provider-index.html#replcausal_read_timeout","title":"repl.causal_read_timeout
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S This variable specifies the causal read timeout.
"},{"location":"wsrep-provider-index.html#replcommit_order","title":"repl.commit_order
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 3 This variable is used to specify out-of-order committing (which is used to improve parallel applying performance). The following values are available:
0
- BYPASS: all commit order monitoring is turned off (useful for measuring performance penalty)
1
- OOOC: allow out-of-order committing for all transactions
2
- LOCAL_OOOC: allow out-of-order committing only for local transactions
3
- NO_OOOC: no out-of-order committing is allowed (strict total order committing)
repl.key_format
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: FLAT8 This variable is used to specify the replication key format. The following values are available:
FLAT8
- short key with higher probability of key match false positives
FLAT16
- longer key with lower probability of false positives
FLAT8A
- same as FLAT8
but with annotations for debug purposes
FLAT16A
- same as FLAT16
but with annotations for debug purposes
repl.max_ws_size
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2147483647 This variable is used to specify the maximum size of a write-set in bytes. This is limited to 2 gygabytes.
"},{"location":"wsrep-provider-index.html#replproto_max","title":"repl.proto_max
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 7 This variable is used to specify the highest communication protocol version to accept in the cluster. Used only for debugging.
"},{"location":"wsrep-provider-index.html#socketchecksum","title":"socket.checksum
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2 This variable is used to choose the checksum algorithm for network packets. The CRC32-C
option is optimized and may be hardware accelerated on Intel CPUs. The following values are available:
0
- disable checksum
1
- plain CRC32
(used in Galera 2.x)
2
- hardware accelerated CRC32-C
The following is an example of the variable use:
wsrep_provider_options=\"socket.checksum=2\"\n
"},{"location":"wsrep-provider-index.html#socketssl","title":"socket.ssl
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No This variable is used to specify if SSL encryption should be used.
"},{"location":"wsrep-provider-index.html#socketssl_ca","title":"socket.ssl_ca
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the Certificate Authority (CA) certificate file.
"},{"location":"wsrep-provider-index.html#socketssl_cert","title":"socket.ssl_cert
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the server\u2019s certificate file (in PEM format).
"},{"location":"wsrep-provider-index.html#socketssl_key","title":"socket.ssl_key
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No This variable is used to specify the path to the server\u2019s private key file (in PEM format).
"},{"location":"wsrep-provider-index.html#socketssl_compression","title":"socket.ssl_compression
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: Yes This variable is used to specify if the SSL compression is to be used.
"},{"location":"wsrep-provider-index.html#socketssl_cipher","title":"socket.ssl_cipher
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: AES128-SHA This variable is used to specify what cypher will be used for encryption.
"},{"location":"wsrep-provider-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-status-index.html","title":"Index of wsrep status variables","text":""},{"location":"wsrep-status-index.html#wsrep_apply_oooe","title":"wsrep_apply_oooe
","text":"This variable shows parallelization efficiency, how often writests have been applied out of order.
See also
Galera status variable: wsrep_apply_oooe
wsrep_apply_oool
","text":"This variable shows how often a writeset with a higher sequence number was applied before one with a lower sequence number.
See also
Galera status variable: wsrep_apply_oool
wsrep_apply_window
","text":"Average distance between highest and lowest concurrently applied sequence numbers.
See also
Galera status variable: wsrep_apply_window
wsrep_causal_reads
","text":"Shows the number of writesets processed while the variable wsrep_causal_reads
was set to ON
.
See also
MySQL wsrep options: wsrep_causal_reads
wsrep_cert_bucket_count
","text":"This variable, shows the number of cells in the certification index hash-table.
"},{"location":"wsrep-status-index.html#wsrep_cert_deps_distance","title":"wsrep_cert_deps_distance
","text":"Average distance between highest and lowest sequence number that can be possibly applied in parallel.
See also
Galera status variable: wsrep_cert_deps_distance
wsrep_cert_index_size
","text":"Number of entries in the certification index.
See also
Galera status variable: wsrep_cert_index_size
wsrep_cert_interval
","text":"Average number of write-sets received while a transaction replicates.
See also
Galera status variable: wsrep_cert_interval
wsrep_cluster_conf_id
","text":"Number of cluster membership changes that have taken place.
See also
Galera status variable: wsrep_cluster_conf_id
wsrep_cluster_size
","text":"Current number of nodes in the cluster.
See also
Galera status variable: wsrep_cluster_size
wsrep_cluster_state_uuid
","text":"This variable contains UUID state of the cluster. When this value is the same as the one in wsrep_local_state_uuid
, the node is synced with the cluster.
See also
Galera status variable: wsrep_cluster_state_uuid
wsrep_cluster_status
","text":"Status of the cluster component. Possible values are:
Primary
Non-Primary
Disconnected
See also
Galera status variable: wsrep_cluster_status
wsrep_commit_oooe
","text":"This variable shows how often a transaction was committed out of order.
See also
Galera status variable: wsrep_commit_oooe
wsrep_commit_oool
","text":"This variable currently has no meaning.
See also
Galera status variable: wsrep_commit_oool
wsrep_commit_window
","text":"Average distance between highest and lowest concurrently committed sequence number.
See also
Galera status variable: wsrep_commit_window
wsrep_connected
","text":"This variable shows if the node is connected to the cluster. If the value is OFF
, the node has not yet connected to any of the cluster components. This may be due to misconfiguration.
See also
Galera status variable: wsrep_connected
wsrep_evs_delayed
","text":"Comma separated list of nodes that are considered delayed. The node format is <uuid>:<address>:<count>
, where <count>
is the number of entries on delayed list for that node.
See also
Galera status variable: wsrep_evs_delayed
wsrep_evs_evict_list
","text":"List of UUIDs of the evicted nodes.
See also
Galera status variable: wsrep_evs_evict_list
wsrep_evs_repl_latency
","text":"This status variable provides information regarding group communication replication latency. This latency is measured in seconds from when a message is sent out to when a message is received.
The format of the output is <min>/<avg>/<max>/<std_dev>/<sample_size>
.
See also
Galera status variable: wsrep_evs_repl_latency
wsrep_evs_state
","text":"Internal EVS protocol state.
See also
Galera status variable: wsrep_evs_state
wsrep_flow_control_interval
","text":"This variable shows the lower and upper limits for Galera flow control. The upper limit is the maximum allowed number of requests in the queue. If the queue reaches the upper limit, new requests are denied. As existing requests get processed, the queue decreases, and once it reaches the lower limit, new requests will be allowed again.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_high","title":"wsrep_flow_control_interval_high
","text":"Shows the upper limit for flow control to trigger.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_low","title":"wsrep_flow_control_interval_low
","text":"Shows the lower limit for flow control to stop.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_paused","title":"wsrep_flow_control_paused
","text":"Time since the last status query that was paused due to flow control.
See also
Galera status variable: wsrep_flow_control_paused
wsrep_flow_control_paused_ns
","text":"Total time spent in a paused state measured in nanoseconds.
See also
Galera status variable: wsrep_flow_control_paused_ns
wsrep_flow_control_recv
","text":"The number of FC_PAUSE
events received since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.
See also
Galera status variable: wsrep_flow_control_recv
wsrep_flow_control_requested
","text":"This variable returns whether or not a node requested a replication pause.
"},{"location":"wsrep-status-index.html#wsrep_flow_control_sent","title":"wsrep_flow_control_sent
","text":"The number of FC_PAUSE
events sent since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.
See also
Galera status variable: wsrep_flow_control_sent
wsrep_flow_control_status
","text":"This variable shows whether a node has flow control enabled for normal traffic. It does not indicate the status of flow control during SST.
"},{"location":"wsrep-status-index.html#wsrep_gcache_pool_size","title":"wsrep_gcache_pool_size
","text":"This variable shows the size of the page pool and dynamic memory allocated for GCache (in bytes).
"},{"location":"wsrep-status-index.html#wsrep_gcomm_uuid","title":"wsrep_gcomm_uuid
","text":"This status variable exposes UUIDs in gvwstate.dat
, which are Galera view IDs (thus unrelated to cluster state UUIDs). This UUID is unique for each node. You will need to know this value when using manual eviction feature.
See also
Galera status variable: wsrep_gcomm_uuid
wsrep_incoming_addresses
","text":"Shows the comma-separated list of incoming node addresses in the cluster.
See also
Galera status variable: wsrep_incoming_addresses
wsrep_ist_receive_status
","text":"This variable displays the progress of IST for joiner node. If IST is not running, the value is blank. If IST is running, the value is the percentage of transfer completed.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_end","title":"wsrep_ist_receive_seqno_end
","text":"The sequence number of the last transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_current","title":"wsrep_ist_receive_seqno_current
","text":"The sequence number of the current transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_start","title":"wsrep_ist_receive_seqno_start
","text":"The sequence number of the first transaction in IST.
"},{"location":"wsrep-status-index.html#wsrep_last_applied","title":"wsrep_last_applied
","text":"Sequence number of the last applied transaction.
"},{"location":"wsrep-status-index.html#wsrep_last_committed","title":"wsrep_last_committed
","text":"Sequence number of the last committed transaction.
"},{"location":"wsrep-status-index.html#wsrep_local_bf_aborts","title":"wsrep_local_bf_aborts
","text":"Number of local transactions that were aborted by replica transactions while being executed.
See also
Galera status variable: wsrep_local_bf_aborts
wsrep_local_cached_downto
","text":"The lowest sequence number in GCache. This information can be helpful with determining IST and SST. If the value is 0
, then it means there are no writesets in GCache (usual for a single node).
See also
Galera status variable: wsrep_local_cached_downto
wsrep_local_cert_failures
","text":"Number of writesets that failed the certification test.
See also
Galera status variable: wsrep_local_cert_failures
wsrep_local_commits
","text":"Number of writesets commited on the node.
See also
Galera status variable: wsrep_local_commits
wsrep_local_index
","text":"Node\u2019s index in the cluster.
See also
Galera status variable: wsrep_local_index
wsrep_local_recv_queue
","text":"Current length of the receive queue (that is, the number of writesets waiting to be applied).
See also
Galera status variable: wsrep_local_recv_queue
wsrep_local_recv_queue_avg
","text":"Average length of the receive queue since the last status query. When this number is bigger than 0
this means the node can\u2019t apply writesets as fast as they are received. This could be a sign that the node is overloaded, and it may cause replication throttling.
See also
Galera status variable: wsrep_local_recv_queue_avg
wsrep_local_replays
","text":"Number of transaction replays due to asymmetric lock granularity.
See also
Galera status variable: wsrep_local_replays
wsrep_local_send_queue
","text":"Current length of the send queue (that is, the number of writesets waiting to be sent).
See also
Galera status variable: wsrep_local_send_queue
wsrep_local_send_queue_avg
","text":"Average length of the send queue since the last status query. When cluster experiences network throughput issues or replication throttling, this value will be significantly bigger than 0
.
See also
Galera status variable: wsrep_local_send_queue_avg
wsrep_local_state
","text":"Internal Galera cluster FSM state number.
See also
Galera status variable: wsrep_local_state
wsrep_local_state_comment
","text":"Internal number and the corresponding human-readable comment of the node\u2019s state. Possible values are:
Num Comment Description 1 Joining Node is joining the cluster 2 Donor/Desynced Node is the donor to the node joining the cluster 3 Joined Node has joined the cluster 4 Synced Node is synced with the clusterSee also
Galera status variable: wsrep_local_state_comment
wsrep_local_state_uuid
","text":"The UUID of the state stored on the node.
See also
Galera status variable: wsrep_local_state_uuid
wsrep_monitor_status
","text":"The status of the local monitor (local and replicating actions), apply monitor (apply actions of write-set), and commit monitor (commit actions of write sets). In the value of this variable, each monitor (L: Local, A: Apply, C: Commit) is represented as a last_entered, and last_left pair:
wsrep_monitor_status (L/A/C) [ ( 7, 5), (2, 2), ( 2, 2) ]\n
last_entered
Shows which transaction or write-set has recently entered the queue.
last_left
Shows which last transaction or write-set has been executed and left the queue.
According to the Galera protocol, transactions can be applied in parallel but must be committed in a given order. This rule implies that there can be multiple transactions in the apply state at a given point of time but transactions are committed sequentially.
See also
Galera Documentation: Database replication
wsrep_protocol_version
","text":"Version of the wsrep protocol used.
See also
Galera status variable: wsrep_protocol_version
wsrep_provider_name
","text":"Name of the wsrep provider (usually Galera
).
See also
Galera status variable: wsrep_provider_name
wsrep_provider_vendor
","text":"Name of the wsrep provider vendor (usually Codership Oy
)
See also
Galera status variable: wsrep_provider_vendor
wsrep_provider_version
","text":"Current version of the wsrep provider.
See also
Galera status variable: wsrep_provider_version
wsrep_ready
","text":"This variable shows if node is ready to accept queries. If status is OFF
, almost all queries will fail with ERROR 1047 (08S01) Unknown Command
error (unless the wsrep_on
variable is set to 0
).
See also
Galera status variable: wsrep_ready
wsrep_received
","text":"Total number of writesets received from other nodes.
See also
Galera status variable: wsrep_received
wsrep_received_bytes
","text":"Total size (in bytes) of writesets received from other nodes.
"},{"location":"wsrep-status-index.html#wsrep_repl_data_bytes","title":"wsrep_repl_data_bytes
","text":"Total size (in bytes) of data replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_keys","title":"wsrep_repl_keys
","text":"Total number of keys replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_keys_bytes","title":"wsrep_repl_keys_bytes
","text":"Total size (in bytes) of keys replicated.
"},{"location":"wsrep-status-index.html#wsrep_repl_other_bytes","title":"wsrep_repl_other_bytes
","text":"Total size of other bits replicated.
"},{"location":"wsrep-status-index.html#wsrep_replicated","title":"wsrep_replicated
","text":"Total number of writesets sent to other nodes.
See also
Galera status variable: wsrep_replicated
wsrep_replicated_bytes
","text":"Total size of replicated writesets. To compute the actual size of bytes sent over network to cluster peers, multiply the value of this variable by the number of cluster peers in the given network segment
.
See also
Galera status variable: wsrep_replicated_bytes
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"wsrep-system-index.html","title":"Index of wsrep system variables","text":"Percona XtraDB Cluster introduces a number of MySQL system variables related to write-set replication.
"},{"location":"wsrep-system-index.html#pxc_encrypt_cluster_traffic","title":"pxc_encrypt_cluster_traffic
","text":"Option Description Command Line: --pxc-encrypt-cluster-traffic
Config File: Yes Scope: Global Dynamic: No Default Value: ON
Enables automatic configuration of SSL encryption. When disabled, you need to configure SSL manually to encrypt Percona XtraDB Cluster traffic.
Possible values:
ON
, 1
, true
: Enabled (default)
OFF
, 0
, false
: Disabled
For more information, see SSL Automatic Configuration.
"},{"location":"wsrep-system-index.html#pxc_maint_mode","title":"pxc_maint_mode
","text":"Option Description Command Line: --pxc-maint-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: DISABLED
Specifies the maintenance mode for taking a node down without adjusting settings in ProxySQL.
The following values are available:
DISABLED
: This is the default state that tells ProxySQL to route traffic to the node as usual.
SHUTDOWN
: This state is set automatically when you initiate node shutdown.
MAINTENANCE
: You can manually change to this state if you need to perform maintenance on a node without shutting it down.
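For example, before performing maintenance on a node, you might switch the mode manually at runtime:
mysql> SET GLOBAL pxc_maint_mode='MAINTENANCE';\n
ProxySQL should then drain traffic from the node during the period defined by pxc_maint_transition_period.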
For more information, see Assisted Maintenance Mode.
"},{"location":"wsrep-system-index.html#pxc_maint_transition_period","title":"pxc_maint_transition_period
","text":"Option Description Command Line: --pxc-maint-transition-period
Config File: Yes Scope: Global Dynamic: Yes Default Value: 10
(ten seconds) Defines the transition period when you change pxc_maint_mode
to SHUTDOWN
or MAINTENANCE
. By default, the period is set to 10 seconds, which should be enough for most transactions to finish. You can increase the value to accommodate for longer-running transactions.
For more information, see Assisted Maintenance Mode.
"},{"location":"wsrep-system-index.html#pxc_strict_mode","title":"pxc_strict_mode
","text":"Option Description Command Line: --pxc-strict-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: ENFORCING
or DISABLED
Controls PXC Strict Mode, which runs validations to avoid the use of experimental and unsupported features in Percona XtraDB Cluster.
Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:
DISABLED
: Do not perform strict mode validations and run as normal.
PERMISSIVE
: If a validation fails, log a warning and continue running as normal.
ENFORCING
: If a validation fails during startup, halt the server and throw an error. If a validation fails during runtime, deny the operation and throw an error.
MASTER
: The same as ENFORCING
except that the validation of explicit table locking is not performed. This mode can be used with clusters in which write operations are isolated to a single node.
By default, pxc_strict_mode
is set to ENFORCING
, except if the node is acting as a standalone server or the node is bootstrapping, then pxc_strict_mode
defaults to DISABLED
.
Note
When changing the value of pxc_strict_mode
from DISABLED
or PERMISSIVE
to ENFORCING
or MASTER
, ensure that the following configuration is used:
wsrep_replicate_myisam=OFF
binlog_format=ROW
log_output=FILE
or log_output=NONE
or log_output=FILE,NONE
The SERIALIZABLE
method of isolation is not allowed in ENFORCING
mode.
For more information, see PXC Strict Mode.
"},{"location":"wsrep-system-index.html#wsrep_applier_fk_checks","title":"wsrep_applier_FK_checks
","text":"Option Description Command Line: --wsrep-applier-FK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_FK_checks
variable is deprecated in favor of this variable.
Defines whether foreign key checking is done for applier threads. This is enabled by default.
See also
MySQL wsrep option: wsrep_applier_FK_checks
wsrep_applier_threads
","text":"Option Description Command Line: --wsrep-applier-threads
Config File: Yes Scope: Global Dynamic: Yes Default Value: 1
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_threads
variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads
variable.
Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.
Note
When you decrease the number of threads, the node does not kill the threads immediately but stops them after they finish applying the current transaction (an increase takes effect immediately).
If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1
to see if that resolves the issue. The default value can be increased for better throughput.
You may want to increase it as suggested in Codership documentation for flow control
: when the node is in JOINED
state, increasing the number of replica threads can speed up the catchup to SYNCED
.
You can also estimate the optimal value for this from wsrep_cert_deps_distance
as suggested in the Galera Cluster documentation.
For more configuration tips, see Setting Parallel Slave Threads.
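Because the variable is dynamic, a quick way to experiment is at runtime; 4 is an illustrative value that should be tuned for your workload:
mysql> SET GLOBAL wsrep_applier_threads=4;\n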
See also
MySQL wsrep option: wsrep_applier_threads
wsrep_applier_UK_checks
","text":"Option Description Command Line: --wsrep-applier-UK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_UK_checks
variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks
variable.
Defines whether unique key checking is done for applier threads. This is disabled by default.
See also
MySQL wsrep option: wsrep_applier_UK_checks
wsrep_auto_increment_control
","text":"Option Description Command Line: --wsrep-auto-increment-control
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
Enables automatic adjustment of auto-increment system variables depending on the size of the cluster:
auto_increment_increment
controls the interval between successive AUTO_INCREMENT
column values
auto_increment_offset
determines the starting point for the AUTO_INCREMENT
column value
This helps prevent auto-increment replication conflicts across the cluster by giving each node its own range of auto-increment values. It is enabled by default.
Automatic adjustment may not be desirable, depending on how the application uses auto-increment values and what it assumes about them. It can be disabled in source-replica clusters.
See also
MySQL wsrep option: wsrep_auto_increment_control
wsrep_causal_reads
","text":"Option Description Command Line: --wsrep-causal-reads
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: OFF
In some cases, the source may apply events faster than a replica, which can cause source and replica to become out of sync for a brief moment. When this variable is set to ON
, the replica will wait until that event is applied before doing any other queries. Enabling this variable will result in larger latencies.
Note
This variable was deprecated because enabling it is the equivalent of setting wsrep_sync_wait
to 1
.
See also
MySQL wsrep option: wsrep_causal_reads
wsrep_certification_rules
","text":"Option Description Command Line: --wsrep-certification-rules
Config File: Yes Scope: Global Dynamic: Yes Values: STRICT, OPTIMIZED Default Value: STRICT This variable controls how certification is done in the cluster, in particular this affects how foreign keys are handled.
STRICT Two INSERTs that happen at about the same time on two different nodes in a child table, inserting different (non-conflicting) rows that both point to the same row in the parent table, may result in a certification failure.
OPTIMIZED Two INSERTs that happen at about the same time on two different nodes in a child table, inserting different (non-conflicting) rows that both point to the same row in the parent table, do not result in a certification failure.
See also
Galera Cluster Documentation: MySQL wsrep options
"},{"location":"wsrep-system-index.html#wsrep_certify_nonpk","title":"wsrep_certify_nonPK
","text":"Option Description Command Line: --wsrep-certify-nonpk
Config File: Yes Scope: Global Dynamic: No Default Value: ON
Enables automatic generation of primary keys for rows that don\u2019t have them. Write set replication requires primary keys on all tables to allow for parallel applying of transactions. This variable is enabled by default. As a rule, make sure that all tables have primary keys.
See also
MySQL wsrep option: wsrep_certify_nonPK
"},{"location":"wsrep-system-index.html#wsrep_cluster_address","title":"wsrep_cluster_address
","text":"Option Description Command Line: --wsrep-cluster-address
Config File: Yes Scope: Global Dynamic: Yes Defines the back-end schema, IP addresses, ports, and options that the node uses when connecting to the cluster. This variable needs to specify at least one other node\u2019s address, which is alive and a member of the cluster. In practice, it is best (but not necessary) to provide a complete list of all possible cluster nodes. The value should be of the following format:
<schema>://<address>[?<option1>=<value1>[&<option2>=<value2>]],...\n
The only back-end schema currently supported is gcomm
. The IP address can contain a port number after a colon. Options are specified after ?
and separated by &
. You can specify multiple addresses separated by commas.
For example:
wsrep_cluster_address=\"gcomm://192.168.0.1:4567?gmcast.listen_addr=0.0.0.0:5678\"\n
If an empty gcomm://
is provided, the node will bootstrap itself (that is, form a new cluster). It is not recommended to keep an empty cluster address in the production configuration after the cluster has been bootstrapped initially. If you want to bootstrap a new cluster with a node, you should pass the --wsrep-new-cluster
option when starting.
See also
MySQL wsrep option: wsrep_cluster_address
"},{"location":"wsrep-system-index.html#wsrep_cluster_name","title":"wsrep_cluster_name
","text":"Option Description Command Line: --wsrep-cluster-name
Config File: Yes Scope: Global Dynamic: No Default Value: my_wsrep_cluster
Specifies the name of the cluster and must be identical on all nodes. A node checks the value when attempting to connect to the cluster. If the names match, the node connects.
Edit the value in the my.cnf
in the [galera] section.
[galera]\n\n wsrep_cluster_name=simple-cluster\n
Execute SHOW VARIABLES
with the LIKE operator to view the variable:
mysql> SHOW VARIABLES LIKE 'wsrep_cluster_name';\n
Expected output +--------------------+----------------+\n| Variable_name | Value |\n+--------------------+----------------+\n| wsrep_cluster_name | simple-cluster |\n+--------------------+----------------+\n
Note
The cluster name should not exceed 32 characters. A node cannot join the cluster if the cluster names do not match. You must re-bootstrap the cluster after a name change.
See also
MySQL wsrep option: wsrep_cluster_name
"},{"location":"wsrep-system-index.html#wsrep_data_home_dir","title":"wsrep_data_home_dir
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql
(or whatever path is specified by datadir
) Specifies the path to the directory where the wsrep provider stores its files (such as grastate.dat
).
See also
MySQL wsrep option: wsrep_data_home_dir
"},{"location":"wsrep-system-index.html#wsrep_dbug_option","title":"wsrep_dbug_option
","text":"Option Description Command Line: --wsrep-dbug-option
Config File: Yes Scope: Global Dynamic: Yes Defines DBUG
options to pass to the wsrep provider.
See also
MySQL wsrep option: wsrep_dbug_option
"},{"location":"wsrep-system-index.html#wsrep_debug","title":"wsrep_debug
","text":"Option Description Command Line: --wsrep-debug
Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Enables debug level logging for the database server and wsrep-lib
- an integration library for WSREP API with additional convenience for transaction processing. By default, --wsrep-debug
variable is disabled.
This variable can be used when trying to diagnose problems or when submitting a bug.
You can set wsrep_debug
in the following my.cnf
groups:
Under [mysqld]
it enables debug logging for mysqld
and the SST script.
Under [sst]
it enables debug logging for the SST script only.
This variable may be set to one of the following values:
NONE
No debug-level messages.
SERVER
wsrep-lib
general debug-level messages and detailed debug-level messages from the server_state part are printed out. Galera debug-level logs are printed out.
TRANSACTION
Same as SERVER + wsrep-lib transaction part
STREAMING
Same as TRANSACTION + wsrep-lib streaming part
CLIENT
Same as STREAMING + wsrep-lib client_service part
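A minimal configuration sketch, assuming you want SERVER-level wsrep debug output for mysqld and the SST script while troubleshooting:
[mysqld]\n# temporary troubleshooting only; remove after collecting logs\nwsrep_debug=SERVER\n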
Note
Do not enable debugging in production environments, because it logs authentication info (that is, passwords).
See also
MySQL wsrep option: wsrep_debug
"},{"location":"wsrep-system-index.html#wsrep_desync","title":"wsrep_desync
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
Defines whether the node should participate in Flow Control. By default, this variable is disabled, meaning that if the receive queue becomes too big, the node engages in Flow Control: it works through the receive queue until it reaches a more manageable size. For more information, see wsrep_local_recv_queue
and wsrep_flow_control_interval
.
Enabling this variable will disable Flow Control for the node. It will continue to receive write-sets that it is not able to apply, the receive queue will keep growing, and the node will keep falling behind the cluster indefinitely.
Toggling this back to OFF
will require an IST or an SST, depending on how long it was desynchronized. This is similar to cluster desynchronization, which occurs during RSU TOI. Because of this, it\u2019s not a good idea to enable wsrep_desync
for a long period of time or for several nodes at once.
Note
You can also desync a node using the /\\*! WSREP_DESYNC \\*/
query comment.
See also
MySQL wsrep option: wsrep_desync
"},{"location":"wsrep-system-index.html#wsrep_dirty_reads","title":"wsrep_dirty_reads
","text":"Option Description Command Line: --wsrep-dirty-reads
Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: OFF
Defines whether the node accepts read queries when in a non-operational state, that is, when it loses connection to the Primary Component. By default, this variable is disabled and the node rejects all queries, because there is no way to tell if the data is correct.
If you enable this variable, the node will permit read queries (USE
, SELECT
, LOCK TABLE
, and UNLOCK TABLES
), but any command that modifies or updates the database on a non-operational node will still be rejected (including DDL and DML statements, such as INSERT
, DELETE
, and UPDATE
).
To avoid deadlock errors, set the wsrep_sync_wait
variable to 0
if you enable wsrep_dirty_reads
.
As of Percona XtraDB Cluster 8.0.26-16, you can update the variable with a set_var hint
.
mysql> SELECT @@wsrep_dirty_reads;\n
Expected output +-----------------------+\n| @@wsrep_dirty_reads |\n+=======================+\n| OFF |\n+-----------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_dirty_reads=ON) */ @@wsrep_dirty_reads;\n
Expected output +-----------------------+\n| @@wsrep_dirty_reads |\n+=======================+\n| ON |\n+-----------------------+\n
See also
MySQL wsrep option: wsrep_dirty_reads
"},{"location":"wsrep-system-index.html#wsrep_drupal_282555_workaround","title":"wsrep_drupal_282555_workaround
","text":"Option Description Command Line: --wsrep-drupal-282555-workaround
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
Enables a workaround for MySQL InnoDB bug that affects Drupal (Drupal bug #282555 and MySQL bug #41984). In some cases, duplicate key errors would occur when inserting the DEFAULT
value into an AUTO_INCREMENT
column.
See also
MySQL wsrep option: wsrep_drupal_282555_workaround
"},{"location":"wsrep-system-index.html#wsrep_forced_binlog_format","title":"wsrep_forced_binlog_format
","text":"Option Description Command Line: --wsrep-forced-binlog-format
Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Defines a binary log format that will always be effective, regardless of the client session binlog_format
variable value.
Possible values for this variable are:
ROW
: Force row-based logging format
STATEMENT
: Force statement-based logging format
MIXED
: Force mixed logging format
NONE
: Do not force the binary log format and use whatever is set by the binlog_format
variable (default)
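For example, to force row-based logging regardless of the session binlog_format, a sketch for the [mysqld] section:
[mysqld]\nwsrep_forced_binlog_format=ROW\n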
See also
MySQL wsrep option: wsrep_forced_binlog_format
"},{"location":"wsrep-system-index.html#wsrep_ignore_apply_errors","title":"wsrep_ignore_apply_errors
","text":"Option Description Command Line: --wsrep-ignore-apply-errors
Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 Defines the rules of wsrep applier behavior on errors. You can change the settings by editing the my.cnf
file under [mysqld]
or at runtime.
Note
In Percona XtraDB Cluster version 8.0.19-10, the default value has changed from 7
to 0
. If you have been working with an earlier version of the PXC 8.0 series, you may see different behavior when upgrading to this version or later.
The variable has the following options:
Value Description WSREP_IGNORE_ERRORS_NONE All replication errors are treated as errors and will shut down the node (default behavior) WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL DROP DATABASE, DROP TABLE, DROP INDEX, ALTER TABLE are converted to a warning if they result in ER_DB_DROP_EXISTS, ER_BAD_TABLE_ERROR or ER_CANT_DROP_FIELD_OR_KEY errors WSREP_IGNORE_ERRORS_ON_RECONCILING_DML DELETE events are treated as warnings if they failed because the deleted row was not found (ER_KEY_NOT_FOUND) WSREP_IGNORE_ERRORS_ON_DDL All DDL errors will be treated as a warning WSREP_IGNORE_ERRORS_MAX Infers WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML and WSREP_IGNORE_ERRORS_ON_DDL
Setting the variable to a value between 0 and 7 enables the corresponding combination of the options above.
wsrep_min_log_verbosity
","text":"Option Description Command Line: --wsrep-min-log-verbosity
Config File: Yes Scope: Global Dynamic: Yes Default Value: 3 This variable defines the minimum logging verbosity of wsrep/Galera and acts in conjunction with the log_error_verbosity
variable. The wsrep_min_log_verbosity
has the same values as log_error_verbosity
.
The actual log verbosity of wsrep/Galera can be greater than the value of wsrep_min_log_verbosity
if log_error_verbosity
is greater than wsrep_min_log_verbosity
.
A few examples:
log_error_verbosity wsrep_min_log_verbosity MySQL Logs Verbosity wsrep Logs Verbosity 2 3 system error, warning system error, warning, info 1 3 system error system error, warning, info 1 2 system error system error, warning 3 1 system error, warning, info system error, warning, infoNote the case where log_error_verbosity=3
and wsrep_min_log_verbosity=1
. The actual log verbosity of wsrep/Galera is 3 (system error, warning, info) because log_error_verbosity
is greater.
See also
MySQL Documentation: log_error_verbosity
Galera Cluster Documentation: Database Server Logs
"},{"location":"wsrep-system-index.html#wsrep_load_data_splitting","title":"wsrep_load_data_splitting
","text":"Option Description Command Line: --wsrep-load-data-splitting
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
Defines whether the node should split large LOAD DATA
transactions. This variable is enabled by default, meaning that LOAD DATA
commands are split into transactions of 10 000 rows or less.
If you disable this variable, then huge data loads may prevent the node from completely rolling the operation back in the event of a conflict, and whatever gets committed stays committed.
Note
It doesn\u2019t work as expected with autocommit=0
when enabled.
See also
MySQL wsrep option: wsrep_load_data_splitting
"},{"location":"wsrep-system-index.html#wsrep_log_conflicts","title":"wsrep_log_conflicts
","text":"Option Description Command Line: --wsrep-log-conflicts
Config File: Yes Scope: Global Dynamic: No Default Value: OFF
Defines whether the node should log additional information about conflicts. By default, this variable is disabled and Percona XtraDB Cluster uses standard logging features in MySQL.
If you enable this variable, it will also log table and schema where the conflict occurred, as well as the actual values for keys that produced the conflict.
See also
MySQL wsrep option: wsrep_log_conflicts
"},{"location":"wsrep-system-index.html#wsrep_max_ws_rows","title":"wsrep_max_ws_rows
","text":"Option Description Command Line: --wsrep-max-ws-rows
Config File: Yes Scope: Global Dynamic: Yes Default Value: 0
(no limit) Defines the maximum number of rows each write-set can contain.
By default, there is no limit for the maximum number of rows in a write-set. The maximum allowed value is 1048576
.
See also
MySQL wsrep option: wsrep_max_ws_rows
"},{"location":"wsrep-system-index.html#wsrep_max_ws_size","title":"wsrep_max_ws_size
","text":"Option Description Command Line: --wsrep_max_ws_size
Config File: Yes Scope: Global Dynamic: Yes Default Value: 2147483647
(2 GB) Defines the maximum write-set size (in bytes). Anything bigger than the specified value will be rejected.
You can set it to any value between 1024
and the default 2147483647
.
See also
MySQL wsrep option: wsrep_max_ws_size
"},{"location":"wsrep-system-index.html#wsrep_mode","title":"wsrep_mode
","text":"Option Description Command Line: --wsrep-mode
Config File: Yes Scope: Global Dynamic: Yes Default Value: This variable has been implemented in Percona XtraDB Cluster 8.0.31.
Defines the node behavior according to a specified value. The value is empty or disabled by default.
The available values are:
Empty
- does not change the node behavior.
IGNORE_NATIVE_REPLICATION_FILTER_RULES
- changes the wsrep
behavior to ignore native replication filter rules.
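For example, a sketch of enabling this mode at runtime (wsrep_mode is global and dynamic):
mysql> SET GLOBAL wsrep_mode='IGNORE_NATIVE_REPLICATION_FILTER_RULES';\n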
See also
MySQL wsrep option: wsrep_mode
"},{"location":"wsrep-system-index.html#wsrep_node_address","title":"wsrep_node_address
","text":"Option Description Command Line: --wsrep-node-address
Config File: Yes Scope: Global Dynamic: No Default Value: IP of the first network interface (eth0
) and default port (4567
) Specifies the network address of the node. By default, this variable is set to the IP address of the first network interface (usually eth0
or enp2s0
) and the default port (4567
).
While the default value should be correct in most cases, there are situations when you need to specify it manually. For example:
Servers with multiple network interfaces
Servers that run multiple nodes
Network Address Translation (NAT)
Clusters with nodes in more than one region
Container deployments, such as Docker
Cloud deployments, such as Amazon EC2 (use the global DNS name instead of the local IP address)
The value should be specified in the following format:
<ip_address>[:port]\n
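A minimal my.cnf sketch (the address and port shown are hypothetical):
[mysqld]\nwsrep_node_address=192.168.70.63:4567\n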
Note
The value of this variable is also used as the default value for the wsrep_sst_receive_address
variable and the ist.recv_addr
option.
See also
MySQL wsrep option: wsrep_node_address
"},{"location":"wsrep-system-index.html#wsrep_node_incoming_address","title":"wsrep_node_incoming_address
","text":"Option Description Command Line: --wsrep-node-incoming-address
Config File: Yes Scope: Global Dynamic: No Default Value: AUTO
Specifies the network address from which the node expects client connections. By default, it uses the IP address from wsrep_node_address
and port number 3306.
This information is used for the wsrep_incoming_addresses
variable which shows all active cluster nodes.
See also
MySQL wsrep option: wsrep_node_incoming_address
"},{"location":"wsrep-system-index.html#wsrep_node_name","title":"wsrep_node_name
","text":"Option Description Command Line: --wsrep-node-name
Config File: Yes Scope: Global Dynamic: Yes Default Value: The node\u2019s host name Defines a unique name for the node. Defaults to the host name.
In many situations, you may use the value of this variable to identify the given node in the cluster, as an alternative to using the node address (the value of the wsrep_node_address
).
Note
The wsrep_sst_donor variable is an example where only the value of wsrep_node_name may be used; the node address is not permitted.
wsrep_notify_cmd
","text":"Option Description Command Line: --wsrep-notify-cmd
Config File: Yes Scope: Global Dynamic: No Specifies the notification command that the node should execute whenever cluster membership or local node status changes. This can be used for alerting or to reconfigure load balancers.
Note
The node will block and wait until the command or script completes and returns before it can proceed. If the script performs any potentially blocking or long-running operations, such as network communication, you should consider initiating such operations in the background and have the script return immediately.
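A minimal notification script sketch (the path and log file are hypothetical); it backgrounds its work so the node is not blocked:
#!/bin/sh\n# /usr/local/bin/wsrep-notify.sh - record membership/status changes and return immediately\necho \"$(date) wsrep notify: $*\" >> /var/log/wsrep-notify.log &\nexit 0\n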
See also
MySQL wsrep option: wsrep_notify_cmd
"},{"location":"wsrep-system-index.html#wsrep_on","title":"wsrep_on
","text":"Option Description Command Line: No Config File: No Scope: Session Dynamic: Yes Default Value: ON
Defines if current session transaction changes for a node are replicated to the cluster.
If set to OFF
for a session, no transaction changes are replicated in that session. The setting does not cause the node to leave the cluster, and the node communicates with other nodes.
See also
MySQL wsrep option: wsrep_on
"},{"location":"wsrep-system-index.html#wsrep_osu_method","title":"wsrep_OSU_method
","text":"Option Description Command Line: --wsrep-OSU-method
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: TOI
Defines the method for Online Schema Upgrade that the node uses to replicate DDL statements.
For information on the available methods, see Online Schema upgrade and for information on Non-blocking operations, see NBO.
See also
MySQL wsrep option: wsrep_OSU_method
"},{"location":"wsrep-system-index.html#wsrep_provider","title":"wsrep_provider
","text":"Option Description Command Line: --wsrep-provider
Config File: Yes Scope: Global Dynamic: No Specifies the path to the Galera library. This is usually /usr/lib64/libgalera_smm.so
on CentOS/RHEL and /usr/lib/libgalera_smm.so
on Debian/Ubuntu.
If you do not specify a path or the value is not valid, the node will behave as standalone instance of MySQL.
See also
MySQL wsrep option: wsrep_provider
"},{"location":"wsrep-system-index.html#wsrep_provider_options","title":"wsrep_provider_options
","text":"Option Description Command Line: --wsrep-provider-options
Config File: Yes Scope: Global Dynamic: No Specifies optional settings for the replication provider documented in Index of :variable:`wsrep_provider` options. These options affect how various situations are handled during replication.
See also
MySQL wsrep option: wsrep_provider_options
"},{"location":"wsrep-system-index.html#wsrep_recover","title":"wsrep_recover
","text":"Option Description Command Line: --wsrep-recover
Config File: Yes Scope: Global Dynamic: No Default Value: OFF
Location: mysqld_safe` Recovers database state after crash by parsing GTID from the log. If the GTID is found, it will be assigned as the initial position for server.
"},{"location":"wsrep-system-index.html#wsrep_reject_queries","title":"wsrep_reject_queries
","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE
Defines whether the node should reject queries from clients. Rejecting queries can be useful during upgrades, when you want to keep the node up and apply write-sets without accepting queries.
When a query is rejected, the following error is returned:
Error 1047: Unknown command\n
The following values are available:
NONE
: Accept all queries from clients (default)
ALL
: Reject all new queries from clients, but maintain existing client connections
ALL_KILL
: Reject all new queries from clients and kill existing client connections
Note
This variable doesn\u2019t affect Galera replication in any way, only the applications that connect to the database are affected. If you want to desync a node, use wsrep_desync
.
See also
MySQL wsrep option: wsrep_reject_queries
"},{"location":"wsrep-system-index.html#wsrep_replicate_myisam","title":"wsrep_replicate_myisam
","text":"Option Description Command Line: --wsrep-replicate-myisam
Config File: Yes Scope: Session, Global Dynamic: No Default Value: OFF
Defines whether DML statements for MyISAM tables should be replicated. It is disabled by default, because MyISAM replication is still experimental.
On the global level, wsrep_replicate_myisam
can be set only during startup. On session level, you can change it during runtime as well.
For older nodes in the cluster, wsrep_replicate_myisam
should work since the TOI decision (for MyISAM DDL) is done on origin node. Mixing of non-MyISAM and MyISAM tables in the same DDL statement is not recommended when wsrep_replicate_myisam
is disabled, since if any table in the list is MyISAM, the whole DDL statement is not put under TOI.
Note
You should keep in mind the following when using MyISAM replication:
DDL (CREATE/DROP/TRUNCATE) statements on MyISAM will be replicated irrespective of wsrep_replicate_myisam
value
DML (INSERT/UPDATE/DELETE) statements on MyISAM will be replicated only if wsrep_replicate_myisam
is enabled
SST will get full transfer irrespective of wsrep_replicate_myisam
value (it will get MyISAM tables from donor)
Difference in configuration of pxc-cluster
node on enforce_storage_engine front may result in picking up different engine for the same table on different nodes
CREATE TABLE AS SELECT
(CTAS) statements use TOI replication. MyISAM tables are created and loaded even if wsrep_replicate_myisam
is set to ON.
wsrep_restart_replica
","text":"Option Description Command Line: --wsrep-restart-replica
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave
variable is deprecated in favor of this variable.
Defines whether replication replica should be restarted when the node joins back to the cluster. Enabling this can be useful because asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in non-primary state.
See also
MySQL wsrep option: wsrep_restart_slave
"},{"location":"wsrep-system-index.html#wsrep_restart_slave","title":"wsrep_restart_slave
","text":"Option Description Command Line: --wsrep-restart-slave
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave
variable is deprecated and may be removed in later versions. Use wsrep_restart_replica
.
Defines whether replication replica should be restarted when the node joins back to the cluster. Enabling this can be useful because asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in non-primary state.
"},{"location":"wsrep-system-index.html#wsrep_retry_autocommit","title":"wsrep_retry_autocommit
","text":"Option Description Command Line: --wsrep-retry-autocommit
Config File: Yes Scope: Global Dynamic: No Default Value: 1
Specifies the number of times autocommit transactions will be retried in the cluster if it encounters certification errors. In case there is a conflict, it should be safe for the cluster node to simply retry the statement without returning an error to the client, hoping that it will pass next time.
This can be useful to help an application using autocommit to avoid deadlock errors that can be triggered by replication conflicts.
If this variable is set to 0
, autocommit transactions won\u2019t be retried.
See also
MySQL wsrep option: wsrep_retry_autocommit
"},{"location":"wsrep-system-index.html#wsrep_rsu_commit_timeout","title":"wsrep_RSU_commit_timeout
","text":"Option Description Command Line: --wsrep-RSU-commit-timeout
Config File: Yes Scope: Global Dynamic: Yes Default Value: 5000
Range: From 5000
(5 milliseconds) to 31536000000000
(365 days) Specifies the timeout in microseconds to allow active connection to complete COMMIT action before starting RSU.
While running RSU it is expected that user has isolated the node and there is no active traffic executing on the node. RSU has a check to ensure this, and waits for any active connection in COMMIT
state before starting RSU.
By default this check has timeout of 5 milliseconds, but in some cases COMMIT is taking longer. This variable sets the timeout, and has allowed values from the range of (5 milliseconds, 365 days). The value is to be set in microseconds. Unit of variable is in micro-secs so set accordingly.
Note
RSU operation will not auto-stop node from receiving active traffic. So there could be a continuous flow of active traffic while RSU continues to wait, and that can result in RSU starvation. User is expected to block active RSU traffic while performing operation.
"},{"location":"wsrep-system-index.html#wsrep_slave_fk_checks","title":"wsrep_slave_FK_checks
","text":"Option Description Command Line: --wsrep-slave-FK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: ON
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_FK_checks
variable.
Defines whether foreign key checking is done for applier threads. This is enabled by default.
"},{"location":"wsrep-system-index.html#wsrep_slave_threads","title":"wsrep_slave_threads
","text":"Option Description Command Line: --wsrep-slave-threads
Config File: Yes Scope: Global Dynamic: Yes Default Value: 1
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads
variable.
Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.
Note
When you decrease the number of threads, it won\u2019t kill the threads immediately, but stop them after they are done applying current transaction (the effect with an increase is immediate though).
If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1
to see if that resolves the issue. The default value can be increased for better throughput.
You may want to increase it as suggested in Codership documentation for flow control
: when the node is in JOINED
state, increasing the number of replica threads can speed up the catchup to SYNCED
.
You can also estimate the optimal value for this from wsrep_cert_deps_distance
as suggested in the Galera Cluster documentation.
For more configuration tips, see this document.
"},{"location":"wsrep-system-index.html#wsrep_slave_uk_checks","title":"wsrep_slave_UK_checks
","text":"Option Description Command Line: --wsrep-slave-UK-checks
Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF
As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks
variable.
Defines whether unique key checking is done for applier threads. This is disabled by default.
"},{"location":"wsrep-system-index.html#wsrep_sr_store","title":"wsrep_SR_store
","text":"Option Description Command Line: --wsrep-sr-store
Config File: Yes Scope: Global Dynamic: No Default Value: table
Defines storage for streaming replication fragments. The available values are table
, the default value, and none
, which disables the variable.
wsrep_sst_allowed_methods
","text":"Option Description Command Line: --wsrep_sst_allowed_methods
Config File: Yes Scope: Global Dynamic: No Default Value: xtrabackup-v2
Percona XtraDB Cluster 8.0.20-11.3 adds this variable.
This variable limits SST methods accepted by the server for wsrep_sst_method variable. The default value is xtrabackup-v2
.
wsrep_sst_donor
","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Specifies a list of nodes (using their wsrep_node_name
values) that the current node should prefer as donors for SST and IST.
Warning
Using IP addresses of nodes instead of node names (the value of wsrep_node_name
) as values of wsrep_sst_donor
results in an error.
ERROR] WSREP: State transfer request failed unrecoverably: 113 (No route\nto host). Most likely it is due to inability to communicate with the\ncluster primary component. Restart required.\n
If the value is empty, the first node in SYNCED state in the index becomes the donor and will not be able to serve requests during the state transfer.
To consider other nodes if the listed nodes are not available, add a comma at the end of the list, for example:
wsrep_sst_donor=node1,node2,\n
If you remove the trailing comma from the previous example, then the joining node will consider only node1
and node2
.
Note
By default, the joiner node does not wait for more than 100 seconds to receive the first packet from a donor. This is implemented via the sst-initial-timeout
option. If you set the list of preferred donors without the trailing comma or believe that all nodes in the cluster can often be unavailable for SST (this is common for small clusters), then you may want to increase the initial timeout (or disable it completely if you don\u2019t mind the joiner node waiting for the state transfer indefinitely).
See also
MySQL wsrep option: wsrep_sst_donor
"},{"location":"wsrep-system-index.html#wsrep_sst_method","title":"wsrep_sst_method
","text":"Option Description Command Line: --wsrep-sst-method
Config File: Yes Scope: Global Dynamic: Yes Default Value: xtrabackup-v2 Defines the method or script for State Snapshot Transfer (SST).
Available values are:
xtrabackup-v2
: Uses Percona XtraBackup to perform SST. This value is the default. Privileges and permissions for running Percona XtraBackup can be found in Percona XtraBackup documentation. For more information, see Percona XtraBackup SST Configuration.
skip
: Use this to skip SST. Removed in Percona XtraDB Cluster 8.0.33-25. This value can be used when initially starting the cluster and manually restoring the same data to all nodes. This value should not be used permanently because it could lead to data inconsistency across the nodes.
ist_only
: Introduced in Percona XtraDB Cluster 8.0.33-25. This value allows only Incremental State Transfer (IST). If a node cannot sync with the cluster with IST, abort that node\u2019s start. This action leaves the data directory unchanged. This value prevents starting a node, after a manual backup restoration, that does not have a grastate.dat
file. This missing file could initiate a full-state transfer (SST) which can be a more time and resource-intensive operation.
Note
xtrabackup-v2
provides support for clusters with GTIDs and async replicas.
See also
MySQL wsrep option: wsrep_sst_method
"},{"location":"wsrep-system-index.html#wsrep_sst_receive_address","title":"wsrep_sst_receive_address
","text":"Option Description Command Line: --wsrep-sst-receive-address
Config File: Yes Scope: Global Dynamic: Yes Default Value: AUTO
Specifies the network address where donor node should send state transfers. By default, this variable is set to AUTO
, meaning that the IP address from wsrep_node_address
is used.
See also
MySQL wsrep option: wsrep_sst_receive_address
"},{"location":"wsrep-system-index.html#wsrep_start_position","title":"wsrep_start_position
","text":"Option Description Command Line: --wsrep-start-position
Config File: Yes Scope: Global Dynamic: Yes Default Value: 00000000-0000-0000-0000-00000000000000:-1
Specifies the node\u2019s start position as UUID:seqno
. By setting all the nodes to have the same value for this variable, the cluster can be set up without the state transfer.
See also
MySQL wsrep option: wsrep_start_position
"},{"location":"wsrep-system-index.html#wsrep_sync_wait","title":"wsrep_sync_wait
","text":"Option Description Command Line: --wsrep-sync-wait
Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: 0
Controls cluster-wide causality checks on certain statements. Checks ensure that the statement is executed on a node that is fully synced with the cluster.
As of Percona XtraDB Cluster 8.0.26-16, you are able to update the variable with a set_var hint.
mysql> SELECT @@wsrep_sync_wait;\n
Expected output +---------------------+\n| @@wsrep_sync_wait |\n+=====================+\n| 3 |\n+---------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_sync_wait=7) */ @@wsrep_sync_wait;\n
Expected output +---------------------+\n| @@wsrep_sync_wait |\n+=====================+\n| 7 |\n+---------------------+\n
Note
Causality checks of any type can result in increased latency.
The type of statements to undergo checks is determined by bitmask:
0
: Do not run causality checks for any statements. This is the default.
1
: Perform checks for READ
statements (including SELECT
, SHOW
, and BEGIN
or START TRANSACTION
).
2
: Perform checks for UPDATE
and DELETE
statements.
3
: Perform checks for READ
, UPDATE
, and DELETE
statements.
4
: Perform checks for INSERT
and REPLACE
statements.
5
: Perform checks for READ
, INSERT
, and REPLACE
statements.
6
: Perform checks for UPDATE
, DELETE
, INSERT
, and REPLACE
statements.
7
: Perform checks for READ
, UPDATE
, DELETE
, INSERT
, and REPLACE
statements.
Note
Setting wsrep_sync_wait
to 1
is the equivalent of setting the deprecated wsrep_causal_reads
to ON
.
See also
MySQL wsrep option: wsrep_sync_wait
"},{"location":"wsrep-system-index.html#wsrep_trx_fragment_size","title":"wsrep_trx_fragment_size
","text":"Option Description Command Line: --wsrep-trx-fragment-size
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: 0 Defines the streaming replication fragment size. This variable is measured in the unit defined by wsrep_trx_fragment_unit
. The minimum value is 0 and the maximum value is 2147483647.
As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.
mysql> SELECT @@@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| statements |\n+------------------------------+\n| @@wsrep_trx_fragment_size |\n+------------------------------+\n| 3 |\n+------------------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_size=5) */ @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_size |\n+==============================+\n| 5 |\n+------------------------------+\n
You can also use set_var() in a data manipulation language (DML) statement. This ability is useful when streaming large statements within a transaction.
node1> BEGIN;\nQuery OK, 0 rows affected (0.00 sec)\n\nnode1> INSERT /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ INTO t1 SELECT * FROM t1; \nQuery OK, 65536 rows affected (15.15 sec)\nRecords: 65536 Duplicates: 0 Warnings: 0\n\nnode1> UPDATE /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ t1 SET i=2;\nQuery OK, 131072 rows affected (1 min 35.93 sec)\nRows matched: 131072 Changed: 131072 Warnings: 0\n\nnode2> SET SESSION TRANSACTION_ISOLATION = 'READ-UNCOMMITTED';\nQuery OK, 0 rows affected (0.00 sec)\n\nnode2> SELECT * FROM t1 LIMIT 5;\n+---+\n| i |\n+===+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\nnode1> DELETE /*+SET_VAR(wsrep_trx_fragment_size = 10000)*/ FROM t1;\nQuery OK, 131072 rows affected (15.09 sec)\n
"},{"location":"wsrep-system-index.html#wsrep_trx_fragment_unit","title":"wsrep_trx_fragment_unit
","text":"Option Description Command Line: --wsrep-trx-fragment-unit
Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: \u201cbytes\u201d Defines the type of measure for the wsrep_trx_fragment_size
. The possible values are: bytes, rows, statements.
As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.
mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| statements |\n+------------------------------+\n| @@wsrep_trx_fragment_size |\n+------------------------------+\n| 3 |\n+------------------------------+\n
mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_unit=rows) */ @@wsrep_trx_fragment_unit;\n
Expected output +------------------------------+\n| @@wsrep_trx_fragment_unit |\n+==============================+\n| rows |\n+------------------------------+\n
"},{"location":"wsrep-system-index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"xtrabackup-sst.html","title":"Percona XtraBackup SST configuration","text":"Percona XtraBackup SST works in two stages:
First it identifies the type of data transfer based on the presence of xtrabackup_ist
file on the joiner node.
Then it starts data transfer. In case of SST, it empties the data directory except for some files (galera.cache
, sst_in_progress
, grastate.dat
) and then proceeds with SST.
In case of IST, it proceeds as before.
The following options specific to SST can be used in my.cnf
under [sst]
.
Note
Non-integer options which have no default value are disabled if not set.
:Match: Yes
implies that option should match on donor and joiner nodes.
SST script reads my.cnf
when it runs on either donor or joiner node, not during mysqld
startup.
SST options must be specified in the main my.cnf
file.
Used to specify the Percona XtraBackup streaming format. The only option is the xbstream
format. SST fails and generates an error when another format, such as tar
, is used.
For more information about the xbstream
format, see The xbstream Binary.
socat
, nc
Default: socat
Match: Yes Used to specify the data transfer format. The recommended value is the default transferfmt=socat
because it allows for socket options, such as transfer buffer sizes. For more information, see socat(1).
Note
Using transferfmt=nc
does not support the SSL-based encryption mode (value 4
for the encrypt
option).
Example: ssl-ca=/etc/ssl/certs/mycert.crt
Specifies the absolute path to the certificate authority (CA) file for socat
encryption based on OpenSSL.
Example: ssl-cert=/etc/ssl/certs/mycert.pem
Specifies the full path to the certificate file in the PEM format for socat
encryption based on OpenSSL.
Note
For more information about ssl-ca
and ssl-cert
, see https://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The ssl-ca
is essentially a self-signed certificate in that example, and ssl-cert
is the PEM file generated after concatenation of the key and the certificate generated earlier. The names of options were chosen to be compatible with socat
parameter names as well as with MySQL\u2019s SSL authentication. For testing you can also download certificates from launchpad.
Note
Irrespective of what is shown in the example, you can use the same .crt and .pem files on all nodes and it will work, since there is no server-client paradigm here, but rather a cluster with homogeneous nodes.
"},{"location":"xtrabackup-sst.html#ssl-key","title":"ssl-key","text":"Example: ssl-key=/etc/ssl/keys/key.pem
Used to specify the full path to the private key in PEM format for socat encryption based on OpenSSL.
"},{"location":"xtrabackup-sst.html#encrypt","title":"encrypt","text":"Parameter Description Values: 0, 4 Default: 4 Match: YesEnables SST encryption mode in Percona XtraBackup:
Set encrypt=0
to disable SST encryption.
Set encrypt=4
for SST encryption with SSL files generated by MySQL. This is the recommended value.
Considering that you have all three necessary files:
[sst]\nencrypt=4\nssl-ca=ca.pem\nssl-cert=server-cert.pem\nssl-key=server-key.pem\n
For more information, see Encrypting PXC Traffic.
"},{"location":"xtrabackup-sst.html#sockopt","title":"sockopt","text":"Used to specify key/value pairs of socket options, separated by commas, for example:
[sst]\nsockopt=\"retry=2,interval=3\"\n
The previous example causes socat to try to connect three times (initial attempt and two retries with a 3-second interval between attempts).
This option only applies when socat is used (transferfmt=socat
). For more information about socket options, see socat (1).
Note
You can also enable SSL based compression with sockopt
. This can be used instead of the Percona XtraBackup compress
option.
Used to specify socket options for the netcat
transfer format (transferfmt=nc
).
Values: 1, path/to/file
Used to specify where to write SST progress. If set to 1
, it writes to MySQL stderr
. Alternatively, you can specify the full path to a file. If this is a FIFO, it needs to exist and be open on reader end before itself, otherwise wsrep_sst_xtrabackup
will block indefinitely.
Note
Value of 0 is not valid.
"},{"location":"xtrabackup-sst.html#rebuild","title":"rebuild","text":"Parameter Description Values: 0, 1 Default: 0Used to enable rebuilding of index on joiner node. This is independent of compaction, though compaction enables it. Rebuild of indexes may be used as an optimization.
Note
#1192834 affects this option.
"},{"location":"xtrabackup-sst.html#time","title":"time","text":"Parameter Description Values: 0, 1 Default: 0Enabling this option instruments key stages of backup and restore in SST.
"},{"location":"xtrabackup-sst.html#rlimit","title":"rlimit","text":"Example: rlimit=128k
Used to set a a ratelimit in bytes. Add a suffix (k, m, g, t) to specify units. For example, 128k
is 128 kilobytes. For more information, see pv(1).
Note
Rate is limited on donor node. The rationale behind this is to not allow SST to saturate the donor\u2019s regular cluster operations or to limit the rate for other purposes.
"},{"location":"xtrabackup-sst.html#use_extra","title":"use_extra","text":"Parameter Description Values: 0, 1 Default: 0Used to force SST to use the thread pool\u2019s extra_port. Make sure that thread pool is enabled and the extra_port
option is set in my.cnf
before you enable this option.
Default: '.\\*\\\\.pem$\\\\|.\\*init\\\\.ok$\\\\|.\\*galera\\\\.cache$\\\\|.\\*sst_in_progress$\\\\|.\\*\\\\.sst$\\\\|.\\*gvwstate\\\\.dat$\\\\|.\\*grastate\\\\.dat$\\\\|.\\*\\\\.err$\\\\|.\\*\\\\.log$\\\\|.\\*RPM_UPGRADE_MARKER$\\\\|.\\*RPM_UPGRADE_HISTORY$'
Used to define the files that need to be retained in the datadir before running SST, so that the state of the other node can be restored cleanly.
For example:
[sst]\ncpat='.*galera\\.cache$\\|.*sst_in_progress$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*\\.xyz$'\n
Note
This option can only be used when wsrep_sst_method
is set to xtrabackup-v2
(which is the default value).
Stream-based compression and decompression are performed on the stream, in contrast to performing decompression after streaming to disk, which involves additional I/O. The savings are considerable, up to half the I/O on the JOINER node.
You can use any compression utility which works on stream: gzip
, pigz
, zstd
, and others. The pigz
or zstd
options are multi-threaded. At a minimum, the compressor must be set on the DONOR and the decompressor on JOINER.
You must install the related binaries, otherwise SST aborts.
compressor=\u2019pigz\u2019 decompressor=\u2019pigz -dc\u2019
compressor=\u2019gzip\u2019 decompressor=\u2019gzip -dc\u2019
To revert to the XtraBackup-based compression, set compress
under [xtrabackup]
. You can define both the compressor and the decompressor, although you will be wasting CPU cycles.
[xtrabackup]\ncompress\n\n-- compact has led to some crashes\n
"},{"location":"xtrabackup-sst.html#inno-backup-opts","title":"inno-backup-opts","text":""},{"location":"xtrabackup-sst.html#inno-apply-opts","title":"inno-apply-opts","text":""},{"location":"xtrabackup-sst.html#inno-move-opts","title":"inno-move-opts","text":"Parameter Description Default: Empty Type: Quoted String This group of options is used to pass XtraBackup options for backup, apply, and move stages. The SST script doesn\u2019t alter, tweak, or optimize these options.
Note
Although these options are related to XtraBackup SST, they cannot be specified in my.cnf
, because they are for passing innobackupex options.
This option is used to configure initial timeout (in seconds) to receive the first packet via SST. This has been implemented, so that if the donor node fails somewhere in the process, the joiner node will not hang up and wait forever.
By default, the joiner node will not wait for more than 100 seconds to get a donor node. The default should be sufficient, however, it is configurable, so you can set it appropriately for your cluster. To disable initial SST timeout, set sst-initial-timeout=0
.
Note
If you are using wsrep_sst_donor
, and you want the joiner node to strictly wait for donors listed in the variable and not fall back (that is, without a terminating comma at the end), and there is a possibility of all nodes in that variable to be unavailable, disable initial SST timeout or set it to a higher value (maximum threshold that you want the joiner node to wait). You can also disable this option (or set it to a higher value) if you believe all other nodes in the cluster can potentially become unavailable at any point in time (mostly in small clusters) or there is a high network latency or network disturbance (which can cause donor selection to take longer than 100 seconds).
This option configures the time the SST operation waits on the joiner to receive more data. The size of the joiner\u2019s sst directory is checked for the amount of data received. For example, the directory has received 50MB of data. The operation rechecks the data size after the default value, 120 seconds, has elapsed. If the data size is still 50MB, this operation is aborted. If the data has increased, the operation continues.
An example of setting the option:
[sst]\nsst-idle-timeout=0\n
"},{"location":"xtrabackup-sst.html#tmpdir","title":"tmpdir","text":"Parameter Description Default: Empty Unit: /path/to/tmp/dir This option specifies the location for storing the temporary file on a donor node where the transaction log is stored before streaming or copying it to a remote host.
Note
This option can be used on joiner node to specify non-default location to receive temporary SST files. This location must be large enough to hold the contents of the entire database. If tmpdir is empty then default location datadir/.sst will be used.
The tmpdir
option can be set in the following my.cnf
groups:
[sst]
is the primary location (others are ignored)
[xtrabackup]
is the secondary location (if not specified under [sst]
)
[mysqld]
is used if it is not specified in either of the above
wsrep_debug
Specifies whether additional debugging output for the database server error log should be enabled. Disabled by default.
This option can be set in the following my.cnf
groups:
Under [mysqld]
it enables debug logging for mysqld
and the SST script
Under [sst]
it enables debug logging for the SST script only
4
Specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1
). The value is passed using the --encrypt-threads
option in XtraBackup.
This option affects only SST with XtraBackup and should be specified under the [sst]
group.
4
Specifies the number of threads that XtraBackup should use to create backups. See the --parallel
option in XtraBackup.
This option affects only SST with XtraBackup and should be specified under the [sst]
group.
Each suppored version of Percona XtraDB Cluster is tested against a specific version of Percona XtraBackup:
Percona XtraDB Cluster 5.6 requires Percona XtraBackup 2.3
Percona XtraDB Cluster 5.7 requires Percona XtraBackup 2.4
Percona XtraDB Cluster 8.0 requires Percona XtraBackup 8.0
Other combinations are not guaranteed to work.
The following are optional dependencies of Percona XtraDB Cluster introduced by wsrep_sst_xtrabackup-v2
(except for obvious and direct dependencies):
qpress
for decompression. It is an optional dependency of Percona XtraBackup and it is available in our software repositories.
my_print_defaults
to extract values from my.cnf
. Provided by the server package.
openbsd-netcat
or socat
for transfer. socat
is a direct dependency of Percona XtraDB Cluster and it is the default.
xbstream
or tar
for streaming. xbstream
is the default.
pv
is required for progress
and rlimit
.
mkfifo
is required for progress
. Provided by coreutils
.
mktemp
is required. Provided by coreutils
.
which
is required.
Settings related to XtraBackup-based Encryption are no longer allowed in PXC 8.0 when used for SST. If it is detected that XtraBackup-based Encryption is enabled, PXC will produce an error.
The XtraBackup-based Encryption is enabled when you specify any of the following options under [xtrabackup]
in my.cnf
:
encrypt
encrypt-key
encrypt-key-file
The amount of memory for XtraBackup is defined by the --use-memory
option. You can pass it using the inno-apply-opts
option under [sst]
as follows:
[sst]\ninno-apply-opts=\"--use-memory=500M\"\n
If it is not specified, the use-memory
option under [xtrabackup]
will be used:
[xtrabackup]\nuse-memory=32M\n
If neither of the above are specified, the size of the InnoDB memory buffer will be used:
[mysqld]\ninnodb_buffer_pool_size=24M\n
"},{"location":"xtrabackup-sst.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"xtradb-cluster-version-numbers.html","title":"Understand version numbers","text":"A version number identifies the product release. The product contains the latest Generally Available (GA) features at the time of that release.
8.0.20 -11. 2 Base version Minor build Custom buildPercona uses semantic version numbering, which follows the pattern of base version, minor build, and an optional custom build. Percona assigns unique, non-negative integers in increasing order for each minor build release. The version number combines the base Percona Server for MySQL version number, the minor build version, and the custom build version, if needed.
The version numbers for Percona XtraDB Cluster 8.0.20-11.2 define the following information:
Base version - the leftmost set of numbers that indicate the Percona Server for MySQL version used as a base. An increase in the base version resets the minor build version and the custom build version to 0.
Minor build version - an internal number that increases with every Percona XtraDB Cluster release, and the custom build number is reset to 0.
Custom build version - an optional number assigned to custom builds used for bug fixes. The features don\u2019t change unless the fixes include those features. For example, Percona XtraDB Cluster 8.0.20-11.1, 8.0.20-11.2, and 8.0.20-11.3 are based on the same Percona Server for MySQL version and minor build version but are custom build versions.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"yum.html","title":"Install Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS","text":"A list of the supported platforms by products and versions is available in Percona Software and Platform Lifecycle.
We gather Telemetry data in the Percona packages and Docker images.
You can install Percona XtraDB Cluster with the following methods:
Use the official repository using YUM
Download and manually install the Percona XtraDB Cluster packages from Percona Product Downloads.
Use the Percona Software repositories
This documentation describes using the Percona Software repositories.
"},{"location":"yum.html#prerequisites","title":"Prerequisites","text":"Installing Percona XtraDB Cluster requires that you either are logged in as a user with root privileges or can run commands with sudo.
Percona XtraDB Cluster requires the specific ports for communication. Make sure that the following ports are available:
3306
4444
4567
4568
For information on SELinux, see Enabling SELinux.
"},{"location":"yum.html#install-from-percona-software-repository","title":"Install from Percona Software Repository","text":"For more information on the Percona Software repositories and configuring Percona Repositories with percona-release
, see the Percona Software Repositories Documentation.
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release enable-only pxc-80 release\n$ sudo percona-release enable tools release\n$ sudo yum install percona-xtradb-cluster\n
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup pxc-80\n$ sudo yum install percona-xtradb-cluster\n
"},{"location":"yum.html#after-installation","title":"After installation","text":"After the installation, start the mysql
service and find the temporary password using the grep
command.
$ sudo service mysql start\n$ sudo grep 'temporary password' /var/log/mysqld.log\n
Use the temporary password to log into the server:
$ mysql -u root -p\n
Run an ALTER USER
statement to change the temporary password, exit the client, and stop the service.
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';\nmysql> exit\n$ sudo service mysql stop\n
"},{"location":"yum.html#next-steps","title":"Next steps","text":"Configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.
"},{"location":"yum.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.29-21.html","title":"Percona XtraDB Cluster 8.0.29-21 (2022-09-12)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.29-21.html#release-highlights","title":"Release Highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.29-21 (2022-08-08) release notes.
The improvements and bug fixes for MySQL 8.0.29, provided by Oracle, and included in Percona Server for MySQL are the following:
The Performance Schema tracks if a query was processed on the PRIMARY engine, InnoDB, or a SECONDARY engine, HeatWave. An EXECUTION_ENGINE column, which indicates the engine used, was added to the Performance Schema statement event tables and the sys.processlist and the sys.x$processlist views.
Added support for the IF NOT EXISTS option for the CREATE FUNCTION, CREATE PROCEDURE, and CREATE TRIGGER statements.
Added support for ALTER TABLE \u2026 DROP COLUMN ALGORITHM=INSTANT.
An anonymous user with the PROCESS privilege was unable to select processlist table rows.
Find the full list of bug fixes and changes in the MySQL 8.0.29 Release Notes.
Note
Percona Server for MySQL has changed the default for the supported DDL column operations to ALGORITHM=INPLACE. This change fixes the corruption issue with the INSTANT ADD/DROP COLUMNS (find more details in PS-8292.
In MySQL 8.0.29, the default setting for supported DDL operations is ALGORITHM=INSTANT. You can explicitly specify ALGORITHM=INSTANT in DDL column operations.
"},{"location":"release-notes/8.0.29-21.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3982: When the replica node is also an asynchronous slave and, while joining the cluster, this node was not ready to accept connections, a SQL thread failed at the start.
PXC-3118: A fix for when, using a thread pool, a brute force abort for a metadata locking (MDL) subsystem conflict stalled.
PXC-3999: The cluster was stalled on Waiting for acl cache lock
with concurrent user DDL commands.
Debian 9 is no longer supported.
"},{"location":"release-notes/8.0.29-21.html#useful-links","title":"Useful Links","text":"The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.30-22.html","title":"Percona XtraDB Cluster 8.0.30-22.md (2022-12-28)","text":"Release date December 28, 2022 Install instructions Install Percona XtraDB Cluster Download this version Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
For paid support, managed services or consulting services, contact Percona Sales.
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.30-22.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.30-22 (2022-11-21) release notes.
Note
The following Percona Server for MySQL 8.0.30 features are not supported in this version of Percona XtraDB Cluster:
Amazon Key Management Service
Key Management Interoperability Protocol
The features will be supported in the next version of Percona XtraDB Cluster.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.30 and included in Percona Server for MySQL are the following:
Supports Generated Invisible Primary Keys(GIPK). This feature automatically adds a primary key to InnoDB tables without a primary key. The generated key is always named my_row_id
. The GIPK feature is not enabled by default. Enable the feature by setting sql_generate_invisible_primary_key
to ON.
The InnoDB_doublewrite system has two new settings:
DETECT_ONLY
. This setting allows only metadata to be written to the doublewrite buffer. Database page content is not written to the buffer. Recovery does not use the buffer to fix incomplete page writes. Use this setting only when you need to detect incomplete page writes.
DETECT_AND_RECOVER
. This setting is equivalent to the current ON setting. The doublewrite buffer is enabled. Database page content is written to the buffer and the buffer is accessed to fix incomplete page writes during recovery.
The -skip_host_cache
server option is deprecated and will be removed in a future release. Use SET GLOBAL host_cache_size
= 0 or set host_cache_size
= 0.
Find the full list of bug fixes and changes in the MySQL 8.0.30 release notes.
"},{"location":"release-notes/8.0.30-22.html#bug-fixes","title":"Bug fixes","text":"PXC-3639: The buffer overflow was not considered when using strncpy
in WSREP
patch.
PXC-3821: The truncation of the performance_schema
table on a node was replicated across the cluster.
PXC-4012: The replica node left the cluster when executing CREATE USER
with password_history
option simultaneously.
PXC-4033: When the prepared statement is executed in parallel to the DDL modifying the table that the prepared statement uses, the server fails with an assertion saying that the prepared statement transaction was aborted, so it cannot be committed.
PXC-4048: gra_x_y_v2.log
files created in case of failures were empty.
Percona XtraDB Cluster 8.0.30-22 supports Oracle Linux/Red Hat Enterprise Linux 9.
Percona XtraDB Cluster 8.0.30-22 supports Ubuntu 22.04.
The Percona XtraDB Cluster GitHub location
Contribute to the documentation
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.31-23.2.html","title":"Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)","text":"Release date April 04, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.31-23.2.html#release-highlights","title":"Release highlights","text":"This release of Percona XtraDB Cluster 8.0.31-23 includes the fix to the security vulnerability CVE-2022-25834 with PXB-2977.
"},{"location":"release-notes/8.0.31-23.2.html#useful-links","title":"Useful links","text":"The Percona XtraDB Cluster GitHub location
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.31-23.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.31-23.html","title":"Percona XtraDB Cluster 8.0.31-23 (2023-03-14)","text":"Release date 2024-04-03 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.31-23.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.31-23 (2022-11-21) release notes.
This release adds the following feature in tech preview:
Improvements and bug fixes introduced by Oracle for MySQL 8.0.31 and included in Percona Server for MySQL are the following:
MySQL adds support for the SQL standard INTERSECT
and EXCEPT
table operators.
InnoDB supports parallel index builds. This improves index build performance. The sorted index entries are loaded into a B-tree in a multithread. In previous releases, this action was performed by a single thread.
The Performance and sys schemas show metrics for the global and session memory limits introduced in MySQL 8.0.28.
The following columns have been added to the Performance Schema tables:
Performance Schema tables Columns SETUP_INSTRUMENTS FLAGS THREADS CONTROLLED_MEMORY, MAX_CONTROLLED_MEMORY, TOTAL_MEMORY, MAX_TOTAL_MEMORY EVENTS_STATEMENTS_CURRENT, EVENTS_STATEMENTS_HISTORY, EVENTS_STATEMENTS_HISTORY_LONG MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORY Statement Summary Tables MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORY Performance Schema Connection Tables MAX_SESSION_CONTROLLED_MEMORY, MAX_SESSION_TOTAL_MEMORY PREPARED_STATEMENTS_INSTANCES MAX_CONTROLLED_MEMORY, MAX_TOTAL_MEMORYThe following columns have been added to the sys schema STATEMENT_ANALYSIS
and X$STATEMENT_ANALYSIS
views:
MAX_CONTROLLED_MEMORY
MAX_TOTAL_MEMORY
The controlled_by_default
flag has been added to the PROPERTIES
column of the SETUP_INSTRUMENTS
table.
Now, you can add and remove non-global memory instruments to the set of controlled-memory instruments. To do this, set the value of the FLAGS
column of SETUP_INSTRUMENTS
.
SQL> UPDATE PERFORMANCE_SCHEMA.SETUP_INTRUMENTS SET FLAGS=\"controlled\" \nWHERE NAME='memory/sql/NET::buff';\n
The audit_log_flush
variable has been deprecated and will be removed in future releases.
Find the full list of bug fixes and changes in the MySQL 8.0.31 Release Notes.
"},{"location":"release-notes/8.0.31-23.html#new-features","title":"New Features","text":"Added support for GCache and Write-Set encryption.
PXC-3574: Added support for the wsrep_mode
variable.
PXC-3989: Added support for keyring components.
PXC-4077: Injecting an empty transaction caused GTID inconsistencies between nodes.
PXC-4120: Enabling wsrep-debug created multiple entries of wsrep_commit_empty()
in the Error log.
PXC-4126: When stream replication and TOI are active, the CREATE USER
statement was not allowed.
PXC-4116: A PXC replica node stalled with parallel asynchronous parallel replication.
PXC-4148: A fix for the MDL conflict db= ticket=10 solved by abort
error.
The Percona XtraDB Cluster GitHub location
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.31-23.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.32-24.2.html","title":"Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)","text":"Release date May 24, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.32-24.2.html#release-highlights","title":"Release highlights","text":"This release of Percona XtraDB Cluster 8.0.32-24 includes the fix for PXC-4211.
"},{"location":"release-notes/8.0.32-24.2.html#bug-fixes","title":"Bug fixes","text":"PXC-4211: The server exited on the binary log rotation.
PXC-4217: The cluster can intermittently abort a node on an insert query.
PXC-4222: A node abruptly leaving the cluster causes the applier thread to hang on all the remaining nodes.
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.32-24.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.32-24.html","title":"Percona XtraDB Cluster 8.0.32-24 (2023-04-18)","text":"Release date April 18, 2023 Install instructions Install Percona XtraDB ClusterPercona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.32-24.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.32-24 (2023-03-20) release notes.
Percona decided to revert the following MySQL bug fix:
The data and the GTIDs backed up by mysqldump were inconsistent when the options --single-transaction
and --set-gtid-purged=ON
were both used. It was because in between the transaction started by mysqldump and the fetching of GTID_EXECUTED, GTIDs on the server could have increased already. With this fixed, a FLUSH TABLES WITH READ LOCK
is performed before the fetching of GTID_EXECUTED
to ensure its value is consistent with the snapshot taken by mysqldump.
The MySQL fix also added a requirement when using \u2013single-transaction and executing FLUSH TABLES WITH READ LOCK for the RELOAD privilege. (MySQL bug #109701, MySQL bug #105761)
The Percona Server version of the mysqldump
utility, in some modes, can be used with MySQL Server. This utility provides a temporary workaround for the \u201cadditional RELOAD privilege\u201d limitation introduced by Oracle MySQL Server 8.0.32.
For more information, see the Percona Performance Blog A Workaround for the \u201cRELOAD/FLUSH_TABLES privilege required\u201d Problem When Using Oracle mysqldump 8.0.32.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.32 and included in Percona Server for MySQL are the following:
A replica can add a Generated Invisible Primary Keys(GIPK) to any InnoDB table. To achieve this behavior, the GENERATE
value is added as a possible value for the CHANGE REPLICATION SOURCE TO
statement\u2019s REQUIRE_TABLE_PRIMARY_KEY_CHECK
option.
The REQUIRE_TABLE_PRIMARY_KEY_CHECK = GENERATE
option can be used on a per-channel basis.
Setting sql_generate_invisible_primary_key
on the source is ignored by a replica because this variable is not replicated. This behavior is inherited from the previous releases.
An upgrade from 8.0.28 caused undetectable problems, such as server exit and corruption.
A fix for after an upgrade, all columns added with ALGORITHM=INSTANT
materialized and have version=0
for any new row inserted. Now, a column added with ALGORITHM=INSTANT
fails if the maximum possible size of a row exceeds the row size limit, so that all new rows with materialized ALGORITHM=INSTANT
columns are within row size limit. (Bug #34558510)
After a drop, adding a specific column using the INSTANT algorithm could cause a data error and a server exit. (Bug #34122122)
An online rebuild DDL no longer crashes after a column is added with ALGORITHM=INSTANT
. Thank you Qingda Hu for reporting this bug. (Bug #33788578, Bug #106279)
PXC-3936: A state transfer with SSL disabled in the wsrep_provider_options option crashed the receiver and donor nodes.
PXC-3976: The wsrep status variables were not updated when an 8.0 node joined a 5.7 cluster.
PXC-4137: The WSREP
applier threads failed to modify read-only schemas.
PXC-4162: When doing a rolling upgrade from 5.7 to 8.0, wsrep_cluster_size
was 0.
PXC-4163: The pxc_strict_mode
option did not detect a version mismatch.
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.32-24.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.33-25.html","title":"Percona XtraDB Cluster 8.0.33-25 (2023-08-02)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.33-25.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.33-25 (2023-06-15) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.33 and included in Percona XtraDB Cluster are the following:
The INSTALL COMPONENT statement includes the SET
clause. The SET
clause sets the values of component system variables when installing one or several components. This reduces the inconvenience and limitations associated with assigning variable values in other ways.
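A minimal sketch of the new syntax, assuming a hypothetical component named component_example that exposes a system variable example.threshold:
mysql> INSTALL COMPONENT 'file://component_example' SET GLOBAL example.threshold = 100;\n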
The mysqlbinlog --start-position option
accepts values up to 18446744073709551615
. If the --read-from-remote-server
or --read-from-remote-source
option is used, the maximum is 4294967295
. (Bug #77818, Bug #21498994)
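A hedged sketch of reading a remote binary log, where the host, user, and binary log file name are placeholders; in this remote mode the option is capped at 4294967295:
$ mysqlbinlog --read-from-remote-server --host=node1 --user=repl --password --start-position=4294967295 binlog.000001\n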
Using a generated column with DEFAULT(col_name)
to specify the default value for a named column is not allowed and throws an error message. (Bug #34463652, Bug #34369580)
Not all possible error states were reported during the binary log recovery process. (Bug #33658850)
User-defined collations are deprecated. The use of a user-defined collation causes a warning written to the log in the following cases (a short example follows this list):
When COLLATE
is followed by the name of a user-defined collation in an SQL statement.
When the name of a user-defined collation is used as the value of collation_server
, collation_database
, or collation_connection
.
Support for user-defined collations will be removed in a future release of MySQL.
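For illustration, a statement of this form now logs a deprecation warning, assuming utf8mb4_my_custom_ci is a user-defined collation installed on the server:
mysql> SET collation_connection = 'utf8mb4_my_custom_ci';\n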
Find the full list of bug fixes and changes in the MySQL 8.0.33 Release Notes.
"},{"location":"release-notes/8.0.33-25.html#new-features","title":"New features","text":"PXC-667: Unexpected exit during the BF-abort of active HANDLER <table> OPEN AS <alias>
.
PXC-679: An undetected state gap discovery causes the server to hang on shutdown.
PXC-4222: When a node abruptly left the cluster, the applier thread caused all the other nodes in the cluster to hang.
PXC-4225: In INFORMATION_SCHEMA.PROCESSLIST, the COMMAND value was incorrect.
PXC-4228: The NBO mode corrupted the binary log.
PXC-4233: A cluster state interruption during NBO can lead to a permanent cluster lock.
PXC-4253: The merge to 8.0.33 fixes a number of CVE vulnerabilities.
PXC-4258: A failure to add a foreign key resulted in an inconsistency.
PXC-4268: If the ALTER DEFINER VIEW statement was executed with insufficient privileges, the Percona XtraDB Cluster node got into a Disconnected/Inconsistent state.
PXC-4278: Renaming a table with NBO caused a server exit.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.33-25.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.33-25.upd.html","title":"Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.33-25.upd.html#known-issues","title":"Known issues","text":"If you use Galera Arbitrator (garbd), we recommend that you do not upgrade to 8.0.33 because garbd-8.0.33
may cause synchronization issues and excessive CPU usage.
If you already upgraded to garbd-8.0.33
, we recommend downgrading to garbd-8.0.32-24-2
by performing the following steps (a hedged command sketch for Debian-based systems follows these steps):
Uninstall the percona-xtradb-cluster-garbd_8.0.33-25
package.
Download the percona-xtradb-cluster-garbd_8.0.32-24-2
package from Percona Software Downloads manually.
Install the percona-xtradb-cluster-garbd_8.0.32-24-2
package manually.
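A hedged command sketch for Debian-based systems; the exact package file name depends on your distribution and must match the file you download from Percona Software Downloads:
$ sudo apt remove percona-xtradb-cluster-garbd\n$ sudo dpkg -i percona-xtradb-cluster-garbd_8.0.32-24-2*.deb\n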
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now
"},{"location":"release-notes/8.0.33-25.upd.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.34-26.html","title":"Percona XtraDB Cluster 8.0.34-26 (2023-11-01)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.34-26.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.34-26 (2023-09-26) release notes.
Percona XtraDB Cluster implements telemetry that fills in the gaps in our understanding of how you use Percona XtraDB Cluster, which helps us improve our products. Participation in the anonymous program is optional. You can opt out if you prefer not to share this information. Find more information in the Telemetry on Percona XtraDB Cluster document.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.34 and included in Percona XtraDB Cluster are the following:
Adds mysql_binlog_open()
, mysql_binlog_fetch()
, and mysql_binlog_close()
functions to the libmysqlclient.so shared library. These functions enable developers to access a MySQL server binary log.
For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server is updated from OpenSSL 1.1.1 to OpenSSL 3.0.9.
The mysqlpump
client utility program is deprecated. The use of this program causes a warning. The mysqlpump
client may be removed in future releases. The applications that depend on mysqlpump
should use mysqldump or MySQL Shell Utilities instead.
The sync_relay_log_info
server system variable is deprecated. Using this variable or its equivalent startup --sync-relay-log-info
option causes a warning. This variable may be removed in future releases. The applications that use this variable should be rewritten not to depend on it before the variable is removed.
The binlog_format
server system variable is deprecated and may be removed in future releases. The functionality associated with this variable, which changes the binary logging format, is also deprecated.
When binlog_format
is removed, MySQL server supports only row-based binary logging. Thus, new installations should use only row-based binary logging. Migrate the existing installations that use the statement-based or mixed logging format to the row-based format.
The system variables log_bin_trust_function_creators
and log_statements_unsafe_for_binlog
used in the context of statement-based logging are also deprecated and may be removed in future releases.
Setting or selecting the values of deprecated variables causes a warning.
The mysql_native_password
authentication plugin is deprecated and may be removed in future releases. The CREATE USER, ALTER USER, and SET PASSWORD operations insert a deprecation warning into the server error log if an account attempts to authenticate using mysql_native_password as an authentication method.
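As a hedged sketch, an existing account (the account name and password are placeholders) can be switched to the default caching_sha2_password plugin so that it no longer triggers the deprecation warning:
mysql> ALTER USER 'app'@'%' IDENTIFIED WITH caching_sha2_password BY 'new_password';\n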
The keyring_file
and keyring_encrypted_file
plugins are deprecated. These keyring plugins are replaced with the component_keyring_file
and component_keyring_encrypted_file
components.
Find the full list of bug fixes and changes in the MySQL 8.0.34 Release Notes.
"},{"location":"release-notes/8.0.34-26.html#bug-fixes","title":"Bug fixes","text":"PXC-4219: Starting a Percona XtraBackup process and issuing a START REPLICA
command simultaneously could deadlock the server.
PXC-4238: Running either the asynchronous_connection_failover_add_source
user-defined function or the asynchronous_connection_failover_delete_source
user-defined function generated an errant transaction, which could prevent a failover in the future.
PXC-4255: Running ALTER USER/SET PASSWORD
and FLUSH PRIVILEGES
simultaneously on different Percona XtraDB Cluster nodes stalled the cluster.
PXC-4284: If a MySQL user was not created before the GRANT statement was executed, the Percona XtraDB Cluster node was disconnected and needed a full State Snapshot Transfer (SST).
PXC-4288: Galera Arbitrator (garbd) used 100% CPU.
PXC-4302: The GRANT statement could be replicated incorrectly if partial_revokes=1
was enabled.
PXC-4310: A warning message had an incorrect link.
PXC-4296: garbd 8.0.33 reported an incorrect version.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.34-26.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.35-27.html","title":"Percona XtraDB Cluster 8.0.35-27 (2024-01-17)","text":"Get started with Quickstart Guide for Percona XtraDB Cluster.
Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.35-27.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.35-27 (2023-12-27) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.35 and included in Percona XtraDB Cluster are the following:
A future release may remove deprecated variables and options. The usage of these deprecated items may cause a warning. We recommend migrating from deprecated variables and options as soon as possible.
This release deprecates the following variables and options:
The binlog_transaction_dependency_tracking
server system variable
The old
and new
server system variables
The --character-set-client-handshake
server option
INFORMATION_SCHEMA.PROCESSLIST
The implementation of the SHOW PROCESSLIST
command that uses the INFORMATION_SCHEMA.PROCESSLIST
table
The performance_schema_show_processlist
variable
Find the full list of bug fixes and changes in the MySQL 8.0.35 Release Notes.
"},{"location":"release-notes/8.0.35-27.html#bug-fixes","title":"Bug fixes","text":"PXC-4343: The table spaces were corrupted during SST that caused the Xtrabackup failure with the Header page contains inconsistent data in datafile
error (Thanks to Andrew Garner for his help in fixing this issue.)
PXC-4336: The Percona XtraDB Cluster node disconnected from the cluster due to CHECK CONSTRAINT.
PXC-4332: The Percona XtraDB Cluster node disconnected from the cluster if the local variable was changed at the session level.
PXC-4318: The Percona XtraDB Cluster node can serve as an async replica for another master node. However, when the same row was modified on both the Percona XtraDB Cluster node and the master node, the Percona XtraDB Cluster node got stuck due to replication conflicts.
PXC-4317: On newer platforms like AlmaLinux, adding a new node to an existing cluster was unsuccessful because the readlink command used during the SST process on the joiner failed (Thanks to Mikael Gbai for reporting this issue.)
PXC-4315: The logs like MDL conflict ... solved by abort
were printed, but no transaction was aborted (Thanks to Arkadiusz Petruczynik for reporting this issue.)
PXC-4312: When DROP EVENT IF EXISTS was executed for a non-existent event, the event was written to the binary log with a GTID containing the UUID of the local server instead of the global cluster-wide UUID.
PXC-4298: The node was disconnected when using ALTER TABLE with ADD UNIQUE on a table containing duplicate entries (Thanks to Vit Novak for reporting this issue.)
PXC-4237: wsrep_sst_xtrabackup-v2
failed when adding a new node.
PXC-4179: The wsrep applier threads and rollbacker threads were not reported by performance_schema.processlist
.
PXC-4034: The usage of sql_log_bin=0
broke GTID consistency.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.35-27.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/8.0.36-28.html","title":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)","text":"Get started with Quickstart Guide for Percona XtraDB Cluster.
Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/8.0.36-28.html#release-highlights","title":"Release highlights","text":"Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.36-28 (2024-03-04) release notes.
Improvements and bug fixes introduced by Oracle for MySQL 8.0.36 and included in Percona XtraDB Cluster are the following:
The hashing algorithm employed yielded poor performance when using a HASH field to check for uniqueness. (Bug #109548, Bug #34959356)
All statement instrument elements that begin with statement/sp/%
, except statement/sp/stmt
, are disabled by default.
Find the complete list of bug fixes and changes in the MySQL 8.0.36 Release Notes.
"},{"location":"release-notes/8.0.36-28.html#bug-fixes","title":"Bug fixes","text":"PXC-4316: If the node shut down while being partitioned from the cluster, started again, and then rejoined the cluster, the other part of the cluster would still wait for the partitioned node.
PXC-4341: When running FLUSH TABLES
after a statement was prepared, the node could exit due to broken consistency.
PXC-4348: The joiner node exited with Metadata Lock BF-BF
conflict during IST
.
PXC-4362: The node could leave the cluster when binary logging was enabled and the function was created without super privilege.
PXC-4365: The node could leave the cluster when the row size was too large and the table had more than three nvarchar
columns.
PXC-4340: The server exited when executing a complicated query with 9 CTEs.
PXC-4367: The InnoDB semaphore wait timeout caused a server exit under a heavy load.
PXC-4277: A three-node Percona XtraDB Cluster was left in an inconsistent state after ALTER .. ALGORITHM=INPLACE.
Install Percona XtraDB Cluster
The Percona XtraDB Cluster GitHub location
Download product binaries, packages, and tarballs at Percona Product Downloads
Contribute to the documentation
For training, contact Percona Training - Start learning now.
"},{"location":"release-notes/8.0.36-28.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html","title":"Percona XtraDB Cluster 8.0.18-9.3","text":"Date
April 29, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.18-9.3 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.18-9 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#improvements","title":"Improvements","text":"PXC-2495: Modified documentation for wsrep_sst_donor to include results when IP address is used
PXC-3002: Enhanced service_startup_timeout options to allow it to be disabled
PXC-2331: Modified the SST process to run mysql_upgrade
PXC-2991: Enhanced Strict Mode Processing to handle Group Replication Plugin
PXC-2985: Enabled Service for Automated Startup on Reboot with valid grastate.dat
PXC-2980: Modified Documentation to include AutoStart Up Process after Installation
PXC-2722: Enabled Support for Percona XtraBackup (PXB) 8.0.8 in Percona XtraDB Cluster (PXC) 8.0
PXC-2602: Added Ability to Configure xbstream options with wsrep_sst_xtrabackup
PXC-2455: Implemented the use of Percona XtraBackup (PXB) 8.0.5 in Percona XtraDB Cluster (PXC) 8.0
PXC-2259: Updated wsrep-files-index.html to include new files created by Percona XtraDB Cluster (PXC)
PXC-2197: Modified SST Documentation to Include Package Dependencies for Percona XtraBackup (PXB)
PXC-2194: Improvements to the PXC upgrade guide
PXC-2191: Revised Documentation on innodb_deadlock to Clarify Cluster Level Deadlock Processing
PXC-3017: Removed these SST encryption methods: encrypt=1, encrypt=2, and encrypt=3
PXC-2189: Modified Reference Architecture for Percona XtraDB Cluster (PXC) to include ProxySQL
PXC-2537: Modified mysqladmin password command to prevent node crash
PXC-2958: Modified User Documentation to include wsrep_certification_rules and cert.optimistic_pa
PXC-2045: Removed debian.cnf reference from logrotate/logcheck configuration Installed on Xenial/Stretch
PXC-2292: Modified Processing to determine Type of Key Cert when IST/SST
PXC-2974: Modified Percona XtraDB Cluster (PXC) Dockerfile to Integrate Galera wsrep recovery Process
PXC-3145: When the joiner fails during an SST, the mysqld process stays around (doesn\u2019t exit)
PXC-3128: Removed Prior Commit to Allow High Priority High Transaction Processing
PXC-3076: Modified Galera build to remove python3 components
PXC-2912: Modified netcat Configuration to Include -N Flag on Donor
PXC-2476: Modified process to determine and process IST or SST and with keyring_file processing
PXC-2204: Modified Shutdown using systemd after Bootstrap to provide additional messaging
PXB-2142: Transition key was written to backup / stream
PXC-2969: Modified pxc_maint_transition_period Documentation to Include Criteria for Use
PXC-2978: Certificate Information not Displayed when pxc-encrypt-cluster-traffic=ON
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3043: Update required donor version to PXC 5.7.28
PXC-3063: Data at Rest Encryption not Encrypting Record Set Cache
PXC-3092: Abort startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
PXC-3159: Killing the Donor or Connection lost during SST Process Leaves Joiner Hanging
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html","title":"Percona XtraDB Cluster 8.0.19-10","text":"Date
June 18, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.19-10 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.19-10 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#improvements","title":"Improvements","text":"PXC-2189: Modify Reference Architecture for Percona XtraDB Cluster (PXC) to include ProxySQL
PXC-3182: Modify processing to not allow writes on 8.0 nodes while 5.7 nodes are still on the cluster
PXC-3187: Add dependency package installation note in PXC binary tarball installation doc.
PXC-3138: Document that mixed cluster writes (PXC 8 while PXC 5.7 nodes are still part of the cluster) should not be performed.
PXC-3066: Document that pxc-encrypt-cluster-traffic=OFF is not just about traffic encryption
PXC-2993: Document the dangers of running with strict mode disabled and Group Replication at the same time
PXC-2980: Modify Documentation to include AutoStart Up Process after Installation
PXC-2604: Modify garbd processing to support Operator
PXC-3298: Correct galera_var_reject_queries test to remove display value width
PXC-3320: Correction on PXC installation doc
PXC-3270: Modify wsrep_ignore_apply_errors variable default to restore 5.x behavior
PXC-3179: Correct replication of CREATE USER \u2026 RANDOM PASSWORD
PXC-3080: Modify to process the ROTATE_LOG_EVENT synchronously to perform proper cleanup
PXC-2935: Remove incorrect assertion when --thread_handling=pool-of-threads is used
PXC-2500: Modify ALTER USER processing when executing thread is Galera applier thread to correct assertion
PXC-3234: Correct documentation link in spec file
PXC-3204: Modify to set wsrep_protocol_version correctly when wsrep_auto_increment_control is disabled
PXC-3189: Correct SST processing for super_read_only
PXC-3184: Modify startup to correct crash when socat not found and SST Fails
PXC-3169: Modify wsrep_reject_queries to enhance error messaging
PXC-3165: Allow COM_FIELD_LIST to be executed when WSREP is not ready
PXC-3145: Modify to end mysqld process when the joiner fails during an SST
PXC-3043: Update required donor version to PXC 5.7.28 (previously was Known Issue)
PXC-3036: Document correct method for starting, stopping, bootstrapping
PXC-3287: Correct link displayed on help client command
PXC-3031: Modify processing for garbd to prevent issues when multiple requests are started at approximately the same time and request SST transfers, preventing SST from hanging
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3092: Abort startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
PXC-3159: Killing the Donor or Connection lost during SST Process Leaves Joiner Hanging
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html","title":"Percona XtraDB Cluster 8.0.20-11.2","text":"Date
October 9, 2020
Installation
Installing Percona XtraDB Cluster
This release fixes the security vulnerability CVE-2020-15180
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html","title":"Percona XtraDB Cluster 8.0.20-11.3","text":"Date
October 22, 2020
Installation
Installing Percona XtraDB Cluster
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html","title":"Percona XtraDB Cluster 8.0.20-11","text":"Date
October 1, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.20-11 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.20-11 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#improvements","title":"Improvements","text":"PXC-3159: Modify error handling to close the communication channels and abort the joiner node when donor crashes (previously was Known Issue)
PXC-3352: Modify wsrep_row_upd_check_foreign_constraints() to remove the check for DELETE
PXC-3371: Fix Directory creation in build-binary.sh
PXC-3370: Provide binary tarball with shared libs and glibc suffix & minimal tarballs
PXC-3360: Update sysbench commands in PXC-ProxySQL configuration doc page
PXC-3312: Prevent cleanup of statement diagnostic area in case of transaction replay.
PXC-3167: Correct GCache buffer repossession processing
PXC-3347: Modify PERCONA_SERVER_EXTENSION for bintarball and modify MYSQL_SERVER_SUFFIX
PXC-3039: No useful error messages if an SSL-disabled node tries to join SSL-enabled cluster
PXC-3092: Log warning at startup if keyring is specified but cluster traffic encryption is turned off
PXC-3093: Garbd logs Completed SST Transfer Incorrectly (Timing is not correct)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html","title":"Percona XtraDB Cluster 8.0.21-12.1","text":"Date
December 28, 2020
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.21-12.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.21-12 for more details on these changes.
Implement an inconsistency voting policy. In the best case scenario, the node with the inconsistent data is aborted and the cluster continues to operate.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#improvements","title":"Improvements","text":"PXC-3353: Modify error handling in Garbd when donor crashes during SST or when an invalid donor name is passed to it
PXC-3468: Resolve package conflict when installing PXC 5.7 on RHEL/CentOS8
PXC-3418: Prevent DDL-DML deadlock by making in-place ALTER take shared MDL for the whole duration.
PXC-3416: Fix memory leaks in garbd when started with invalid group name
PXC-3445: Correct MTR test failures
PXC-3442: Fix crash when log_slave_updates=ON and consistency check statement is executed
PXC-3424: Fix error handling when the donor is not able to serve SST
PXC-3404: Fix memory leak in garbd while processing CC actions
PXC-3191: Modify Read-Only checks on wsrep_* tables when in super_read_only
PXC-3039: No useful error messages if an SSL-disabled node tries to join an SSL-enabled cluster
PXC-3092: Log a warning at startup if a keyring is specified but cluster traffic encryption is turned off
PXC-3093: Completed SST Transfer incorrectly logged by garbd (Timing is not correct)
PXC-3159: Modify the error handling to close the communication channels and abort the joiner node when the donor crashes
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html","title":"Percona XtraDB Cluster 8.0.22-13.1","text":"Date
March 22, 2021
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster 8.0.22-13.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.22-13 for more details on these changes.
This release fixes security vulnerability CVE-2021-27928, a similar issue to CVE-2020-15180
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#improvements","title":"Improvements","text":"PXC-3575: Implement package changes for SELinux and AppArmor
PXC-3115: Create Default SELinux and AppArmor policy
PXC-3536: Modify processing to not allow threads/queries to be killed if the thread is in TOI
PXC-3565: Correct Performance of SELECT in PXC
PXC-3502: Correct condition in thd_binlog_format() function for List Index process (Thanks to user Pawe\u0142 Bromboszcz for reporting this issue)
PXC-3501: Modify wsrep_row_upd_check_foreign_constraints() to include foreign key dependencies in the writesets for DELETE query (Thanks to user Steven Gales for reporting this issue)
PXC-2913: Correct MDL locks assertion when wsrep provider is unloaded
PXC-3475: Adjust mysqld_safe script to parse 8.0 log style properly
PXC-3039: No useful error messages if an SSL-disabled node tries to join an SSL-enabled cluster
PXC-3092: Log a warning at startup if a keyring is specified, but cluster traffic encryption is turned off
PXC-3093: Completed SST Transfer incorrectly logged by garbd (Timing is not correct)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html","title":"Percona XtraDB Cluster 8.0.23-14.1","text":"Date
June 9, 2021
Installation
Installing Percona XtraDB Cluster.
Percona XtraDB Cluster 8.0.23-14.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.23-14 for more details on these changes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#improvements","title":"Improvements","text":"PXC-3464: Data is not propagated with SET SESSION sql_log_bin = 0
PXC-3146: Galera/SST is not looking for the default data directory location for SSL certs
PXC-3226: Results from CHECK TABLE from PXC server can cause the client libraries to crash
PXC-3381: Modify GTID functions to use a different char set
PXC-3437: Node fails to join in the endless loop
PXC-3446: Memory leak during server shutdown
PXC-3538: Garbd crashes after successful backup
PXC-3580: Aggressive network outages on one node makes the whole cluster unusable
PXC-3596: Node stuck in aborting SST
PXC-3645: Deadlock during ongoing transaction and RSU
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html","title":"Percona XtraDB Cluster 8.0.25-15.1","text":"Date
November 22, 2021
Installation
Installing Percona XtraDB Cluster.
Percona XtraDB Cluster 8.0.25-15.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.25-15 for more details on these changes.
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#release-highlights","title":"Release Highlights","text":"A Non-Blocking Operation method for online schema changes in Percona XtraDB Cluster. This mode is similar to the Total Order Isolation (TOI) mode, whereas a data definition language (DDL) statement (for example, ALTER
) is executed on all nodes in sync. The difference is that in the NBO mode, the DDL statement acquires a metadata lock that locks the table or schema at a late stage of the operation, which is a more efficient locking strategy.
Note that the NBO mode is a Tech Preview feature. We do not recommend that you use this mode in a production environment. For more information, see Non-Blocking Operations (NBO) method for Online Schema Upgrades (OSU).
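A minimal sketch of running a DDL statement under the NBO mode in the current session; the table and index names are placeholders:
mysql> SET SESSION wsrep_OSU_method = 'NBO';\nmysql> ALTER TABLE t1 ADD INDEX idx_col1 (col1);\n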
The notable changes and bug fixes introduced by Oracle MySQL include the following:
The sql_slave_skip_counter
variable only counts the events in the uncompressed transaction payloads.
A possible deadlock occurred when system variables, read by different clients, were being updated and the binary log file was rotated.
Sometimes the aggregate function results could return values from a previous statement when using a prepared SELECT
statement with a WHERE
clause that is always false.
For more information, see the MySQL 8.0.24 Release Notes and the MySQL 8.0.25 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#new-features","title":"New Features","text":"PXC-3275: Fix the documented APT package list to match the packages listed in the Repo. (Thanks to user Hubertus Krogmann for reporting this issue)
PXC-3387: Performing an intermediate commit does not call wsrep commit hooks.
PXC-3449: A fix for dependencies missing from replication writesets, which caused Galera to fail.
PXC-3589: Documentation: Updates in Percona XtraDB Cluster Limitations that the LOCK=NONE
clause is no longer allowed in an INPLACE ALTER TABLE statement. (Thanks to user Brendan Byrd for reporting this issue)
PXC-3611: Fix that deletes any keyring.backup file if it exists for SST operation.
PXC-3608: Fix a concurrency issue that caused a server exit when attempting to read a foreign key.
PXC-3637: Changes the service start sequence to allow more time for mounting local or remote directories with large amounts of data. (Thanks to user Eric Gonyea for reporting this issue)
PXC-3679: Fix for SST failures after the update of socat to \u20181.7.4.0\u2019.
PXC-3706: Fix adds a wait to wsrep_after_commit
until the first thread in a group commit queue is available.
PXC-3729: Fix for conflicts when multiple applier threads execute certified transactions and are in High-Priority transaction mode.
PXC-3731: Fix for incorrect writes to the binary log when sql_log_bin=0
.
PXC-3733: Fix to clean the WSREP transaction state if a transaction is requested to be re-prepared.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html","title":"Percona XtraDB Cluster 8.0.26-16.1","text":"Date
January 17, 2022
Installation
Installing Percona XtraDB Cluster
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#release-highlights","title":"Release Highlights","text":"The following are a number of the notable fixes for MySQL 8.0.26, provided by Oracle, and included in this release:
The TLSv1 and TLSv1.1 connection protocols are deprecated.
Identifiers with specific terms, such as \u201cmaster\u201d or \u201cslave\u201d are deprecated and replaced. See the Functionality Added or Changed section in the 8.0.26 Release Notes for a list of updated identifiers. The following terms have been changed:
The identifier master
is changed to source
The identifier slave
is changed to replica
The identifier multithreaded slave
(mts
) is changed to multithreaded applier
(mta
)
When using semisynchronous replication, either the old version or the new version of system variables and status variables are available. You cannot have both versions installed on an instance. The old system variables are available when you use the old version, but the new ones are not. The new system variables are available when you use the new version, but the old values are not.
In an upgrade from an earlier version to 8.0.26, enable the rpl_semi_sync_source
plugin and the rpl_semi_sync_replica
plugin after the upgrade has been completed. Enabling these plugins before all of the nodes are upgraded may cause data inconsistency between the nodes.
For the source, the rpl_semi_sync_master
plugin (semisync_master.so
library) is the old version and the rpl_semi_sync_source
plugin (semisync_source.so
library) is the new version.
For the client, the rpl_semi_sync_slave
plugin (semisync_slave.so
library) is the old version and the rpl_semi_sync_replica
plugin (semisync_replica.so
library) is the new version.
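A hedged sketch of installing and enabling the new plugin version on an asynchronous source after all nodes have been upgraded; the library name is the one listed above:
mysql> INSTALL PLUGIN rpl_semi_sync_source SONAME 'semisync_source.so';\nmysql> SET GLOBAL rpl_semi_sync_source_enabled = 1;\n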
For more information, see the MySQL 8.0.26 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3824: An incorrect directive in Systemd Unit File (Thanks to Jim Lohiser for reporting this issue)
PXC-3706: A fix for a race condition in group commit queue (Thanks to Kevin Sauter for reporting this issue)
PXC-3739: The FLUSH TABLES FOR EXPORT
lock is released when the session ends.
PXC-3628: The server allowed altering the storage engine to MyISAM
for mysql.wsrep_* tables.
PXC-3731: A fix for when the user deletes data from the source but does not want that data deleted from the replica. The sql_log_bin=0
command had no effect and the deleted rows were replicated and written into the binary log.
PXC-3857: The following system variables are renamed. The old variables are deprecated and may be removed in a future version.
wsrep_slave_threads
renamed as wsrep_applier_threads
wsrep_slave_FK_checks
renamed as wsrep_applier_FK_checks
wsrep_slave_UK_checks
renamed as wsrep_applier_UK_checks
wsrep_restart_slave
renamed as wsrep_restart_replica
PXC-3039: No useful error messages if an SSL-disabled node tried to join an SSL-enabled cluster
PXC-3093: A completed SST Transfer is incorrectly logged by garbd. The timing is incorrect.
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html","title":"Percona XtraDB Cluster 8.0.27-18.1","text":"Date: April 11, 2022
Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#release-highlights","title":"Release Highlights","text":"The following lists a number of the bug fixes for MySQL 8.0.27, provided by Oracle, and included in Percona Server for MySQL:
The default_authentication_plugin
is deprecated. Support for this plugin may be removed in future versions. Use the authentication_policy
variable.
The binary
operator is deprecated. Support for this operator may be removed in future versions. Use CAST(... AS BINARY)
.
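For example, a query that used the deprecated operator can be rewritten with CAST; the table and column names are placeholders:
mysql> SELECT CAST(col1 AS BINARY) FROM t1;\n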
A fix for the following issue: when a parent table initiates a cascading SET NULL operation on the child table, the virtual column could be set to NULL instead of the value derived from the parent table.
Find the full list of bug fixes and changes in the MySQL 8.0.27 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3831: Allowed certified high-priority transactions to proceed without lock conflicts.
PXC-3766: Stopped every XtraBackup-based SST operation from executing the version-check procedure.
PXC-3704: Based the maximum writeset size on repl.max_ws_size
when both repl.max_ws_size
and wsrep_max_ws_size
values are passed during startup.
The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html","title":"Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)","text":"Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#release-highlights","title":"Release Highlights","text":"Improvements and bug fixes introduced by Oracle for MySQL 8.0.28 and included in Percona Server for MySQL are the following:
The ASCII
shortcut for CHARACTER SET latin1
and UNICODE
shortcut for CHARACTER SET ucs2
are deprecated and raise a warning to use CHARACTER SET
instead. The shortcuts will be removed in a future version.
A stored function and a loadable function with the same name can share the same namespace. Add the schema name when invoking a stored function in the shared namespace. The server generates a warning when function names collide.
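For example, a stored function call can be schema-qualified to avoid the collision warning; the schema and function names are placeholders:
mysql> SELECT app_db.calc_total(42);\n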
InnoDB supports ALTER TABLE ... RENAME COLUMN
operations when using ALGORITHM=INSTANT
.
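A minimal sketch; the table and column names are placeholders:
mysql> ALTER TABLE t1 RENAME COLUMN old_name TO new_name, ALGORITHM=INSTANT;\n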
The limit for innodb_open_files
now includes temporary tablespace files. The temporary tablespace files were not counted in the innodb_open_files
in previous versions.
Find the full list of bug fixes and changes in the MySQL 8.0.28 Release Notes.
"},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#bugs-fixed","title":"Bugs Fixed","text":"PXC-3923: When the read_only
or super_read_only
option was set, the ANALYZE TABLE
command removed the node from the cluster.
PXC-3388: Percona XtraDB Cluster stuck in a DESYNCED state after joiner was killed.
PXC-3609: The binary log status variables were updated when the binary log was disabled. Now the status variables are not registered when the binary log is disabled. (Thanks to Stofa Kenida for reporting this issue.)
PXC-3848: The cluster node exited when the CURRENT_USER()
function was used. (Thanks to Steffen B\u00f6hme for reporting this issue.)
PXC-3872: A user without system_user privilege was able to drop system users. (Thanks to user jackc for reporting this issue.)
PXC-3918: Galera Arbitrator (garbd) could not connect if the Percona XtraDB Cluster server used encrypted connections. The issue persisted even when the proper certificates were specified.
PXC-3924: Using TRUNCATE TABLE X
and INSERT INTO X
statements when the foreign keys were disabled and violated caused the HA_ERR_FOUND_DUPP_KEY
error on a slave node. (Thanks to Daniel Barton\u00ed\u010dek for reporting this issue.)
PXC-3062: The wsrep_incoming_addresses
status variable did not contain the garbd IP address.
The Percona XtraDB Cluster installation instructions
The Percona XtraDB Cluster downloads
The Percona XtraDB Cluster GitHub location
To contribute to the documentation, review the Documentation Contribution Guide
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes/release-notes_index.html","title":"Percona XtraDB Cluster 8.0 release notes index","text":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)
Percona XtraDB Cluster 8.0.35-27 (2024-01-17)
Percona XtraDB Cluster 8.0.34-26 (2023-11-01)
Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)
Percona XtraDB Cluster 8.0.33-25 (2023-08-02)
Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)
Percona XtraDB Cluster 8.0.32-24 (2023-04-18)
Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)
Percona XtraDB Cluster 8.0.31-23 (2023-03-14)
Percona XtraDB Cluster 8.0.30-22 (2022-12-28)
Percona XtraDB Cluster 8.0.29-21 (2022-09-12)
Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)
Percona XtraDB Cluster 8.0.27-18.1 (2022-04-11)
Percona XtraDB Cluster 8.0.26-16.1 (2022-01-17)
Percona XtraDB Cluster 8.0.25-15.1 (2021-11-22)
Percona XtraDB Cluster 8.0.23-14.1 (2021-06-09)
Percona XtraDB Cluster 8.0.22-13.1 (2021-03-22)
Percona XtraDB Cluster 8.0.21-12.1 (2020-12-28)
Percona XtraDB Cluster 8.0.20-11.3 (2020-10-22)
Percona XtraDB Cluster 8.0.20-11.2 (2020-10-09)
Percona XtraDB Cluster 8.0.20-11 (2020-10-01)
Percona XtraDB Cluster 8.0.19-10 (2020-06-18)
Percona XtraDB Cluster 8.0.18-9.3 (2020-04-29)
If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"}]} \ No newline at end of file diff --git a/8.0/sitemap.xml b/8.0/sitemap.xml index 0afa2a10..2b292065 100644 --- a/8.0/sitemap.xml +++ b/8.0/sitemap.xml @@ -2,412 +2,412 @@