diff --git a/8.0/release-notes/8.0.36-28.html b/8.0/release-notes/8.0.36-28.html index 4a949671..b3e00a4f 100644 --- a/8.0/release-notes/8.0.36-28.html +++ b/8.0/release-notes/8.0.36-28.html @@ -2652,6 +2652,9 @@

Bug fixesPXC-4367: The InnoDB semaphore wait timeout caused a server exit under a heavy load.

+
  • +

    PXC-4277: A three-node Percona XtraDB Cluster cluster was in an inconsistent state with ALTER .. ALGORITHM=INPLACE.

    +
  • Install Percona XtraDB Cluster

    @@ -2673,7 +2676,7 @@

    Get expert help Last update: - 2024-04-04 + 2024-04-08 diff --git a/8.0/search/search_index.json b/8.0/search/search_index.json index a4422e60..5880fdcc 100644 --- a/8.0/search/search_index.json +++ b/8.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Percona XtraDB Cluster 8.0 Documentation","text":"

    This documentation is for the latest release: Percona XtraDB Cluster 8.0.36-28 (Release Notes).

    Percona XtraDB Cluster is a database clustering solution for MySQL. It ensures high availability, prevents downtime and data loss, and provides linear scalability for a growing environment.

    "},{"location":"index.html#features-of-percona-xtradb-cluster","title":"Features of Percona XtraDB Cluster","text":"Feature Details Synchronous replication Data is written to all nodes simultaneously, or not written at all in case of a failure even on a single node Multi-source replication Any node can trigger a data update. True parallel replication Multiple threads on replica performing replication on row level Automatic node provisioning You simply add a node and it automatically syncs. Data consistency No more unsynchronized nodes. PXC Strict Mode Avoids the use of tech preview features and unsupported features Configuration script for ProxySQL Percona XtraDB Cluster includes the proxysql-admin tool that automatically configures Percona XtraDB Cluster nodes using ProxySQL. Automatic configuration of SSL encryption Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic variable that enables automatic configuration of SSL encryption Optimized Performance Percona XtraDB Cluster performance is optimized to scale with a growing production workload

    Percona XtraDB Cluster 8.0 is fully compatible with MySQL Server Community Edition 8.0 and Percona Server for MySQL 8.0. The cluster has the following compatibilities:

    See also

    Overview of changes in the most recent PXC release

    "},{"location":"index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"add-node.html","title":"Add nodes to cluster","text":"

    New nodes that are properly configured are provisioned automatically. When you start a node with the address of at least one other running node in the wsrep_cluster_address variable, this node automatically joins and synchronizes with the cluster.

    Note

    Any existing data and configuration will be overwritten to match the data and configuration of the DONOR node. Do not join several nodes at the same time to avoid overhead due to large amounts of traffic when a new node joins.

    Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer and the wsrep_sst_method variable is always set to xtrabackup-v2.
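    As an illustration, here is a minimal sketch of the wsrep settings a joining node might carry; the IP addresses and cluster name are placeholders taken from the configuration examples later in this guide:

    wsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\nwsrep_sst_method=xtrabackup-v2\n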

    "},{"location":"add-node.html#start-the-second-node","title":"Start the second node","text":"

    Start the second node using the following command:

    [root@pxc2 ~]# systemctl start mysql\n

    After the server starts, it receives SST automatically.

    To check the status of the second node, run the following:

    mysql@pxc2> show status like 'wsrep%';\n
    Expected output
    +----------------------------------+--------------------------------------------------+\n| Variable_name                    | Value                                            |\n+----------------------------------+--------------------------------------------------+\n| wsrep_local_state_uuid           | a08247c1-5807-11ea-b285-e3a50c8efb41             |\n| ...                              | ...                                              |\n| wsrep_local_state                | 4                                                |\n| wsrep_local_state_comment        | Synced                                           |\n| ...                              |                                                  |\n| wsrep_cluster_size               | 2                                                |\n| wsrep_cluster_status             | Primary                                          |\n| wsrep_connected                  | ON                                               |\n| ...                              | ...                                              |\n| wsrep_provider_capabilities      | :MULTI_MASTER:CERTIFICATION: ...                 |\n| wsrep_provider_name              | Galera                                           |\n| wsrep_provider_vendor            | Codership Oy <info@codership.com>                |\n| wsrep_provider_version           | 4.3(r752664d)                                    |\n| wsrep_ready                      | ON                                               |\n| ...                              | ...                                              | \n+----------------------------------+--------------------------------------------------+\n75 rows in set (0.00 sec)\n

    The output of SHOW STATUS shows that the new node has been successfully added to the cluster. The cluster size is now 2 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.

    If the state of the second node is Synced as in the previous example, then the node has received a full SST, is synchronized with the cluster, and you can proceed to add the next node.
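    To check just the node state rather than the full wsrep status list, you can query the single status variable:

    mysql@pxc2> SHOW STATUS LIKE 'wsrep_local_state_comment';\n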

    Note

    If the state of the node is Joiner, it means that SST hasn\u2019t finished. Do not add new nodes until all others are in Synced state.

    "},{"location":"add-node.html#starting-the-third-node","title":"Starting the Third Node","text":"

    To add the third node, start it as usual:

    [root@pxc3 ~]# systemctl start mysql\n

    To check the status of the third node, run the following:

    mysql@pxc3> show status like 'wsrep%';\n

    The output shows that the new node has been successfully added to the cluster. Cluster size is now 3 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ...                        | ...                                  |\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n| ...                        | ...                                  |\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n| ...                        | ...                                  |\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"add-node.html#next-steps","title":"Next steps","text":"

    When you add all nodes to the cluster, you can verify replication by running queries and manipulating data on nodes to see if these changes are synchronized across the cluster.

    "},{"location":"add-node.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"apparmor.html","title":"Enable AppArmor","text":"

    Percona XtraDB Cluster contains several AppArmor profiles. Multiple profiles allow for easier maintenance because the mysqld profile is decoupled from the SST script profile. This separation allows the introduction of other SST methods or scripts with their own profiles.

    The following profiles are available:

    The mysqld profile allows the execution of the SST script in PUx mode with the /{usr/}bin/wsrep_sst_* PUx command. The SST script profile is applied if the script contains a profile. The SST script runs in unconfined mode if the script does not contain a profile. The system administrator can change the execution mode to Pix. This action causes a fallback to inherited mode in case the SST script profile is absent.
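    To confirm which AppArmor profiles are loaded and in which mode they run, you can use the aa-status utility (assuming the apparmor-utils package is installed):

    $ sudo aa-status | grep -i mysqld\n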

    "},{"location":"apparmor.html#profile-adjustments","title":"Profile adjustments","text":"

    The mysqld profile and the SST script profile can be adjusted, such as moving the data directory, in the same way as modifying the mysqld profile in Percona Server.

    "},{"location":"apparmor.html#work-with-pxc_encrypt_cluster_traffic","title":"Work with pxc_encrypt_cluster_traffic","text":"

    By default, the pxc_encrypt_cluster_traffic variable is ON, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory since that location is overwritten during the SST process.

    The Set up the certificates section describes the certificate setup.

    The following AppArmor profile rule grants access to certificates located in /etc/mysql/certs. You must be root or have sudo privileges.

    # Allow config access\n  /etc/mysql/** r,\n

    This rule is present in both profiles (usr.sbin.mysqld and usr.bin.wsrep_sst_xtrabackup-v2). The rule allows the administrator to store the certificates anywhere inside the /etc/mysql/ directory. If the certificates are located outside of this directory, you must add an additional rule that allows access to the certificates in both profiles. The rule must contain the path to the certificates location, like the following:

    # Allow config access\n  /path/to/certificates/* r,\n

    The server certificates must be accessible to the mysql user and readable only by that user.
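    After you edit either profile, reload it so that the new rule takes effect. A minimal sketch, assuming the profiles are stored in the default /etc/apparmor.d/ directory:

    $ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld\n$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.wsrep_sst_xtrabackup-v2\n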

    "},{"location":"apparmor.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"apt.html","title":"Install Percona XtraDB Cluster on Debian or Ubuntu","text":"

    Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.

    The packages are available in the official Percona software repository and on the download page. It is recommended to install Percona XtraDB Cluster from the official repository using APT.

    We gather Telemetry data in the Percona packages and Docker images.

    "},{"location":"apt.html#prerequisites","title":"Prerequisites","text":"

    See also

    For more information, see Enabling AppArmor.

    "},{"location":"apt.html#install-from-repository","title":"Install from Repository","text":"
    1. Update the system:

      sudo apt update\n
    2. Install the necessary packages:

      sudo apt install -y wget gnupg2 lsb-release curl\n
    3. Download the repository package:

      wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n
    4. Install the package with dpkg:

      sudo dpkg -i percona-release_latest.generic_all.deb\n
    5. Refresh the local cache to update the package information:

      sudo apt update\n
    6. Enable the release repository for Percona XtraDB Cluster:

      sudo percona-release setup pxc80\n
    7. Install the cluster:

      sudo apt install -y percona-xtradb-cluster\n

    During the installation, you are requested to provide a password for the root user on the database node.

    Note

    If needed, you could also install the percona-xtradb-cluster-full meta-package, which includes the following additional packages:
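    If you choose the meta-package, the installation command is the same apart from the package name:

    sudo apt install -y percona-xtradb-cluster-full\n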

    "},{"location":"apt.html#next-steps","title":"Next steps","text":"

    After you install Percona XtraDB Cluster and stop the mysql service, configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.

    "},{"location":"apt.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"bootstrap.html","title":"Bootstrap the first node","text":"

    After you configure all PXC nodes, initialize the cluster by bootstrapping the first node. The initial node must contain all the data that you want to be replicated to other nodes.

    Bootstrapping implies starting the first node without any known cluster addresses: if the wsrep_cluster_address variable is empty, Percona XtraDB Cluster assumes that this is the first node and initializes the cluster.

    Instead of changing the configuration, start the first node using the following command:

    [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n

    When you start the node using the previous command, it runs in bootstrap mode with wsrep_cluster_address=gcomm://. This tells the node to initialize the cluster with the wsrep_cluster_conf_id variable set to 1. After you add other nodes to the cluster, you can restart this node as normal, and it will use the standard configuration again.

    Note

    A service started with mysql@bootstrap must be stopped using the same command. For example, the systemctl stop mysql command does not stop an instance started with the mysql@bootstrap command.
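    For example, to return the bootstrapped node to normal operation after the other nodes have joined, stop the bootstrap unit and start the regular service:

    [root@pxc1 ~]# systemctl stop mysql@bootstrap.service\n[root@pxc1 ~]# systemctl start mysql\n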

    To make sure that the cluster has been initialized, run the following:

    mysql@pxc1> show status like 'wsrep%';\n

    The output shows that the cluster size is 1 node, it is the primary component, the node is in the Synced state, it is fully connected and ready for write-set replication.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ...                        | ...                                  |\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n| ...                        | ...                                  |\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n| ...                        | ...                                  |\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"bootstrap.html#next-steps","title":"Next steps","text":"

    After initializing the cluster, you can add other nodes.

    "},{"location":"bootstrap.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"certification.html","title":"Certification in Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster replicates actions executed on one node to all other nodes in the cluster quickly enough that replication appears synchronous (it is virtually synchronous).

    The following types of actions exist:

    Note

    This manual page assumes the reader is aware of TOI and MySQL replication protocol.

    DML (INSERT, UPDATE, and DELETE) operations effectively change the state of the database, and all such operations are recorded in XtraDB by registering a unique object identifier (key) for each change (an update or a new addition).

    This ensures that there is compact metadata about the rows that the transaction has touched or modified. This information is passed on as part of the write-set for certification to all the nodes in the cluster while the transaction is in the commit phase.

    Changes made to database objects are bin-logged. This is similar to how MySQL does it for replication with its Source-Replica ecosystem, except that a packet of changes from a given transaction is created and named a write-set.

    Once the client/user issues a COMMIT, Percona XtraDB Cluster will run a commit hook. Commit hooks ensure the following:

    There is an inherent assumption/protocol enforcement that all nodes read the packets from the channel in the same order. That way, even though each packet does not carry id information, the id is established using the locally maintained id value.

    "},{"location":"certification.html#common-situation","title":"Common situation","text":"

    The following example shows what happens in a common situation. act_id is incremented and assigned only for totally ordered actions, and only in primary state (skip messages while in state exchange).

    $ rcvd->id = ++group->act_id_;\n

    Note

    This is an efficient way to solve the problem of id coordination in multi-source systems. Otherwise, a node would have to first get an id from a central system or through a separate agreed protocol, and then use it for the packet, thereby doubling the round-trip time.

    "},{"location":"certification.html#conflicts","title":"Conflicts","text":"

    The following happens if two nodes get ready with their packets at the same time:

    1. Both nodes will be allowed to put the packet on the channel. That means the channel will see packets from different nodes queued one behind another.

    2. The following example shows what happens if two nodes modify the same set of rows. The nodes are in sync until this point:

      $ create -> insert (1,2,3,4)\n
      • Node 1: update i = i + 10;

      • Node 2: update i = i + 100;

      Let\u2019s associate transaction ID (trx-id) for an update transaction that is executed on Node 1 and Node 2 in parallel. Although the real algorithm is more involved (with uuid + seqno), it is conceptually the same, so we are using trx_id.

      • Node 1: update action: trx-id=n1x

      • Node 2: update action: trx-id=n2x

      Both node packets are added to the channel, but the transactions are conflicting. The protocol says: FIRST WRITE WINS.

      So in this case, whoever is first to write to the channel will get certified. Let\u2019s say Node 2 is first to write the packet, and then Node 1 makes changes immediately after it.

      Note

      Each node subscribes to all packets, including its own packet.

      • Node 2 will see its own packet and will process it. Then it will see the packet from Node 1, try to certify it, and fail.

      • Node 1 will see the packet from Node 2 and will process it.

      Note

      InnoDB provides isolation, so Node 1 can process packets from Node 2 independently of Node 1 transaction changes.

      Then Node 1 will see its own packet, try to certify it, and fail.

      Note

      Even though the packet originated from Node 1, it will undergo certification to catch cases like these.

    "},{"location":"certification.html#resolve-certification-conflicts","title":"Resolve certification conflicts","text":"

    The certification protocol can be described using the previous example. The central certification vector (CCV) is updated to reflect the reference transaction.

    Node 2 then gets the packet from Node 1 for certification. The packet key is already present in the CCV, with the reference transaction set to n2x, whereas the write-set proposes setting it to n1x. This causes a conflict, which in turn causes the transaction from Node 1 to fail the certification test.

    Using the same case as explained above, Node 1 certification also rejects the packet from Node 1.

    This suggests that the node doesn\u2019t need to wait for certification to complete, but just needs to ensure that the packet is written to the channel. The applier transaction will always win and the local conflicting transaction will be rolled back.

    The following example shows what happens if one of the nodes has local changes that are not synced with the group:

    mysql> create (id primary key) -> insert (1), (2), (3), (4);\n
    Expected output
    node-1: wsrep_on=0; insert (5); wsrep_on=1\nnode-2: insert(5).\n

    The insert(5) statement will generate a write-set that will then be replicated to Node 1. Node 1 will try to apply it but will fail with a duplicate-key error, because 5 already exists.

    XtraDB will flag this as an error, which would eventually cause Node 1 to shut down.

    "},{"location":"certification.html#increment-gtid","title":"Increment GTID","text":"

    GTID is incremented only when the transaction passes certification, and is ready for commit. That way errant packets don\u2019t cause GTID to increment.

    Also, the group packet id should not be confused with the GTID. Without errant packets, it may seem that these two counters are the same, but they are not related.

    "},{"location":"certification.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"compile.html","title":"Compile and install from Source Code","text":"

    If you want to compile Percona XtraDB Cluster, you can find the source code on GitHub. Before you begin, make sure that the following packages are installed:

    Package names (apt / yum): Git: git / git; SCons: scons / scons; GCC: gcc / gcc; g++: g++ / gcc-c++; OpenSSL: openssl / openssl; Check: check / check; CMake: cmake / cmake; Bison: bison / bison; Boost: libboost-all-dev / boost-devel; Asio: libasio-dev / asio-devel; Async I/O: libaio-dev / libaio-devel; ncurses: libncurses5-dev / ncurses-devel; Readline: libreadline-dev / readline-devel; PAM: libpam-dev / pam-devel; socat: socat / socat; curl: libcurl-dev / libcurl-devel

    You will likely have all or most of the packages already installed. If you are not sure, run one of the following commands to install any missing dependencies:
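    For example, on Debian or Ubuntu the command might look like the following. This is only a sketch that uses the apt package names from the table above; adjust the list for your distribution and release:

    $ sudo apt install -y git scons gcc g++ openssl check cmake bison libboost-all-dev libasio-dev libaio-dev libncurses5-dev libreadline-dev libpam-dev socat libcurl-dev\n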

    To compile Percona XtraDB Cluster from source code:

    1. Clone the Percona XtraDB Cluster repository:

      $ git clone https://github.com/percona/percona-xtradb-cluster.git\n

      Important

      Clone the latest repository or update it to the latest state. An old codebase may not be compatible with the build script.

    2. Check out the 8.0 branch and initialize submodules:

      $ cd percona-xtradb-cluster\n$ git checkout 8.0\n$ git submodule update --init --recursive\n
    3. Download the matching Percona XtraBackup 8.0 tarball (*.tar.gz) for your operating system from Percona Downloads.

    The following example extracts the Percona XtraBackup 8.0.32-25 tar.gz file to the target directory ./pxc-build:

    ```{.bash data-prompt=\"$\"}\n$ tar -xvf percona-xtrabackup-8.0.32-25-Linux-x86_64.glibc2.17.tar.gz -C ./pxc-build\n```\n
    4. Run the build script ./build-ps/build-binary.sh. By default, it attempts to build in the current directory. Specify the target output directory, such as ./pxc-build:

      $ mkdir ./pxc-build\n$ ./build-ps/build-binary.sh ./pxc-build\n

    When the compilation completes, pxc-build contains a tarball, such as Percona-XtraDB-Cluster-8.0.x86_64.tar.gz, that you can deploy on your system.

    Note

    The exact version and release numbers may differ.

    "},{"location":"compile.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-cluster-rhel.html","title":"Configure a cluster on Red Hat-based distributions","text":"

    This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Red Hat or CentOS 7 servers, using the packages from Percona repositories.

    "},{"location":"configure-cluster-rhel.html#prerequisites","title":"Prerequisites","text":"

    The procedure described in this tutorial requires the following:

    Different from previous versions

    The variable wsrep_sst_auth has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session. During SST, the user mysql.pxc.sst.user and the role mysql.pxc.sst.role are created on the donor node.

    "},{"location":"configure-cluster-rhel.html#step-1-installing-pxc","title":"Step 1. Installing PXC","text":"

    Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS.

    "},{"location":"configure-cluster-rhel.html#step-2-configuring-the-first-node","title":"Step 2. Configuring the first node","text":"

    Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.

    1. Make sure that the configuration file /etc/my.cnf on the first node (percona1) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended.\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 1 address\nwsrep_node_address=192.168.70.71\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n
    2. Start the first node with the following command:

      [root@percona1 ~] # systemctl start mysql@bootstrap.service\n

      The previous command will start the cluster with the initial wsrep_cluster_address variable set to gcomm://. If the node or MySQL is restarted later, there will be no need to change the configuration file.

    3. After the first node has been started, cluster status can be checked with the following command:

      mysql> show status like 'wsrep%';\n

      This output shows that the cluster has been successfully bootstrapped.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n

      Copy the automatically generated temporary password for the superuser account:

      $ sudo grep 'temporary password' /var/log/mysqld.log\n

      Use this password to log in as root:

      $ mysql -u root -p\n

      Change the password for the superuser account and log out. For example:

      mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'r00tP@$$';\n
      Expected output
      Query OK, 0 rows affected (0.00 sec)\n
    "},{"location":"configure-cluster-rhel.html#step-3-configuring-the-second-node","title":"Step 3. Configuring the second node","text":"
    1. Make sure that the configuration file /etc/my.cnf on the second node (percona2) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 2 address\nwsrep_node_address=192.168.70.72\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the second node with the following command:

      [root@percona2 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on both nodes. The following is an example of status from the second node (percona2):

      mysql> show status like 'wsrep%';\n

      The output shows that the new node has been successfully added to the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 2                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-rhel.html#step-4-configuring-the-third-node","title":"Step 4. Configuring the third node","text":"
    1. Make sure that the MySQL configuration file /etc/my.cnf on the third node (percona3) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.73\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the third node with the following command:

      [root@percona3 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all three nodes. The following is an example of status from the third node (percona3):

      mysql> show status like 'wsrep%';\n

      The output confirms that the third node has joined the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-rhel.html#testing-replication","title":"Testing replication","text":"

    To test replication, let\u2019s create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@percona2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@percona3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve all the rows from that table on the second node:

      mysql@percona2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n

      This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.

    "},{"location":"configure-cluster-rhel.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-cluster-ubuntu.html","title":"Configure a cluster on Debian or Ubuntu","text":"

    This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Ubuntu 14 LTS servers, using the packages from Percona repositories.

    "},{"location":"configure-cluster-ubuntu.html#prerequisites","title":"Prerequisites","text":"

    The procedure described in this tutorial requires the following:

    "},{"location":"configure-cluster-ubuntu.html#step-1-install-pxc","title":"Step 1. Install PXC","text":"

    Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Debian or Ubuntu.

    Note

    The Debian/Ubuntu installation prompts for the root password. For this tutorial, set it to Passw0rd. After the packages have been installed, mysqld starts automatically. Stop mysqld on all three nodes using sudo systemctl stop mysql.

    "},{"location":"configure-cluster-ubuntu.html#step-2-configure-the-first-node","title":"Step 2. Configure the first node","text":"

    Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.

    1. Make sure that the configuration file /etc/mysql/my.cnf for the first node (pxc1) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #1 address\nwsrep_node_address=192.168.70.61\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n
    2. Start the first node with the following command:

      [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n

      This command will start the first node and bootstrap the cluster.

    3. After the first node has been started, cluster status can be checked with the following command:

      mysql> show status like 'wsrep%';\n

      The following output shows the cluster status:

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n

      This output shows that the cluster has been successfully bootstrapped.

    To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:

    mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';\nmysql@pxc1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';\nmysql@pxc1> FLUSH PRIVILEGES;\n

    Note

    MySQL root account can also be used for performing SST, but it is more secure to use a different (non-root) user for this.

    "},{"location":"configure-cluster-ubuntu.html#step-3-configure-the-second-node","title":"Step 3. Configure the second node","text":"
    1. Make sure that the configuration file /etc/mysql/my.cnf on the second node (pxc2) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #2 address\nwsrep_node_address=192.168.70.62\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the second node with the following command:

      [root@pxc2 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of status from the second node (pxc2):

      mysql> show status like 'wsrep%';\n

      The following output shows that the new node has been successfully added to the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 2                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-ubuntu.html#step-4-configure-the-third-node","title":"Step 4. Configure the third node","text":"
    1. Make sure that the MySQL configuration file /etc/mysql/my.cnf on the third node (pxc3) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.63\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the third node with the following command:

      [root@pxc3 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of status from the third node (pxc3):

    mysql> show status like 'wsrep%';\n

    The following output confirms that the third node has joined the cluster.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-ubuntu.html#test-replication","title":"Test replication","text":"

    To test replication, let\u2019s create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@percona2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@percona3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve all the rows from that table on the second node:

      mysql@percona2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n

      This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.

    "},{"location":"configure-cluster-ubuntu.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-nodes.html","title":"Configure nodes for write-set replication","text":"

    After installing Percona XtraDB Cluster on each node, you need to configure the cluster. In this section, we will demonstrate how to configure a three node cluster:

    Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63
    1. Stop the Percona XtraDB Cluster server. After the installation completes, the server is not started automatically. You need this step only if you have started the server manually.

      $ sudo service mysql stop\n
    2. Edit the configuration file of the first node to provide the cluster settings.

      If you use Debian or Ubuntu, edit /etc/mysql/mysql.conf.d/mysqld.cnf:

      wsrep_provider=/usr/lib/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n

      If you use Red Hat or CentOS, edit /etc/my.cnf. Note that on these systems you set the wsrep_provider option to a different value:

      wsrep_provider=/usr/lib64/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
    3. Configure node 1.

      wsrep_node_name=pxc1\nwsrep_node_address=192.168.70.61\npxc_strict_mode=ENFORCING\n
    4. Set up node 2 and node 3 in the same way: Stop the server and update the configuration file applicable to your system. All settings are the same except for wsrep_node_name and wsrep_node_address.

      For node 2

      wsrep_node_name=pxc2\nwsrep_node_address=192.168.70.62\n

      For node 3

      wsrep_node_name=pxc3\nwsrep_node_address=192.168.70.63\n
    5. Set up the traffic encryption settings. Each node of the cluster must use the same SSL certificates.

      [mysqld]\nwsrep_provider_options=\u201dsocket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\u201d\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n

    Important

    In Percona XtraDB Cluster 8.0, the Encrypting Replication Traffic is enabled by default (via the pxc-encrypt-cluster-traffic variable).

    The replication traffic encryption cannot be enabled on a running cluster. If it was disabled before the cluster was bootstrapped, the cluster must be stopped. Then set up the encryption and bootstrap the cluster again (see Bootstrapping the First Node).
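    As an illustration (a sketch only, since the variable is already ON by default), the setting would go in my.cnf on every node before you bootstrap again:

    [mysqld]\npxc_encrypt_cluster_traffic=ON\n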

    See also

    More information about the security settings in Percona XtraDB Cluster * Security Basics * Encrypting PXC Traffic * SSL Automatic Configuration

    "},{"location":"configure-nodes.html#template-of-the-configuration-file","title":"Template of the configuration file","text":"

    Here is an example of a full configuration file installed on CentOS to /etc/my.cnf.

    # Template my.cnf for PXC\n# Edit to your requirements.\n[client]\nsocket=/var/lib/mysql/mysql.sock\n[mysqld]\nserver-id=1\ndatadir=/var/lib/mysql\nsocket=/var/lib/mysql/mysql.sock\nlog-error=/var/log/mysqld.log\npid-file=/var/run/mysqld/mysqld.pid\n# Binary log expiration period is 604800 seconds, which equals 7 days\nbinlog_expire_logs_seconds=604800\n######## wsrep ###############\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n# Cluster connection URL contains IPs of nodes\n#If no IP is found, this implies that a new cluster needs to be created,\n#in order to do that you need to bootstrap this node\nwsrep_cluster_address=gcomm://\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n# Slave thread to use\nwsrep_slave_threads=8\nwsrep_log_conflicts\n# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n# Node IP address\n#wsrep_node_address=192.168.70.63\n# Cluster name\nwsrep_cluster_name=pxc-cluster\n#If wsrep_node_name is not specified,  then system hostname will be used\nwsrep_node_name=pxc-cluster-node-1\n#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER\npxc_strict_mode=ENFORCING\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    "},{"location":"configure-nodes.html#next-steps-bootstrap-the-first-node","title":"Next Steps: Bootstrap the first node","text":"

    After you configure all your nodes, initialize Percona XtraDB Cluster by bootstrapping the first node according to the procedure described in Bootstrapping the First Node.

    "},{"location":"configure-nodes.html#essential-configuration-variables","title":"Essential configuration variables","text":"

    wsrep_provider

    Specify the path to the Galera library. The location depends on the distribution:

    wsrep_cluster_name

    Specify the logical name for your cluster. It must be the same for all nodes in your cluster.

    wsrep_cluster_address

    Specify the IP addresses of nodes in your cluster. At least one is required for a node to join the cluster, but it is recommended to list addresses of all nodes. This way if the first node in the list is not available, the joining node can use other addresses.
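    For example, with the three nodes used throughout this guide:

    wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n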

    Note

    No addresses are required for the initial node in the cluster. However, it is recommended to specify them and properly bootstrap the first node. This will ensure that the node is able to rejoin the cluster if it goes down in the future.

    wsrep_node_name

    Specify the logical name for each individual node. If this variable is not specified, the host name will be used.

    wsrep_node_address

    Specify the IP address of this particular node.

    wsrep_sst_method

    By default, Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer. xtrabackup-v2 is the only supported option for this variable. This method requires a user for SST to be set up on the initial node.

    pxc_strict_mode

    PXC Strict Mode is enabled by default and set to ENFORCING, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.

    binlog_format

    Galera supports only row-level replication, so set binlog_format=ROW.

    default_storage_engine

    Galera fully supports only the InnoDB storage engine. It will not work correctly with MyISAM or any other non-transactional storage engines. Set this variable to default_storage_engine=InnoDB.

    innodb_autoinc_lock_mode

    Galera supports only interleaved (2) lock mode for InnoDB. Setting the traditional (0) or consecutive (1) lock mode can cause replication to fail due to unresolved deadlocks. Set this variable to innodb_autoinc_lock_mode=2.

    "},{"location":"configure-nodes.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"copyright-and-licensing-information.html","title":"Copyright and licensing information","text":""},{"location":"copyright-and-licensing-information.html#documentation-licensing","title":"Documentation licensing","text":"

    Percona XtraDB Cluster documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.

    "},{"location":"copyright-and-licensing-information.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"crash-recovery.html","title":"Crash recovery","text":"

    Unlike standard MySQL replication, a PXC cluster acts as one logical entity, which controls the status and consistency of each node as well as the status of the whole cluster. This allows maintaining data integrity more efficiently than traditional asynchronous replication, without losing safe writes on multiple nodes at the same time.

    However, there are scenarios where the database service can stop with no node being able to serve requests.

    "},{"location":"crash-recovery.html#scenario-1-node-a-is-gracefully-stopped","title":"Scenario 1: Node A is gracefully stopped","text":"

    In a three node cluster (node A, Node B, node C), one node (node A, for example) is gracefully stopped: for the purpose of maintenance, configuration change, etc.

    In this case, the other nodes receive a \u201cgood bye\u201d message from the stopped node and the cluster size is reduced; some properties like quorum calculation or auto increment are automatically changed. As soon as node A is started again, it joins the cluster based on its wsrep_cluster_address variable in my.cnf.

    If the writeset cache (gcache.size) on node B and/or node C still has all the transactions executed while node A was down, joining is possible via IST. If IST is impossible due to missing transactions in the donor\u2019s gcache, the fallback decision is made by the donor and SST is started automatically.
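    The writeset cache size is set through the Galera provider options. A minimal sketch of enlarging it in my.cnf (the 2G value is only an example; other provider options share the same semicolon-separated string):

    [mysqld]\nwsrep_provider_options="gcache.size=2G"\n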

    "},{"location":"crash-recovery.html#scenario-2-two-nodes-are-gracefully-stopped","title":"Scenario 2: Two nodes are gracefully stopped","text":"

    Similar to Scenario 1: Node A is gracefully stopped, the cluster size is reduced to 1. Even so, the single remaining node C forms the primary component and is able to serve client requests. To get the nodes back into the cluster, you just need to start them.

    However, when a new node joins the cluster, node C is switched to the \u201cDonor/Desynced\u201d state because it has to provide the state transfer at least to the first joining node. It is still possible to read from and write to it during that process, but it may be much slower, depending on how much data must be sent during the state transfer. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So, it is best to avoid the situation when only one node is up.

    If you restart node A and then node B, you may want to make sure node B does not use node A as the state transfer donor: node A may not have all the needed writesets in its gcache. Specify node C as the donor in your configuration file and start the mysql service:

    $ systemctl start mysql\n
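    The donor can be set explicitly in the configuration file of node B before it is started. This is a sketch that assumes node C uses the wsrep_node_name value pxc3:

    [mysqld]\nwsrep_sst_donor=pxc3\n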

    See also

    Galera Documentation: wsrep_sst_donor option

    "},{"location":"crash-recovery.html#scenario-3-all-three-nodes-are-gracefully-stopped","title":"Scenario 3: All three nodes are gracefully stopped","text":"

    The cluster is completely stopped, and the problem is how to initialize it again. It is important that each PXC node writes its last executed position to the grastate.dat file.

    By comparing the seqno number in this file, you can see which node is the most advanced one (most likely the last stopped). The cluster must be bootstrapped using this node; otherwise, nodes that had a more advanced position will have to perform a full SST to join the cluster initialized from the less advanced one, and some transactions will be lost as a result. To bootstrap the first node, invoke the startup script like this:

    $ systemctl start mysql@bootstrap.service\n

    Note

    Even though you bootstrap from the most advanced node, the other nodes have a lower sequence number. They will still have to join via the full SST because the Galera Cache is not retained on restart.

    For this reason, it is recommended to stop writes to the cluster before its full shutdown, so that all nodes can stop at the same position. See also pc.recovery.

    "},{"location":"crash-recovery.html#scenario-4-one-node-disappears-from-the-cluster","title":"Scenario 4: One node disappears from the cluster","text":"

    This is the case when one node becomes unavailable due to power outage, hardware failure, kernel panic, mysqld crash, kill -9 on mysqld pid, etc.

    The two remaining nodes notice that the connection to node A is down and start trying to re-connect to it. After several timeouts, node A is removed from the cluster. Quorum is preserved (2 out of 3 nodes are up), so no service disruption happens. After node A is restarted, it joins the cluster automatically (as described in Scenario 1: Node A is gracefully stopped).

    "},{"location":"crash-recovery.html#scenario-5-two-nodes-disappear-from-the-cluster","title":"Scenario 5: Two nodes disappear from the cluster","text":"

    Two nodes are not available and the remaining node (node C) is not able to form the quorum alone. The cluster has to switch to non-primary mode, where MySQL refuses to serve any SQL queries. In this state, the mysqld process on node C is still running and can be connected to, but any statement related to data fails with an error:

    > SELECT * FROM test.sbtest1;\n
    The error message
    ERROR 1047 (08S01): WSREP has not yet prepared node for application use\n

    Reads are possible until node C decides that it cannot access node A and node B. New writes are forbidden.

    As soon as the other nodes become available, the cluster is formed again automatically. If node B and node C were just network-severed from node A, but they can still reach each other, they will keep functioning as they still form the quorum.

    If node A and node B crashed, you need to enable the primary component on node C manually, before you can bring up node A and node B. The command to do this is:

    > SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

    This approach only works if the other nodes are really down before you run it! Otherwise, you end up with two clusters having different data.
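
    To verify that the node has formed the Primary Component again, check the wsrep_cluster_status status variable; it should report Primary:

    > SHOW STATUS LIKE 'wsrep_cluster_status';\n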

    See also

    Adding Nodes to Cluster

    "},{"location":"crash-recovery.html#scenario-6-all-nodes-went-down-without-a-proper-shutdown-procedure","title":"Scenario 6: All nodes went down without a proper shutdown procedure","text":"

    This scenario is possible in case of a datacenter power failure or when hitting a MySQL or Galera bug. Also, it may happen as a result of data consistency being compromised where the cluster detects that each node has different data. The grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

    $ cat /var/lib/mysql/grastate.dat\n# GALERA saved state\nversion: 2.1\nuuid: 220dcdcb-1629-11e4-add3-aec059ad3734\nseqno: -1\nsafe_to_bootstrap: 0\n

    In this case, you cannot be sure that all nodes are consistent with each other. You cannot use the safe_to_bootstrap variable to determine the node that has the last transaction committed, because it is set to 0 for each node. An attempt to bootstrap from such a node fails unless you start mysqld with the --wsrep-recover parameter:

    $ mysqld --wsrep-recover\n

    Search the output for the line that reports the recovered position after the node UUID (1122 in this case):

    Expected output
    ...\n... [Note] WSREP: Recovered position: 220dcdcb-1629-11e4-add3-aec059ad3734:1122\n...\n

    The node where the recovered position is marked by the greatest number is the best bootstrap candidate. In its grastate.dat file, set the safe_to_bootstrap variable to 1. Then, bootstrap from this node.
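
    One way to do this, assuming the default data directory /var/lib/mysql, is to flip the flag in place and then bootstrap:

    $ sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat\n$ systemctl start mysql@bootstrap.service\n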

    Note

    After a shutdown, you can bootstrap from the node which is marked as safe in the grastate.dat file.

    ...\nsafe_to_bootstrap: 1\n...\n

    See also

    Galera Documentation: Introducing the Safe-To-Bootstrap feature in Galera Cluster

    In recent Galera versions, the option pc.recovery (enabled by default) saves the cluster state into a file named gvwstate.dat on each member node. As the name of this option suggests (pc stands for primary component), it saves the state only when the cluster is in the PRIMARY state. An example of the file content may look like this:

    cat /var/lib/mysql/gvwstate.dat\nmy_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9\n#vwbeg\nview_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3\nbootstrap: 0\nmember: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0\nmember: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0\nmember: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0\n#vwend\n

    This shows a three-node cluster with all members up. Thanks to this feature, the nodes will try to restore the primary component once all the members start to see each other again. This makes the PXC cluster automatically recover from being powered down without any manual intervention.

    "},{"location":"crash-recovery.html#scenario-7-the-cluster-loses-its-primary-state-due-to-split-brain","title":"Scenario 7: The cluster loses its primary state due to split brain","text":"

    For the purpose of this example, let\u2019s assume we have a cluster that consists of an even number of nodes: six, for example. Three of them are in one location while the other three are in another location, and they lose network connectivity. It is best practice to avoid such a topology: if you cannot have an odd number of real nodes, you can use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes. But when the split brain happens anyway, none of the separated groups can maintain the quorum: all nodes must stop serving requests, and both parts of the cluster will continuously try to re-connect.

    If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as described in Scenario 5: Two nodes disappear from the cluster

    > SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

    After this, you are able to work on the manually restored part of the cluster, and the other half should be able to automatically re-join using IST as soon as the network link is restored.

    Warning

    If you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in this case will not make them re-join until the nodes are restarted and the members specified in the configuration file are connected again.

    Because the Galera replication model truly cares about data consistency, once an inconsistency is detected, a node that cannot execute a row change statement due to a data difference performs an emergency shutdown, and the only way to bring it back into the cluster is via a full SST.

    This article is based on the blog post Galera replication - how to recover a PXC cluster by Przemys\u0142aw Malkowski on the Percona Database Performance Blog: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

    "},{"location":"crash-recovery.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"data-at-rest-encryption.html","title":"Data at Rest Encryption","text":""},{"location":"data-at-rest-encryption.html#introduction","title":"Introduction","text":"

    Data at rest encryption refers to encrypting data stored on a disk on a server. If an unauthorized user accesses the data files from the file system, encryption ensures the user cannot read the file contents. Percona Server for MySQL allows you to enable, disable, and apply encryption to the following objects:

    Data in transit is data that is transmitted to another node or to a client. Data in transit is encrypted using an SSL connection.

    Percona XtraDB Cluster 8.0 supports all data at rest generally-available encryption features available from Percona Server for MySQL 8.0.

    "},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_file-plugin","title":"Configure PXC to use keyring_file plugin","text":""},{"location":"data-at-rest-encryption.html#configuration","title":"Configuration","text":"

    Percona XtraDB Cluster inherits the Percona Server for MySQL behavior to configure the keyring_file plugin. The following example illustrates using the plugin. Review Use the keyring component or keyring plugin for the latest information on the keyring component and plugin.

    Note

    The keyring_file plugin should not be used for regulatory compliance.

    Install the plugin and add the following options in the configuration file:

    [mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=<PATH>/keyring\n

    The SHOW PLUGINS statement checks if the plugin has been successfully loaded.
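
    For example, you can confirm that the plugin is ACTIVE either with SHOW PLUGINS or by querying INFORMATION_SCHEMA.PLUGINS:

    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';\n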

    Note

    PXC recommends the same configuration on all cluster nodes, and all nodes should have the keyring configured. A mismatch in the keyring configuration does not allow the JOINER node to join the cluster.

    If the user has a bootstrapped node with keyring enabled, then upcoming cluster nodes inherit the keyring (the encrypted key) from the DONOR node.

    "},{"location":"data-at-rest-encryption.html#usage","title":"Usage","text":"

    During SST, XtraBackup re-encrypts the data using a transition key, and the JOINER node re-encrypts it using a newly generated master key.

    To maintain data consistency, Percona XtraDB Cluster does not allow mixing nodes with encryption and nodes without encryption. For example, suppose the user creates node-1 with encryption (keyring) enabled and node-2 with encryption (keyring) disabled. If the user attempts to create a table with encryption on node-1, the creation fails on node-2, causing data inconsistency. A node also fails to start if it cannot load the keyring plugin.

    Note

    If the user does not specify the keyring parameters, the node does not know that it must load the keyring. The JOINER node may start, but it eventually shuts down when the DML level inconsistency with encrypted tablespace is detected.

    If a node does not have an encrypted tablespace, the keyring is not generated, and the keyring file is empty. Creating an encrypted table on the node generates the keyring.
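
    For example, creating any table with the ENCRYPTION clause triggers generation of the master key (the table name is illustrative):

    mysql> CREATE TABLE t1 (id INT PRIMARY KEY) ENCRYPTION='Y';\n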

    In an operation that is local to the node, you can rotate the master key as needed. The ALTER INSTANCE ROTATE INNODB MASTER KEY statement is not replicated across the cluster.
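
    For example, to rotate the master key locally on one node:

    mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;\n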

    The JOINER node generates its keyring.

    "},{"location":"data-at-rest-encryption.html#compatibility","title":"Compatibility","text":"

    Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible. A JOINER running a higher version can join a DONOR running a lower version, but not vice versa.

    "},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_vault-plugin","title":"Configure PXC to use keyring_vault plugin","text":""},{"location":"data-at-rest-encryption.html#keyring_vault","title":"keyring_vault","text":"

    The keyring_vault plugin allows storing the master key on a Vault server (rather than in a local file, as with keyring_file).

    Warning

    The rsync tool does not support keyring_vault. Any rsync-based SST on a joiner is aborted if keyring_vault is configured.

    "},{"location":"data-at-rest-encryption.html#configuration_1","title":"Configuration","text":"

    Configuration options are the same as upstream. The my.cnf configuration file should contain the following options:

    [mysqld]\nearly-plugin-load=\"keyring_vault=keyring_vault.so\"\nkeyring_vault_config=\"<PATH>/keyring_vault_n1.conf\"\n

    Also, the keyring_vault_n1.conf file should contain the following:

    vault_url = http://127.0.0.1:8200\nsecret_mount_point = secret1\ntoken = e0345eb4-35dd-3ddd-3b1e-e42bb9f2525d\nvault_ca = /data/keyring_vault_confs/vault_ca.crt\n

    The detailed description of these options can be found in the upstream documentation.

    The Vault server is an external server, so make sure the PXC node is able to reach it.
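
    As a rough reachability check before starting mysqld, you can query the Vault health endpoint from the node (the URL is taken from the example configuration above):

    $ curl -s http://127.0.0.1:8200/v1/sys/health\n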

    Note

    Percona recommends using the same keyring plugin type on all cluster nodes. Mixing keyring plugin types is recommended only while transitioning from keyring_file to keyring_vault or vice versa.

    The nodes are not required to refer to the same Vault server. Whichever Vault server is used must be accessible from the respective node. The nodes also do not need to use the same mount point.

    If the node is not able to reach or connect to the Vault server, an error is reported during server boot, and the node refuses to start:

    The warning message
    2018-05-29T03:54:33.859613Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:54:33.977145Z 0 [ERROR] Plugin keyring_vault reported:\n'CURL returned this error code: 7 with error message : Failed to connect\nto 127.0.0.1 port 8200: Connection refused'\n

    If some nodes of the cluster are unable to connect to the Vault server, this affects only those specific nodes: for example, if node-1 can connect and node-2 cannot, only node-2 refuses to start. Also, if a server has a pre-existing encrypted object and fails to connect to the Vault server on reboot, the object is not accessible.

    If the Vault server is accessible but the authentication credential is incorrect, the consequences are the same, and the corresponding error looks like the following:

    The warning message
    2018-05-29T03:58:54.461911Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:58:54.577477Z 0 [ERROR] Plugin keyring_vault reported:\n'Could not retrieve list of keys from Vault. Vault has returned the\nfollowing error(s): [\"permission denied\"]'\n

    If the Vault server is accessible but the wrong mount point is configured, there is no error during server boot, but the node still refuses to start:

    mysql> CREATE TABLE t1 (c1 INT, PRIMARY KEY pk(c1)) ENCRYPTION='Y';\n
    Expected output
    ERROR 3185 (HY000): Can't find master key from keyring, please check keyring\nplugin is loaded.\n\n... [ERROR] Plugin keyring_vault reported: 'Could not write key to Vault. ...\n... [ERROR] Plugin keyring_vault reported: 'Could not flush keys to keyring'\n
    "},{"location":"data-at-rest-encryption.html#mix-keyring-plugin-types","title":"Mix keyring plugin types","text":"

    With XtraBackup introducing transition-key logic, it is now possible to mix and match keyring plugins. For example, the user has node-1 configured to use the keyring_file plugin and node-2 configured to use keyring_vault.

    Note

    Percona recommends the same configuration for all nodes of the cluster. Mixing keyring plugin types is recommended only during the transition from one keyring type to another.

    "},{"location":"data-at-rest-encryption.html#temporary-file-encryption","title":"Temporary file encryption","text":""},{"location":"data-at-rest-encryption.html#migrate-keys-between-keyring-keystores","title":"Migrate keys between keyring keystores","text":"

    Percona XtraDB Cluster supports key migration between keystores. The migration can be performed offline or online.

    "},{"location":"data-at-rest-encryption.html#offline-migration","title":"Offline migration","text":"

    In offline migration, the node to migrate is shut down, and the migration server takes care of migrating keys for that server to a new keystore.

    For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file. To migrate the n2 node to use keyring_vault, use the following procedure:

    1. Shut down the n2 node.

    2. Start the Migration Server (mysqld with a special option).

    3. The Migration Server copies the keys from the n2 keyring file and adds them to the vault server.

    4. Start the n2 node with the vault parameter, and the keys are available.

    Here is what the migration server output should look like:

    Expected output
    /dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node2/keyring \\\n--keyring-migration-destination=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/vault/keyring_vault.cnf &\n\n... [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use\n    --explicit_defaults_for_timestamp server option (see documentation for more details).\n... [Note] --secure-file-priv is set to NULL. Operations related to importing and\n    exporting data are disabled\n... [Warning] WSREP: Node is not a cluster node. Disabling pxc_strict_mode\n... [Note] /dev/shm/pxc80/bin/mysqld (mysqld 8.0-debug) starting as process 5710 ...\n... [Note] Keyring migration successful.\n

    On a successful migration, the destination keystore receives additional migrated keys (pre-existing keys in the destination keystore are not touched or removed). The source keystore retains the keys as the migration performs a copy operation and not a move operation.

    If the migration fails, the destination keystore is unchanged.

    "},{"location":"data-at-rest-encryption.html#online-migration","title":"Online migration","text":"

    In online migration, the node to migrate is kept running, and the migration server migrates keys for that server to a new keystore by connecting to the node.

    For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file. Migrate the n3 node to use keyring_vault using the following procedure:

    1. Start the Migration Server (mysqld with a special option).

    2. The Migration Server copies the keys from the n3 keyring file and adds them to the vault server.

    3. Restart the n3 node with the vault parameter, and the keys are available.

    /dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node3/keyring \\\n--keyring-migration-destination=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/keyring_vault3.cnf \\\n--keyring-migration-host=localhost \\\n--keyring-migration-user=root \\\n--keyring-migration-port=16300 \\\n--keyring-migration-password='' &\n

    On a successful migration, the destination keystore receives the additional migrated keys. Any pre-existing keys in the destination keystore are unchanged. The source keystore retains the keys as the migration performs a copy operation and not a move operation.

    If the migration fails, the destination keystore is not changed.

    "},{"location":"data-at-rest-encryption.html#migration-server-options","title":"Migration server options","text":"

    Prerequisite for migration:

    Make sure to pass required keyring options and other configuration parameters for the two keyring plugins. For example, if keyring_file is one of the plugins, you must explicitly configure the keyring_file_data system variable in the my.cnf file.

    Other non-keyring options may be required as well. One way to specify these options is by using --defaults-file to name an option file that contains the required options.

    [mysqld]\nbasedir=/dev/shm/pxc80\ndatadir=/dev/shm/pxc80/copy_mig\nlog-error=/dev/shm/pxc80/logs/copy_mig.err\nsocket=/tmp/copy_mig.sock\nport=16400\n

    See also

    Encrypt traffic documentation

    Percona Server for MySQL Documentation: Data-at-Rest Encryption https://www.percona.com/doc/percona-server/8.0/security/data-at-rest-encryption.html#data-at-rest-encryption

    "},{"location":"data-at-rest-encryption.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"docker.html","title":"Running Percona XtraDB Cluster in a Docker Container","text":"

    Docker images of Percona XtraDB Cluster are hosted publicly on Docker Hub at https://hub.docker.com/r/percona/percona-xtradb-cluster/.

    For more information about using Docker, see the Docker Docs. Make sure that you are using the latest version of Docker. The ones provided via apt and yum may be outdated and cause errors.

    We gather Telemetry data in the Percona packages and Docker images.

    Note

    By default, Docker pulls the image from Docker Hub if the image is not available locally.

    The image contains only the most essential binaries for Percona XtraDB Cluster to run. Some utilities included in a Percona Server for MySQL or MySQL installation might be missing from the Percona XtraDB Cluster Docker image.

    The following procedure describes how to set up a simple 3-node cluster for evaluation and testing purposes. Do not use these instructions in a production environment because the MySQL certificates generated in this procedure are self-signed. For a production environment, you should generate and store the certificates to be used by Docker.

    In this procedure, all of the nodes run Percona XtraDB Cluster 8.0 in separate containers on one host:

    1. Create a ~/pxc-docker-test/config directory.

    2. Create a custom.cnf file with the following contents, and place the file in the new directory:

      [mysqld]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n\n[client]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/client-cert.pem\nssl-key = /cert/client-key.pem\n\n[sst]\nencrypt = 4\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n
    3. Create a cert directory and generate self-signed SSL certificates on the host node:

      $ mkdir -m 777 -p ~/pxc-docker-test/cert\n$ docker run --name pxc-cert --rm -v ~/pxc-docker-test/cert:/cert \\\npercona/percona-xtradb-cluster:8.0 mysql_ssl_rsa_setup -d /cert\n
    4. Create a Docker network:

      $ docker network create pxc-network\n
    5. Bootstrap the cluster (create the first node):

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  --name=pxc-node1 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n
    6. Join the second node:

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  -e CLUSTER_JOIN=pxc-node1 \\\n  --name=pxc-node2 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n
    7. Join the third node:

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  -e CLUSTER_JOIN=pxc-node1 \\\n  --name=pxc-node3 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n

    To verify the cluster is available, do the following:

    1. Access the MySQL client. For example, on the first node:

      $ sudo docker exec -it pxc-node1 /usr/bin/mysql -uroot -ptest1234#\n
      Expected output
      mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 12\n...\nYou are enforcing ssl connection via unix socket. Please consider\nswitching ssl off as it does not make connection via unix socket\nany more secure\n\nmysql>\n
    2. View the wsrep status variables:

      mysql> show status like 'wsrep%';\n
      Expected output
      +------------------------------+-------------------------------------------------+\n| Variable_name                | Value                                           |\n+------------------------------+-------------------------------------------------+\n| wsrep_local_state_uuid       | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |\n...\n| wsrep_local_state_comment    | Synced                                          |\n...\n| wsrep_incoming_addresses     | 172.18.0.2:3306,172.18.0.3:3306,172.18.0.4:3306 |\n...\n| wsrep_cluster_conf_id        | 3                                               |\n| wsrep_cluster_size           | 3                                               |\n| wsrep_cluster_state_uuid     | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |\n| wsrep_cluster_status         | Primary                                         |\n| wsrep_connected              | ON                                              |\n...\n| wsrep_ready                  | ON                                              |\n+------------------------------+-------------------------------------------------+\n59 rows in set (0.02 sec)\n
    "},{"location":"docker.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"encrypt-traffic.html","title":"Encrypt PXC traffic","text":"

    There are two kinds of traffic in Percona XtraDB Cluster:

    1. Client-server traffic (the one between client applications and cluster nodes),

    2. Replication traffic, that includes SST, IST, write-set replication, and various service messages.

    Percona XtraDB Cluster supports encryption for all types of traffic. Replication traffic encryption can be configured either automatically or manually.

    "},{"location":"encrypt-traffic.html#encrypt-client-server-communication","title":"Encrypt client-server communication","text":"

    Percona XtraDB Cluster uses the underlying MySQL encryption mechanism to secure communication between client applications and cluster nodes.

    MySQL generates default key and certificate files and places them in the data directory. You can override auto-generated files with manually created ones, as described in the section Generate keys and certificates manually.

    The auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes.

    Specify the following settings in the my.cnf configuration file for each node:

    [mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n\n[client]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/client-cert.pem\nssl-key=/etc/mysql/certs/client-key.pem\n

    After it is restarted, the node uses these files to encrypt communication with clients. MySQL clients require only the second part of the configuration to communicate with cluster nodes.
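
    To confirm that a client session is actually encrypted, check the Ssl_cipher status variable from that session; a non-empty value indicates that TLS is in use:

    mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';\n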

    MySQL generates the default key and certificate files and places them in the data directory. You can either use them or generate new certificates. To generate new certificates, refer to the Generate keys and certificates manually section.

    "},{"location":"encrypt-traffic.html#encrypt-replication-traffic","title":"Encrypt replication traffic","text":"

    Replication traffic refers to the inter-node traffic, which includes SST traffic, IST traffic, and write-set replication traffic.

    Each type of traffic is transferred via a different channel, so it is important to configure secure channels for all three of them to completely secure the replication traffic.

    Percona XtraDB Cluster supports a single configuration option which helps to secure the complete replication traffic, and is often referred to as SSL automatic configuration. You can also configure the security of each channel by specifying independent parameters.

    "},{"location":"encrypt-traffic.html#ssl-automatic-configuration","title":"SSL automatic configuration","text":"

    The automatic configuration of the SSL encryption needs a key and certificate files. MySQL generates a default key and certificate files and places them in the data directory.

    Important

    It is important that your cluster uses the same SSL certificates on all nodes.

    "},{"location":"encrypt-traffic.html#enable-pxc-encrypt-cluster-traffic","title":"Enable pxc-encrypt-cluster-traffic","text":"

    Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic variable that enables automatic configuration of SSL encryption, thereby encrypting SST, IST, and replication traffic.

    By default, pxc-encrypt-cluster-traffic is enabled, so a secured channel is used for replication. This variable is not dynamic, so it cannot be changed at runtime.

    When enabled, pxc-encrypt-cluster-traffic has the effect of applying the following settings: encrypt, ssl_key, ssl-ca, ssl-cert.

    Setting pxc-encrypt-cluster-traffic=ON has the effect of applying the following settings in the my.cnf configuration file:

    [mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n

    For wsrep_provider_options, only the mentioned options are affected (socket.ssl_key, socket.ssl_cert, and socket.ssl_ca); the rest are not modified.

    Important

    Disabling pxc-encrypt-cluster-traffic

    The default value of pxc-encrypt-cluster-traffic helps improve the security of your system.

    When pxc-encrypt-cluster-traffic is not enabled, anyone with access to your network can connect to any PXC node, either as a client or as another node joining the cluster. This potentially lets them query your data or get a complete copy of it.

    If you must disable pxc-encrypt-cluster-traffic, you need to stop the cluster and set pxc-encrypt-cluster-traffic=OFF in the [mysqld] section of the configuration file on each node. Then, restart the cluster.
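
    A minimal sketch of the change on each node:

    [mysqld]\npxc-encrypt-cluster-traffic=OFF\n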

    The automatic configuration of the SSL encryption needs key and certificate files. MySQL generates default key and certificate files and places them in the data directory. These auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes. You can also override the auto-generated files with manually created ones, as covered in Generate keys and certificates manually.

    The necessary key and certificate files are first searched for at the locations specified by the ssl-ca, ssl-cert, and ssl-key options under [mysqld]. If these options are not set, the data directory is searched for the ca.pem, server-cert.pem, and server-key.pem files.

    Note

    The [sst] section is not searched.

    If all three files are found, they are used to configure encryption. If any of the files is missing, a fatal error is generated.

    "},{"location":"encrypt-traffic.html#ssl-manual-configuration","title":"SSL manual configuration","text":"

    If you want to enable encryption only for a specific channel, or to use different certificates or another mix-and-match setup, you can opt for manual configuration. This provides more flexibility.

    To enable encryption manually, the location of the required key and certificate files should be specified in the Percona XtraDB Cluster configuration. If you do not have the necessary files, see Generate keys and certificates manually.

    Note

    Encryption settings are not dynamic. To enable it on a running cluster, you need to restart the entire cluster.

    There are three aspects of Percona XtraDB Cluster operation where you can enable encryption:

    "},{"location":"encrypt-traffic.html#encrypt-sst-traffic","title":"Encrypt SST traffic","text":"

    This refers to full data transfer that usually occurs when a new node (JOINER) joins the cluster and receives data from an existing node (DONOR).

    For more information, see State snapshot transfer.

    Note

    If the keyring_file plugin is used, then SST encryption is mandatory: when copying encrypted data via SST, the keyring must be sent over with the files for decryption. In this case, set the following options in my.cnf on all nodes:

    early-plugin-load=keyring_file.so\nkeyring-file-data=/path/to/keyring/file\n

    The cluster will not work if keyring configuration across nodes is different.

    The only available SST method is xtrabackup-v2 which uses Percona XtraBackup.

    "},{"location":"encrypt-traffic.html#xtrabackup","title":"xtrabackup","text":"

    This is the only available SST method (the wsrep_sst_method is always set to xtrabackup-v2), which uses Percona XtraBackup to perform non-blocking transfer of files. For more information, see Percona XtraBackup SST Configuration.

    The encryption mode for this method is selected using the encrypt option.

    To enable encryption for SST using XtraBackup, specify the location of the keys and certificate files in each node\u2019s configuration under [sst]:

    [sst]\nencrypt=4\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n

    Note

    SSL clients require DH parameters to be at least 1024 bits, due to the logjam vulnerability. However, versions of socat earlier than 1.7.3 use 512-bit parameters. If a dhparams.pem file of required length is not found during SST in the data directory, it is generated with 2048 bits, which can take several minutes. To avoid this delay, create the dhparams.pem file manually and place it in the data directory before joining the node to the cluster:

    $ openssl dhparam -out /path/to/datadir/dhparams.pem 2048\n

    For more information, see this blog post.

    "},{"location":"encrypt-traffic.html#encrypt-replicationist-traffic","title":"Encrypt replication/IST traffic","text":"

    Replication traffic refers to the following:

    All of this traffic is transferred via the same underlying communication channel (gcomm). Securing this channel ensures that IST traffic, write-set replication, and service messages are encrypted. (For IST, a separate channel is configured using the same parameters, so the two are described together.)

    To enable encryption for all these processes, define the paths to the key, certificate and certificate authority files using the following wsrep provider options:

    To set these options, use the wsrep_provider_options variable in the configuration file:

    wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/ca.pem;socket.ssl_cert=/etc/mysql/certs/server-cert.pem;socket.ssl_key=/etc/mysql/certs/server-key.pem\"\n

    Note

    You must use the same key and certificate files on all nodes, preferably those used for Encrypt client-server communication.

    See the Upgrade certificates section for how to upgrade existing certificates.

    "},{"location":"encrypt-traffic.html#generate-keys-and-certificates-manually","title":"Generate keys and certificates manually","text":"

    As mentioned above, MySQL generates default key and certificate files and places them in the data directory. If you want to override these certificates, generate the following new sets of files:

    These files should be generated using OpenSSL.

    Note

    The Common Name value used for the server and client keys and certificates must differ from that value used for the CA certificate.

    The following procedures generate the CA key and certificate, the server key and certificate, and the client key and certificate.

    The Certificate Authority is used to verify the signature on certificates.

    1. Generate the CA key file:

      $ openssl genrsa 2048 > ca-key.pem\n
    2. Generate the CA certificate file:

      $ openssl req -new -x509 -nodes -days 3600\n    -key ca-key.pem -out ca.pem\n
    1. Generate the server key file:

      $ openssl req -newkey rsa:2048 -days 3600 \\\n    -nodes -keyout server-key.pem -out server-req.pem\n
    2. Remove the passphrase:

      $ openssl rsa -in server-key.pem -out server-key.pem\n
    3. Generate the server certificate file:

      $ openssl x509 -req -in server-req.pem -days 3600 \\\n    -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n    -out server-cert.pem\n
    1. Generate the client key file:

      $ openssl req -newkey rsa:2048 -days 3600 \\\n    -nodes -keyout client-key.pem -out client-req.pem\n
    2. Remove the passphrase:

      $ openssl rsa -in client-key.pem -out client-key.pem\n
    3. Generate the client certificate file:

      $ openssl x509 -req -in client-req.pem -days 3600 \\\n   -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n   -out client-cert.pem\n
    "},{"location":"encrypt-traffic.html#verify-certificates","title":"Verify certificates","text":"

    To verify that the server and client certificates are correctly signed by the CA certificate, run the following command:

    $ openssl verify -CAfile ca.pem server-cert.pem client-cert.pem\n

    If the verification is successful, you should see the following output:

    server-cert.pem: OK\nclient-cert.pem: OK\n
    "},{"location":"encrypt-traffic.html#failed-validation-caused-by-matching-cn","title":"Failed validation caused by matching CN","text":"

    Sometimes, an SSL configuration may fail if the certificate and the CA files contain the same Common Name (CN).

    To check if this is the case, run the openssl command as follows and verify that the CN field differs between the Subject and Issuer lines.

    $ openssl x509 -in server-cert.pem -text -noout\n

    Incorrect values

    Certificate:\nData:\nVersion: 1 (0x0)\nSerial Number: 1 (0x1)\nSignature Algorithm: sha256WithRSAEncryption\nIssuer: CN=www.percona.com, O=Database Performance., C=US\n...\nSubject: CN=www.percona.com, O=Database Performance., C=AU\n...\n

    To obtain a more compact output run openssl specifying -subject and -issuer parameters:

    $ openssl x509 -in server-cert.pem -subject -issuer -noout\n
    Expected output
    subject= /CN=www.percona.com/O=Database Performance./C=AU\nissuer= /CN=www.percona.com/O=Database Performance./C=US\n
    "},{"location":"encrypt-traffic.html#deploy-keys-and-certificates","title":"Deploy keys and certificates","text":"

    Use a secure method (for example, scp or sftp) to send the key and certificate files to each node. Place them under the /etc/mysql/certs/ directory or similar location where you can find them later.

    Note

    Make sure that this directory is protected with proper permissions. Most likely, you only want to give read permissions to the user running mysqld.

    The following files are required:

    ca.pem: This file is used to verify signatures.

    server-cert.pem and server-key.pem: These files are used to secure database server activity and write-set replication traffic.

    client-cert.pem and client-key.pem: These files are required only if the node should act as a MySQL client. For example, if you are planning to perform SST using mysqldump.

    Note

    Upgrade certificates subsection covers the details on upgrading certificates, if necessary.

    "},{"location":"encrypt-traffic.html#upgrade-certificates","title":"Upgrade certificates","text":"

    The following procedure shows how to upgrade certificates used for securing replication traffic when there are two nodes in the cluster.

    1. Restart the first node with the socket.ssl_ca option set to a combination of the old and new certificates in a single file.

      For example, you can merge contents of old-ca.pem and new-ca.pem into upgrade-ca.pem as follows:

      $ cat old-ca.pem > upgrade-ca.pem && \\\ncat new-ca.pem >> upgrade-ca.pem\n

      Set the wsrep_provider_options variable as follows:

      wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/upgrade-ca.pem;socket.ssl_cert=/etc/mysql/certs/old-cert.pem;socket.ssl_key=/etc/mysql/certs/old-key.pem\"\n
    2. Restart the second node with the socket.ssl_ca, socket.ssl_cert, and socket.ssl_key options set to the corresponding new certificate files.

      wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/new-ca.pem;socket.ssl_cert=/etc/mysql/certs/new-cert.pem;socket.ssl_key=/etc/mysql/certs/new-key.pem\"\n
    3. Restart the first node with the new certificate files, as in the previous step.

    4. You can remove the old certificate files.

    "},{"location":"encrypt-traffic.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"failover.html","title":"Cluster failover","text":"

    Cluster membership is determined simply by which nodes are connected to the rest of the cluster; there is no configuration setting explicitly defining the list of all possible cluster nodes. Therefore, every time a node joins the cluster, the total size of the cluster is increased and when a node leaves (gracefully) the size is decreased.

    The size of the cluster is used to determine the required votes to achieve quorum. A quorum vote is done when a node or nodes are suspected to no longer be part of the cluster (they do not respond). This no response timeout is the evs.suspect_timeout setting in the wsrep_provider_options (default 5 sec), and when a node goes down ungracefully, write operations will be blocked on the cluster for slightly longer than that timeout.
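
    For example, this timeout can be tuned through the provider options in my.cnf; the value below simply restates the default of five seconds:

    [mysqld]\nwsrep_provider_options=\"evs.suspect_timeout=PT5S\"\n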

    Once a node (or nodes) is determined to be disconnected, the remaining nodes cast a quorum vote. If the majority of nodes from before the disconnect are still connected, that partition remains up. In the case of a network partition, some nodes will be alive and active on each side of the network disconnect. In this case, only the partition with quorum will continue. The partition(s) without quorum will change to the non-primary state.

    As a consequence, it is not possible to have safe automatic failover in a 2-node cluster, because the failure of one node will cause the remaining node to become non-primary. Moreover, any cluster with an even number of nodes (say two nodes in two different switches) has some possibility of a split brain situation, when neither partition is able to retain quorum if the connection between them is lost, and so they both become non-primary.

    Therefore, for automatic failover, the rule of 3s is recommended. It applies at various levels of your infrastructure, depending on how far the cluster is spread out to avoid single points of failure. For example:

    These rules will prevent split brain situations and ensure automatic failover works correctly.

    "},{"location":"failover.html#use-an-arbitrator","title":"Use an arbitrator","text":"

    If it is too expensive to add a third node, switch, network, or datacenter, you should use an arbitrator. An arbitrator is a voting member of the cluster that can receive and relay replication, but it does not persist any data, and runs its own daemon instead of mysqld. Placing even a single arbitrator in a 3rd location can add split brain protection to a cluster that is spread across only two nodes/locations.

    "},{"location":"failover.html#recover-a-non-primary-cluster","title":"Recover a non-primary cluster","text":"

    It is important to note that the rule of 3s applies only to automatic failover. In the event of a 2-node cluster (or in the event of some other outage that leaves a minority of nodes active), the failure of one node will cause the other to become non-primary and refuse operations. However, you can recover the node from non-primary state using the following command:

    SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

    This will tell the node (and all nodes still connected to its partition) that it can become a primary cluster. However, this is only safe to do when you are sure there is no other partition operating in primary as well, or else Percona XtraDB Cluster will allow those two partitions to diverge (and you will end up with two databases that are impossible to re-merge automatically).

    For example, assume there are two data centers, where one is primary and one is for disaster recovery, with an even number of nodes in each. When an extra arbitrator node is run only in the primary data center, the following high availability features will be available:

    "},{"location":"failover.html#other-reading","title":"Other reading","text":""},{"location":"failover.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"faq.html","title":"Frequently asked questions","text":""},{"location":"faq.html#how-do-i-report-bugs","title":"How do I report bugs?","text":"

    All bugs can be reported on JIRA. Please submit error.log files from all the nodes.

    "},{"location":"faq.html#how-do-i-solve-locking-issues-like-auto-increment","title":"How do I solve locking issues like auto-increment?","text":"

    For auto-increment, Percona XtraDB Cluster changes auto_increment_offset for each new node. In a single-node workload, locking is handled in the same way as in InnoDB. In the case of write load on several nodes, Percona XtraDB Cluster uses optimistic locking, and the application may receive a lock error in response to a COMMIT query.

    "},{"location":"faq.html#what-if-a-node-crashes-and-innodb-recovery-rolls-back-some-transactions","title":"What if a node crashes and InnoDB recovery rolls back some transactions?","text":"

    When a node crashes, after restarting, it will copy the whole dataset from another\u00a0node (if there were changes to data since the crash).

    "},{"location":"faq.html#how-can-i-check-the-galera-node-health","title":"How can I check the Galera node health?","text":"

    To check the health of a Galera node, use the following query:

    mysql> SELECT 1 FROM dual;\n

    The following results of the previous query are possible:

    You can also check a node\u2019s health with the clustercheck script. First set up the clustercheck user:

    mysql> CREATE USER 'clustercheck'@'localhost' IDENTIFIED WITH mysql_native_password\nAS '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';\n

    You can then check a node\u2019s health by running the clustercheck script:

    $ /usr/bin/clustercheck clustercheck password 0\n

    If the node is running, you should get the following status:

    HTTP/1.1 200 OK\nContent-Type: text/plain\nConnection: close\nContent-Length: 40\n\nPercona XtraDB Cluster Node is synced.\n

    If the node is not synced or is offline, the status will look like this:

    HTTP/1.1 503 Service Unavailable\nContent-Type: text/plain\nConnection: close\nContent-Length: 44\n\nPercona XtraDB Cluster Node is not synced.\n

    Note

    The clustercheck script has the following syntax:

    <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>

    Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local

    Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local

    "},{"location":"faq.html#how-does-percona-xtradb-cluster-handle-big-transactions","title":"How does Percona XtraDB Cluster handle big transactions?","text":"

    Percona XtraDB Cluster populates the write set in memory before replication, which limits the size of transactions that make sense. There are wsrep variables for the maximum row count and the maximum size of a write set to make sure that the server does not run out of memory.
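
    The relevant variables are wsrep_max_ws_rows and wsrep_max_ws_size. A minimal sketch of setting them in my.cnf (the values are illustrative, not recommendations):

    [mysqld]\nwsrep_max_ws_rows=1048576\nwsrep_max_ws_size=1073741824\n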

    "},{"location":"faq.html#is-it-possible-to-have-different-table-structures-on-the-nodes","title":"Is it possible to have different table structures on the nodes?","text":"

    For example, if there are four nodes, with four tables: sessions_a, sessions_b, sessions_c, and sessions_d, and you want each table in a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.

    "},{"location":"faq.html#what-if-a-node-fails-or-there-is-a-network-issue-between-nodes","title":"What if a node fails or there is a network issue between nodes?","text":"

    The quorum mechanism in\u00a0Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.

    The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the sum weight of the nodes in a new component strictly exceeds half that of the preceding Primary Component, minus the nodes which left gracefully.

    The mechanism is described in detail in Galera documentation.

    "},{"location":"faq.html#how-would-the-quorum-mechanism-handle-split-brain","title":"How would the quorum mechanism handle split brain?","text":"

    The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve the split brain. The minimal recommendation is to have 3 nodes. However, it is possible to allow a node to handle traffic with the following option:

    wsrep_provider_options=\"pc.ignore_sb = yes\"\n
    "},{"location":"faq.html#why-a-node-stops-accepting-commands-if-the-other-one-fails-in-a-2-node-setup","title":"Why a node stops accepting commands if the other one fails in a 2-node setup?","text":"

    This is expected behavior to prevent split brain. For more information, see previous question or Galera documentation.

    "},{"location":"faq.html#is-it-possible-to-set-up-a-cluster-without-state-transfer","title":"Is it possible to set up a cluster without state transfer?","text":"

    It is possible in two ways:

    1. By default, Galera reads starting position from a text file <datadir>/grastate.dat. Make this file identical on all nodes, and there will be no state transfer after starting a node.

    2. Use the wsrep_start_position variable to start the nodes with the same UUID:seqno value.

    "},{"location":"faq.html#what-tcp-ports-are-used-by-percona-xtradb-cluster","title":"What TCP ports are used by Percona XtraDB Cluster?","text":"

    You may need to open up to four ports if you are using a firewall (see the example firewalld commands after this list):

    1. Regular MySQL port (default is 3306).

    2. Port for group communication (default is 4567). It can be changed using the following option:

      wsrep_provider_options =\"gmcast.listen_addr=tcp://0.0.0.0:4010; \"\n
    3. Port for State Snapshot Transfer (default is 4444). It can be changed using the following option:

      wsrep_sst_receive_address=10.11.12.205:5555\n
    4. Port for Incremental State Transfer (default is port for group communication + 1 or 4568). It can be changed using the following option:

      wsrep_provider_options = \"ist.recv_addr=10.11.12.206:7777; \"\n
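
    As a sketch for firewalld-based systems, using the default ports listed above:

    $ firewall-cmd --permanent --add-port=3306/tcp --add-port=4567/tcp --add-port=4444/tcp --add-port=4568/tcp\n$ firewall-cmd --reload\n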
    "},{"location":"faq.html#is-there-async-mode-or-only-sync-commits-are-supported","title":"Is there \u201casync\u201d mode or only \u201csync\u201d commits are supported?","text":"

    Percona XtraDB Cluster does not support \u201casync\u201d mode; all commits are synchronous on all nodes. To be precise, the commits are \u201cvirtually\u201d synchronous: the transaction must pass certification on all nodes, which does not require a physical commit on each node at commit time. Certification is a guarantee that the transaction does not conflict with other transactions on the corresponding node.

    "},{"location":"faq.html#does-it-work-with-regular-mysql-replication","title":"Does it work with regular MySQL replication?","text":"

    Yes. On the node you are going to use as the source, enable the log-bin and log-slave-updates options.

    "},{"location":"faq.html#why-the-init-script-etcinitdmysql-does-not-start","title":"Why the init script (/etc/init.d/mysql) does not start?","text":"

    Try to disable SELinux with the following command:

    $ echo 0 > /selinux/enforce\n
    "},{"location":"faq.html#what-does-nc-invalid-option-d-in-the-ssterr-log-file-mean","title":"What does \u201cnc: invalid option \u2013 \u2018d\u2019\u201d in the sst.err log file mean?","text":"

    This error is specific to Debian and Ubuntu. Percona XtraDB Cluster uses netcat-openbsd package. This dependency has been fixed. Future releases of Percona XtraDB Cluster will be compatible with any netcat (see bug PXC-941).

    "},{"location":"faq.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"garbd-howto.html","title":"Set up Galera arbitrator","text":"

    The size of a cluster increases when a node joins the cluster and decreases when a node leaves. A cluster reacts to replication problems with inconsistency voting. The size of the cluster determines the required votes to achieve a quorum. If a node no longer responds and is disconnected from the cluster the remaining nodes vote. The majority of the nodes that vote are considered to be in the cluster.

    The arbitrator is important if you have an even number of nodes remaining in the cluster. The arbitrator keeps the number of nodes as an odd number, which avoids the split-brain situation.

    A Galera Arbitrator is a lightweight member of a Percona XtraDB Cluster. This member can vote but does not do any replication and is not included in flow control calculations. The Galera Arbitrator is a separate daemon called garbd. You can start this daemon separately from the cluster and run this daemon either as a service or from the shell. You cannot configure this daemon using the my.cnf file.

    Note

    For more information on how to set up a cluster you can read in the Configuring Percona XtraDB Cluster on Ubuntu or Configuring Percona XtraDB Cluster on CentOS manuals.

    "},{"location":"garbd-howto.html#installation","title":"Installation","text":"

    Galera Arbitrator does not need a dedicated server and can be installed on a machine running other applications. The server must have good network connectivity.

    Galera Arbitrator can be installed from Percona\u2019s repository on Debian/Ubuntu distributions with the following command:

    root@ubuntu:~# apt install percona-xtradb-cluster-garbd\n

    Galera Arbitrator can be installed from Percona\u2019s repository on RedHat or derivative distributions with the following command:

    [root@centos ~]# yum install percona-xtradb-cluster-garbd\n
    "},{"location":"garbd-howto.html#start-garbd-and-configuration","title":"Start garbd and configuration","text":"

    Note

    On Percona XtraDB Cluster 8.0, SSL is enabled by default. To run the Galera Arbitrator, you must copy the SSL certificates and configure garbd to use the certificates.

    It is necessary to specify the cipher. In this example, it is AES128-SHA256. If you do not specify the cipher, an error occurs with a \u201cTerminate called after throwing an instance of \u2018gnu::NotSet\u2019\u201d message.

    For more information, see socket.ssl_cipher

    When starting from the shell, you can set the parameters from the command line or edit the configuration file. This is an example of starting from the command line:

    $ garbd --group=my_ubuntu_cluster \\\n--address=\"gcomm://192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\" \\\n--option=\"socket.ssl=YES; socket.ssl_key=/etc/ssl/mysql/server-key.pem; \\\nsocket.ssl_cert=/etc/ssl/mysql/server-cert.pem; \\\nsocket.ssl_ca=/etc/ssl/mysql/ca.pem; \\\nsocket.ssl_cipher=AES128-SHA256\"\n

    To avoid entering the options each time you start garbd, edit the options in the configuration file. To configure Galera Arbitrator on Ubuntu/Debian, edit the /etc/default/garb file. On RedHat or derivative distributions, the configuration is in the /etc/sysconfig/garb file.

    The configuration file should look like this after the installation and before you have added your parameters:

    # Copyright (C) 2013-2015 Codership Oy\n# This config file is to be sourced by garb service script.\n\n# REMOVE THIS AFTER CONFIGURATION\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\n# GALERA_NODES=\"\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\n# GALERA_GROUP=\"\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"\"\n

    Add the parameter information about the cluster. For this document, we use the cluster information from Configuring Percona XtraDB Cluster on Ubuntu.

    Note

    Please note that you need to remove the # REMOVE THIS AFTER CONFIGURATION line before you can start the service.

    # This config file is to be sourced by garb service script.\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\nGALERA_NODES=\"192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\nGALERA_GROUP=\"my_ubuntu_cluster\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"socket.ssl_cert=/etc/ssl/mysql/server-key.pem;socket./etc/ssl/mysql/server-key.pem\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"/var/log/garbd.log\"\n

    You can now start the Galera Arbitrator daemon (garbd) by running:

    On Debian or Ubuntu / On Red Hat Enterprise Linux or CentOS
    root@server:~# service garbd start\n
    Expected output
    [ ok ] Starting /usr/bin/garbd: :.\n

    Note

    On systems that run systemd as the default system and service manager, use systemctl instead of service to invoke the command. Currently, both are supported.

    root@server:~# systemctl start garb\n
    root@server:~# service garb start\n
    Expected output
    [ ok ] Starting /usr/bin/garbd: :.\n

    Additionally, you can check the arbitrator status by running:

    On Debian or Ubuntu / On Red Hat Enterprise Linux or CentOS
    root@server:~# service garbd status\n
    Expected output
    [ ok ] garb is running.\n
    root@server:~# service garb status\n
    Expected output
    [ ok ] garb is running.\n
    "},{"location":"garbd-howto.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"gcache-record-set-cache-difference.html","title":"Understand GCache and Record-Set cache","text":"

    In Percona XtraDB Cluster, there is a concept of GCache and Record-Set cache (which can also be called transaction write-set cache). The use of these two caches is often confusing if you are running long transactions, because both of them result in the creation of disk-level files. This section describes their main differences.

    "},{"location":"gcache-record-set-cache-difference.html#record-set-cache","title":"Record-Set cache","text":"

    When you run a long-running transaction on any particular node, it will try to append a key for each row that it tries to modify (the key is a unique identifier for the row {db,table,pk.columns}). This information is cached in out-write-set, which is then sent to the group for certification.

    Keys are cached in HeapStore (which has page-size=64K and total-size=4MB). If the transaction data-size outgrows this limit, then the storage is switched from Heap to Page (which has page-size=64MB and total-limit=free-space-on-disk).

    All these limits are non-configurable, but having a memory-page size greater than 4MB per transaction can cause things to stall due to memory pressure, so this limit is reasonable. This is another limitation to address when Galera supports large transactions.

    The same long-running transaction will also generate binlog data that is also appended to the out-write-set on commit (HeapStore->FileStore). This data can be significant, as it is a binlog image of rows inserted/updated/deleted by the transaction. The wsrep_max_ws_size variable controls the size of this part of the write-set. The threshold doesn\u2019t include the size allocated for caching keys and the header.
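
    A quick way to inspect or adjust this limit on a running node is through the standard variable interface. This is only a minimal sketch; the 1 GB value below is an illustration, not a recommendation:

    mysql> SHOW VARIABLES LIKE 'wsrep_max_ws_size';\nmysql> SET GLOBAL wsrep_max_ws_size=1073741824;\n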

    If FileStore is used, it creates a file on the disk (with names like xxxx_keys and xxxx_data) to store the cache data. These files are kept until the transaction is committed, so their lifetime is tied to the lifetime of the transaction.

    When the node is done with the transaction and is about to commit, it will generate the final-write-set using the two files (if the data size grew enough to use FileStore) plus HEADER, and will publish it to the cluster for certification.

    The native node executing the transaction will also act as a subscription node, and will receive its own write-set through the cluster publish mechanism. This time, the native node will try to cache the write-set in its GCache. How much data GCache retains is controlled by the GCache configuration.

    "},{"location":"gcache-record-set-cache-difference.html#gcache","title":"GCache","text":"

    GCache holds the write-set published on the cluster for replication. The lifetime of write-set in GCache is not transaction-linked.

    When a JOINER node needs an IST, it will be serviced through this GCache (if possible).

    GCache will also create the files to disk. You can read more about it here.
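
    As a rough sketch, GCache retention is typically sized through wsrep_provider_options in my.cnf; the 2G value below is only an illustration, not a recommendation:

    [mysqld]\n# Example only: allocate a 2 GB on-disk ring buffer for GCache\nwsrep_provider_options="gcache.size=2G"\n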

    At any given point in time, the native node has two copies of the write-set: one in GCache and another in Record-Set Cache.

    For example, let\u2019s say you INSERT/UPDATE 2 million rows in a table with the following schema.

    (int, char(100), char(100)) with pk (int, char(100))\n

    It will create write-set key/data files in the background similar to the following:

    -rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000000\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000001\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000002\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_keys.000000\n
    "},{"location":"gcache-record-set-cache-difference.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"gcache-write-set-cache-encryption.html","title":"GCache encryption and Write-Set cache encryption","text":"

    These features are tech preview. Before using these features in production, we recommend that you test restoring production from physical backups in your environment, and also use an alternative backup method for redundancy.

    "},{"location":"gcache-write-set-cache-encryption.html#gcache-and-write-set-cache-encryption","title":"GCache and Write-Set cache encryption","text":"

    Enabling this feature encrypts the Galera GCache and Write-Set cache files with a File Key.

    GCache has a RingBuffer on-disk file to manage write-sets. The keyring only stores the Master Key which is used to encrypt the File Key used by the RingBuffer file. The encrypted File Key is stored in the RingBuffer\u2019s preamble. The RingBuffer file of GCache is non-volatile, which means this file survives a restart. The File Key is not stored for GCache off-pages and Write-Set cache files.

    See also

    For more information, see Understanding GCache and Record-set Cache, and the Percona Database Performance Blog: All you need to know about GCache

    Sample preamble key-value pairs
    Version: 2\nGID: 3afaa71d-6665-11ed-98de-2aba4aabc65e\nsynced: 0\nenc_version: 1\nenc_encrypted: 1\nenc_mk_id: 3\nenc_mk_const_id: 3ad045a2-6665-11ed-a49d-cb7b9d88753f\nenc_mk_uuid: 3ad04c8e-6665-11ed-a947-c7e346da147f\nenc_fk_id: S4hRiibUje4v5GSQ7a+uuS6NBBX9+230nsPHeAXH43k=\nenc_crc: 279433530\n
    "},{"location":"gcache-write-set-cache-encryption.html#key-descriptions","title":"Key descriptions","text":"

    The following table describes the encryption keys defined in the preamble. All other keys in the preamble are not related to encryption.

    Key Description enc_version The encryption version enc_encrypted If the GCache is encrypted or not enc_mk_id A part of the Master Key ID. Rotating the Master Key increments the sequence number. enc_mk_const_id A part of the Master Key ID, a constant Universally unique identifier (UUID). This option remains constant for the duration of the galera.gcache file and simplifies matching the Master Key inside the keyring to the instance that generated the keys. Deleting the galera.gcache changes the value of this key. enc_mk_uuid The UUID generated for the first Master Key, or the UUID generated when Galera detects that the preamble is inconsistent, which causes a full GCache reset and requires a new Master Key. enc_fk_id The File Key ID encrypted with the Master Key. enc_crc The cyclic redundancy check (CRC) calculated from all encryption-related keys."},{"location":"gcache-write-set-cache-encryption.html#controlling-encryption","title":"Controlling encryption","text":"

    Encryption is controlled using the wsrep_provider_options.

    Variable name Default value Allowed values gcache.encryption off on/off gcache.encryption_cache_page_size 32KB 2-512 gcache.encryption_cache_size 16MB 2 - 512 allocator.disk_pages_encryption off on/off allocator.encryption_cache_page_size 32KB allocator.encryption_cache_size 16MB"},{"location":"gcache-write-set-cache-encryption.html#rotate-the-gcache-master-key","title":"Rotate the GCache Master Key","text":"

    GCache and Write-Set cache encryption uses either a keyring plugin or a keyring component. This plugin or component must be loaded.

    Store the keyring file outside the data directory when using a keyring plugin or a keyring component.

    mysql> ALTER INSTANCE ROTATE GCACHE MASTER KEY;\n
    "},{"location":"gcache-write-set-cache-encryption.html#variable-descriptions","title":"Variable descriptions","text":""},{"location":"gcache-write-set-cache-encryption.html#gcache-encryption","title":"GCache encryption","text":"

    The following sections describe the variables related to GCache encryption. All variables are read-only.

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption","title":"gcache.encryption","text":"

    Enable or disable GCache cache encryption.
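
    Because the variable is read-only at runtime, set it at server startup through wsrep_provider_options. This is a minimal sketch, assuming a keyring plugin or component is already loaded:

    [mysqld]\n# Example only: enable GCache and Write-Set cache encryption at startup\nwsrep_provider_options="gcache.encryption=on;allocator.disk_pages_encryption=on"\n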

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_page_size","title":"gcache.encryption_cache_page_size","text":"

    The size of the GCache encryption page. The value must be a multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
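
    To confirm the CPU page size on your system before choosing a value, you can use getconf; 4096 bytes (4kB) is the typical result on Linux:

    $ getconf PAGE_SIZE\n4096\n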

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_size","title":"gcache.encryption_cache_size","text":"

    Every encrypted file has an encryption.cache, which consists of pages. Use gcache.encryption_cache_size to configure the encryption.cache size.

    Configure the page size in the cache with gcache.encryption_cache_page_size.

    The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x gcache.encryption_cache_page_size.

    The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.
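
    For example, with the default gcache.encryption_cache_page_size of 32KB, the cap works out to 512 x 32KB = 16MB, which matches the default gcache.encryption_cache_size.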

    "},{"location":"gcache-write-set-cache-encryption.html#write-set-cache-encryption","title":"Write-Set cache encryption","text":"

    The following sections describe the variables related to Write-Set cache encryption. All variables are read-only.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatordisk_pages_encryption","title":"allocator.disk_pages_encryption","text":"

    Enable or disable the Write-Set cache encryption.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_page_size","title":"allocator.encryption_cache_page_size","text":"

    The size of the encryption cache for Write-Set pages. The value must be a multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_size","title":"allocator.encryption_cache_size","text":"

    Every Write-Set encrypted file has an encryption.cache, which consists of pages. Use allocator.encryption_cache_size to configure the size of the encryption.cache.

    Configure the page size in the cache with allocator.encryption_cache_page_size.

    The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x allocator.encryption_cache_page_size.

    The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.

    "},{"location":"gcache-write-set-cache-encryption.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"get-started-cluster.html","title":"Get started with Percona XtraDB Cluster","text":"

    This guide describes the procedure for setting up Percona XtraDB Cluster.

    Examples provided in this guide assume there are three Percona XtraDB Cluster nodes, as a common choice for trying out and testing:

    Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63

    Note

    Avoid creating a cluster with two or any even number of nodes, because this can lead to split brain.

    The following procedure provides an overview with links to details for every step:

    It is recommended to install from official Percona repositories:

    This includes the path to the Galera library, the location of other nodes, and so on.

    This must be the node with your main database, which will be used as the data source for the cluster.

    Data on new nodes joining the cluster is overwritten in order to synchronize it with the cluster.

    Although cluster initialization and node provisioning are performed automatically, it is a good idea to ensure that changes on one node actually replicate to other nodes.

    To complete the deployment of the cluster, a high-availability proxy is required. We recommend installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries.

    "},{"location":"get-started-cluster.html#percona-monitoring-and-management","title":"Percona Monitoring and Management","text":"

    Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.

    "},{"location":"get-started-cluster.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#frm","title":".frm","text":"

    For each table, the server will create a file with the .frm extension containing the table definition (for all storage engines).

    "},{"location":"glossary.html#acid","title":"ACID","text":"

    An acronym for Atomicity, Consistency, Isolation, Durability.

    "},{"location":"glossary.html#asynchronous-replication","title":"Asynchronous replication","text":"

    Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.

    "},{"location":"glossary.html#atomicity","title":"Atomicity","text":"

    This property guarantees that all updates of a transaction occur in the database or no updates occur. This guarantee also holds if the server exits unexpectedly. If a transaction fails, the entire operation rolls back.

    "},{"location":"glossary.html#cluster-replication","title":"Cluster replication","text":"

    Normal replication path for cluster members.\u00a0Can be encrypted (not by default) and unicast or multicast (unicast by default). Runs on tcp port 4567 by default.

    "},{"location":"glossary.html#consistency","title":"Consistency","text":"

    This property guarantees that each transaction that modifies the database takes it from one consistent state to another. Consistency is implied with Isolation.

    "},{"location":"glossary.html#datadir","title":"datadir","text":"

    The directory in which the database server stores its databases. Most Linux distributions use /var/lib/mysql by default.

    "},{"location":"glossary.html#donor-node","title":"donor node","text":"

    The node elected to provide a state transfer (SST or IST).

    "},{"location":"glossary.html#durability","title":"Durability","text":"

    Once a transaction is committed, it will remain so and is resistant to a server exit.

    "},{"location":"glossary.html#foreign-key","title":"Foreign Key","text":"

    A referential constraint between two tables. Example: A purchase order in the purchase_orders table must have been made by a customer that exists in the customers table.

    "},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"

    A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.

    "},{"location":"glossary.html#gtid","title":"GTID","text":"

    Global Transaction ID, in Percona XtraDB Cluster it consists of UUID and an ordinal sequence number which denotes the position of the change in the sequence.

    "},{"location":"glossary.html#haproxy","title":"HAProxy","text":"

    HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today\u2019s hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the net.

    "},{"location":"glossary.html#ibdata","title":"ibdata","text":"

    Default prefix for tablespace files, e.g., ibdata1 is a 10MB auto-extendable file that MySQL creates for the shared tablespace by default.

    "},{"location":"glossary.html#isolation","title":"Isolation","text":"

    The Isolation guarantee means that no transaction can interfere with another. When transactions access data in a session, they also lock that data to prevent other operations on that data by other transactions.

    "},{"location":"glossary.html#ist","title":"IST","text":"

    Incremental State Transfer. Instead of transferring the whole state snapshot, a node can catch up with the group by receiving only the missing write-sets, but only if those write-sets are still in the donor\u2019s write-set cache.

    "},{"location":"glossary.html#innodb","title":"InnoDB","text":"

    Storage Engine for MySQL and derivatives (Percona Server, MariaDB) originally written by Innobase Oy, since acquired by Oracle. It provides an ACID-compliant storage engine with foreign key support. InnoDB is the default storage engine on all platforms.

    "},{"location":"glossary.html#jenkins","title":"Jenkins","text":"

    Jenkins is a continuous integration system that we use to help ensure the continued quality of the software we produce. It helps us achieve the aims of: * no failed tests in trunk on any platform * aid developers in ensuring merge requests build and test on all platforms * no known performance regressions (without a damn good explanation)

    "},{"location":"glossary.html#joiner-node","title":"joiner node","text":"

    The node joining the cluster, usually a state transfer target.

    "},{"location":"glossary.html#lsn","title":"LSN","text":"

    Log Serial Number. A term used in relation to the InnoDB or XtraDB storage engines. There are System-level LSNs and Page-level LSNs. The System LSN represents the most recent LSN value assigned to page changes. Each InnoDB page contains a Page LSN which is the max LSN for that page for changes that reside on the disk. This LSN is updated when the page is flushed to disk.

    "},{"location":"glossary.html#mariadb","title":"MariaDB","text":"

    A fork of MySQL that is maintained primarily by Monty Program AB. It aims to add features and fix bugs while maintaining 100% backward compatibility with MySQL.

    "},{"location":"glossary.html#mycnf","title":"my.cnf","text":"

    This file refers to the database server\u2019s main configuration file. Most Linux distributions place it at /etc/mysql/my.cnf or /etc/my.cnf, but the location and name depend on the particular installation. Note that this is not the only way of configuring the server: some systems do not have a configuration file at all and rely on command-line options to start the server with its default values.

    "},{"location":"glossary.html#myisam","title":"MyISAM","text":"

    A MySQL Storage Engine that was the default until MySQL 5.5. It doesn\u2019t fully support transactions but in some scenarios may be faster than InnoDB. Each table is stored on disk in 3 files: .frm, .MYD, .MYI.

    "},{"location":"glossary.html#mysql","title":"MySQL","text":"

    An open source database that has spawned several distributions and forks. MySQL AB was the primary maintainer and distributor until bought by Sun Microsystems, which was then acquired by Oracle. As Oracle owns the MySQL trademark, the term MySQL is often used for the Oracle distribution of MySQL as distinct from the drop-in replacements such as MariaDB and Percona Server.

    "},{"location":"glossary.html#mysqlpxcinternalsession","title":"mysql.pxc.internal.session","text":"

    This user is used by the SST process to run the SQL commands needed for SST, such as creating the mysql.pxc.sst.user and assigning it the role mysql.pxc.sst.role.

    "},{"location":"glossary.html#mysqlpxcsstrole","title":"mysql.pxc.sst.role","text":"

    This role has all the privileges needed to run xtrabackup to create a backup on the donor node.

    "},{"location":"glossary.html#mysqlpxcsstuser","title":"mysql.pxc.sst.user","text":"

    This user (set up on the donor node) is assigned the mysql.pxc.sst.role and runs XtraBackup to make backups. The password for this user is randomly generated for each SST.

    "},{"location":"glossary.html#node","title":"node","text":"

    A cluster node \u2013 a single mysql instance that is in the cluster.

    "},{"location":"glossary.html#numa","title":"NUMA","text":"

    Non-Uniform Memory Access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The whole system may still operate as one unit, and all memory is basically accessible from everywhere, but at a potentially higher latency and lower performance.

    "},{"location":"glossary.html#percona-server-for-mysql","title":"Percona Server for MySQL","text":"

    Percona\u2019s branch of MySQL with performance and management improvements.

    "},{"location":"glossary.html#percona-xtradb-cluster","title":"Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster (PXC) is a high availability solution for MySQL.

    "},{"location":"glossary.html#primary-cluster","title":"primary cluster","text":"

    A cluster with quorum.\u00a0A non-primary cluster will not allow any operations and will give Unknown command errors on any clients attempting to read or write from the database.

    "},{"location":"glossary.html#quorum","title":"quorum","text":"

    A majority (> 50%) of nodes.\u00a0In the event of a network partition, only the cluster partition that retains a quorum (if any) will remain Primary by default.

    "},{"location":"glossary.html#split-brain","title":"split brain","text":"

    Split brain occurs when two parts of a computer cluster are disconnected, each part believing that the other is no longer running. This problem can lead to data inconsistency.

    "},{"location":"glossary.html#sst","title":"SST","text":"

    State Snapshot Transfer is the full copy of data from one node to another. It\u2019s used when a new node joins the cluster and has to transfer data from an existing node. Percona XtraDB Cluster uses the xtrabackup program for this purpose. xtrabackup does not require a READ LOCK for the entire syncing process - only for syncing the MySQL system tables and writing the information about the binlog, Galera, and replica state (the same as a regular Percona XtraBackup backup).

    The SST method is configured with the wsrep_sst_method variable.

    In PXC 8.0, the mysql_upgrade command is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.

    "},{"location":"glossary.html#storage-engine","title":"Storage Engine","text":"

    A Storage Engine is a piece of software that implements the details of data storage and retrieval for a database system. This term is primarily used within the MySQL ecosystem due to it being the first widely used relational database to have an abstraction layer around storage. It is analogous to a Virtual File System layer in an Operating System. A VFS layer allows an operating system to read and write multiple file systems (for example, FAT, NTFS, XFS, ext3) and a Storage Engine layer allows a database server to access tables stored in different engines (e.g. MyISAM, InnoDB).

    "},{"location":"glossary.html#tech-preview","title":"Tech preview","text":"

    A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not included in support by SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability (GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.

    "},{"location":"glossary.html#uuid","title":"UUID","text":"

    Universally Unique IDentifier, which uniquely identifies the state and the sequence of changes a node undergoes. The 128-bit UUID is a classic DCE UUID Version 1 (based on current time and MAC address). Although in theory this UUID could be generated based on the real MAC address, in Galera it is always (without exception) based on generated pseudo-random addresses (the \u201clocally administered\u201d bit in the node address within the UUID structure is always set to 1).

    "},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"haproxy-config.html","title":"HAProxy configuration file","text":""},{"location":"haproxy-config.html#example-of-haproxy-v1-configuration-file","title":"Example of HAProxy v1 configuration file","text":"HAProxy v1 configuration file
    global\n        log 127.0.0.1   local0\n        log 127.0.0.1   local1 notice\n        maxconn 4096\n        uid 99\n        gid 99\n        daemon\n        #debug\n        #quiet\n\ndefaults\n        log     global\n        mode    http\n        option  tcplog\n        option  dontlognull\n        retries 3\n        redispatch\n        maxconn 2000\n        contimeout      5000\n        clitimeout      50000\n        srvtimeout      50000\n        timeout connect 160000\n        timeout client 240000\n        timeout server 240000\n\nlisten mysql-cluster 0.0.0.0:3306\n    mode tcp\n    balance roundrobin\n    option mysql-check user root\n\n    server db01 10.4.29.100:3306 check\n    server db02 10.4.29.99:3306 check\n    server db03 10.4.29.98:3306 check\n

    Options set in the configuration file

    "},{"location":"haproxy-config.html#differences-between-version-1-configuration-file-and-version-2-configuration-file","title":"Differences between version 1 configuration file and version 2 configuration file","text":""},{"location":"haproxy-config.html#version-declaration","title":"Version Declaration:","text":"

    v1: The configuration file typically omits an explicit version declaration.

    v2: You must explicitly declare the version using the version keyword followed by the specific version number (e.g., version = 2.0).

    "},{"location":"haproxy-config.html#global-parameters","title":"Global Parameters:","text":"

    v1 and v2: Both versions utilize a global section to define global parameters, but certain parameters might have different names or functionalities across versions. Refer to the official documentation for specific changes.

    "},{"location":"haproxy-config.html#configuration-blocks","title":"Configuration Blocks:","text":"

    v1 and v2: Both versions use a similar indentation-based structure to define configuration blocks like frontend and backend. However, v2 introduces new blocks and keywords not present in v1 (e.g., process, http-errors).

    "},{"location":"haproxy-config.html#directives","title":"Directives:","text":"

    v1 and v2: While many directives remain consistent, some might have renamed keywords, altered syntax, or entirely new functionalities in v2. Consult the official documentation for a comprehensive comparison of directives and their usage between versions.

    "},{"location":"haproxy-config.html#comments","title":"Comments:","text":"

    v1 and v2: Both versions support comments using the # symbol. However, v2 introduces multi-line comments using /* \u2026 */ syntax, which v1 does not support.

    "},{"location":"haproxy-config.html#version-2-configuration-file","title":"Version 2 configuration file","text":"

    This simplified example is for load balancing. HAProxy offers numerous features for advanced configurations and fine-tuning.

    This example demonstrates a basic HAProxy v2 configuration file for load-balancing HTTP traffic across two backend servers.

    "},{"location":"haproxy-config.html#global-section","title":"Global Section","text":"

    The following settings are defined in the Global section:

    In the defaults block, we set the operating mode to TCP and define option tcpka

    global\n    maxconn 4000           # Maximum concurrent connections (adjust as needed)\n    user haproxy          # User to run HAProxy process\n    group haproxy          # Group to run HAProxy process\n    stats socket /var/run/haproxy.sock mode 666 level admin\n\ndefaults\n    mode tcp             # Set operating mode to TCP\n    #option tcpka\n
    "},{"location":"haproxy-config.html#frontend-section","title":"Frontend Section","text":"

    The following settings are defined in this section:

    frontend gr-prod-rw\n    bind 0.0.0.0:3307     \n    mode tcp\n    option contstats\n    option dontlognull\n    option clitcpka\n    default_backend gr-prod-rw\n

    You should add the following options:

    option Description contstats Provides continuous updates to the statistics of your connections. This option ensures that your traffic counters are updated in real-time, rather than only after a connection closes, giving you a more accurate and immediate view of your traffic patterns. dontlognull Does not log requests that don\u2019t transfer any data, like health check pings. clitcpka Configures TCP keepalive settings for client connections. This option allows the operating system to detect and terminate inactive connections, even if HAProxy isn\u2019t actively checking them."},{"location":"haproxy-config.html#backend-section","title":"Backend Section","text":"

    In this section, you specify the backend servers that will handle requests forwarded by the frontend. List each server with their respective IP addresses, ports, and weights.

    You set up a health check with check inter 10000. This option means that HAProxy performs a health check on each server every 10,000 milliseconds or 10 seconds. If a server fails a health check, it is temporarily removed from the pool until it passes subsequent checks, ensuring smooth and reliable client service. This proactive monitoring is crucial for maintaining an efficient and uninterrupted backend service.

    Set the number of checks required to mark the service as down or up. For example, if you set the rise parameter to 1, the server only needs to pass one health check before it is considered healthy again. The fall parameter is set to 2, requiring two consecutive failed health checks before the server is marked as unhealthy.

    The weight 50 backup setting is crucial for load balancing; this setting determines that this server only receives traffic if the primary servers are down. The weight of 50 indicates the relative amount of traffic the server will handle compared to other servers in the backup role. This method ensures the server can handle a significant load even in backup mode, but not as much as a primary server.

    The following example lists these options. Replace the server details (IP addresses, ports) with your backend server information. Adjust weights and other options according to your specific needs and server capabilities.

    backend servers\n    server server1 10.0.68.39:3307 check inter 10000 rise 1 fall 2 weight 50\n    server server2 10.0.68.74:3307 check inter 10000 rise 1 fall 2 weight 50 backup\n    server server3 10.0.68.20:3307 check inter 10000 rise 1 fall 2 weight 1 backup\n

    More information about how to configure HAProxy

    "},{"location":"haproxy-config.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"haproxy.html","title":"Load balancing with HAProxy","text":"

    The free and open source software, HAProxy, provides a high-availability load balancer and reverse proxy for TCP and HTTP-based applications. HAProxy can distribute requests across multiple servers, ensuring optimal performance and security.

    Here are the benefits of using HAProxy:

    "},{"location":"haproxy.html#create-a-user","title":"Create a user","text":"

    Access the server as a user with administrative privileges, either as root or by using sudo.

    Create a dedicated HAProxy user account for HAProxy to interact with your MySQL instance. A dedicated account enhances security.

    Make the following changes to the example CREATE USER command to replace the placeholders:

    Execute the following command:

    mysql> CREATE USER 'haproxy_user'@'haproxy_server_ip' IDENTIFIED BY 'strong_password';\n

    Grant the minimal set of privileges necessary for HAProxy to perform its health checks and monitoring.

    Execute the following:

    GRANT SELECT ON `mysql`.* TO 'haproxy_user'@'haproxy_server_ip';\nFLUSH PRIVILEGES;\n
    "},{"location":"haproxy.html#important-considerations","title":"Important Considerations","text":"

    If your MySQL servers are part of a replication cluster, create the user and grant privileges on each node to ensure consistency.

    For enhanced security, consider restricting the haproxy_user to specific databases or tables to monitor rather than granting permissions to the entire mysql database schema.
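
    For example, a more restrictive grant might look like the following sketch, where healthcheck is only a placeholder for whatever schema you want HAProxy to monitor:

    GRANT SELECT ON `healthcheck`.* TO 'haproxy_user'@'haproxy_server_ip';\nFLUSH PRIVILEGES;\n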

    "},{"location":"haproxy.html#install","title":"Install","text":"

    Add the HAProxy Enterprise repository to your system by following the instructions for your operating system.

    Install HAProxy on the node you intend to use for load balancing. You can install it using the package manager.

    On a Debian-derived distribution / On a Red Hat-derived distribution
    $ sudo apt update\n$ sudo apt install haproxy\n
    $ sudo yum update\n$ sudo yum install haproxy\n

    To start HAProxy, use the haproxy command. You may pass any number of configuration parameters on the command line. To use a configuration file, add the -f option.

    $ # Passing one configuration file\n$ sudo haproxy -f haproxy-1.cfg\n\n$ # Passing multiple configuration files\n$ sudo haproxy -f haproxy-1.cfg haproxy-2.cfg\n\n$ # Passing a directory\n$ sudo haproxy -f conf-dir\n

    You can pass the name of an existing configuration file or a directory. HAProxy includes all files with the .cfg extension in the supplied directory. Another way to pass multiple files is to use -f multiple times.
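
    For example, repeating the -f option:

    $ sudo haproxy -f haproxy-1.cfg -f haproxy-2.cfg\n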

    For more information, see HAProxy Management Guide

    For information, see HAProxy configuration file

    Important

    In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password. HAProxy does not support this authentication plugin. Create a mysql user using the mysql_native_password authentication plugin.

    mysql> CREATE USER 'haproxy_user'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\n

    See also

    MySQL Documentation: CREATE USER statement

    "},{"location":"haproxy.html#uninstall","title":"Uninstall","text":"

    To uninstall haproxy version 2 from a Linux system, follow the latest instructions.

    "},{"location":"haproxy.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"high-availability.html","title":"High availability","text":"

    In a basic setup with three nodes, if you take any of the nodes down, Percona XtraDB Cluster continues to function. At any point, you can shut down any node to perform maintenance or configuration changes.

    Even in unplanned situations (like a node crashing or if it becomes unavailable over the network), you can run queries on working nodes. If a node is down and the data has changed, there are two methods that the node may use when it joins the cluster again:

    Method What happens Description SST The joiner node receives a full copy of the database state from the donor node. You initiate a State Snapshot Transfer (SST) when adding a new node to a Galera cluster or when a node has fallen too far out of sync. IST Only incremental changes are copied from one node to another. This operation can be used when a node is down for a short period."},{"location":"high-availability.html#sst","title":"SST","text":"

    The primary benefit of SST is that it ensures data consistency across the cluster by providing a complete snapshot of the database at a point in time. However, SST can be resource-intensive and time-consuming if the operation transfers significant data. The donor node is locked during this transfer, impacting cluster performance.

    You initiate a state snapshot transfer (SST) when a node joins a cluster without the complete data set. This process involves transferring a full data copy from one node to another, ensuring that the joining node has an exact replica of the cluster\u2019s current state. Technically, SST is performed by halting the donor node\u2019s database operations momentarily to create a consistent snapshot of its data. The snapshot is then transferred over the network to the joining node, which applies it to its database system.

    Even without locking your cluster in a read-only state, SST may be intrusive and disrupt the regular operation of your services. IST avoids disruption. A node fetches only the changes that happened while that node was unavailable. IST uses a caching mechanism on nodes.

    "},{"location":"high-availability.html#ist","title":"IST","text":"

    Incremental State Transfer (IST) is a method that allows a node to request only the missing transactions from another node in the cluster. This process is beneficial because it reduces the amount of data that must be transferred, leading to faster recovery times for nodes that are out of sync. Additionally, IST minimizes the network bandwidth required for state transfer, which is particularly advantageous in environments with limited resources.

    However, there are drawbacks to consider. Reliance on another node\u2019s state means that an SST operation is necessary if no node in the cluster has the required information.

    When a node joins the cluster with a state slightly behind the current cluster state, IST does not require the joining node to copy the entire database state. Technically, IST transfers only the missing write-sets that the joining node needs to catch up with the cluster. The donor node, the node with the most recent state, sends the write-sets to the joining node through a dedicated channel. The joining node then applies these write-sets to its database state incrementally until it synchronizes with the cluster\u2019s current state. The donor node can experience a performance impact during an IST operation, typically less severe than during SST.

    "},{"location":"high-availability.html#monitor-the-node-state","title":"Monitor the node state","text":"

    The wsrep_local_state_comment variable returns the current state of a Galera node in the cluster, providing information about the node\u2019s role and status. The value can vary depending on the specific state of the Galera node, such as the following:

    You can monitor the current state of a node using the following command:

    mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';\n

    If the node is in Synced (6) state, that node is part of the cluster and can handle the traffic.
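
    On a healthy node, the output looks similar to the following sketch (the exact formatting depends on your client):

    +----------------------------+--------+\n| Variable_name              | Value  |\n+----------------------------+--------+\n| wsrep_local_state_comment  | Synced |\n+----------------------------+--------+\n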

    "},{"location":"high-availability.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"install-index.html","title":"Install Percona XtraDB Cluster","text":"

    Install Percona XtraDB Cluster on all hosts that you are planning to use as cluster nodes and ensure that you have root access to the MySQL server on each one.

    We gather Telemetry data in the Percona packages and Docker images.

    "},{"location":"install-index.html#ports-required","title":"Ports required","text":"

    Open specific ports for the Percona XtraDB Cluster to function correctly.
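
    As a sketch, on a system that uses firewalld you might open the standard ports (3306 for MySQL client connections, 4444 for SST, 4567 for Galera replication traffic, and 4568 for IST); adapt this to your firewall and security policy:

    $ sudo firewall-cmd --permanent --add-port=3306/tcp --add-port=4444/tcp --add-port=4567/tcp --add-port=4568/tcp\n$ sudo firewall-cmd --reload\n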

    "},{"location":"install-index.html#recommendations","title":"Recommendations","text":"

    We recommend installing Percona XtraDB Cluster from official Percona software repositories using the corresponding package manager for your system:

    Important

    After installing Percona XtraDB Cluster, the mysql service is stopped but enabled so that it may start the next time you restart the system. The service starts if the grastate.dat file exists and the value of seqno is not -1.
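
    To check the state file on a node, assuming the default data directory /var/lib/mysql, you can inspect it directly. The values below are only illustrative; a seqno of -1 typically indicates that the node did not shut down cleanly:

    $ sudo cat /var/lib/mysql/grastate.dat\n# GALERA saved state\nversion: 2.1\nuuid:    3afaa71d-6665-11ed-98de-2aba4aabc65e\nseqno:   -1\nsafe_to_bootstrap: 0\n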

    See also

    More information about the Galera state file is available in Index of files created by PXC: grastate.dat

    "},{"location":"install-index.html#installation-alternatives","title":"Installation alternatives","text":"

    Percona also provides a generic tarball with all required files and binaries for manual installation:

    If you want to build Percona XtraDB Cluster from source, see Compiling and Installing from Source Code.

    If you want to run Percona XtraDB Cluster using Docker, see Running Percona XtraDB Cluster in a Docker Container.

    "},{"location":"install-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"intro.html","title":"About Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster is a fully open-source high-availability solution for MySQL. It integrates Percona Server for MySQL and Percona XtraBackup with the Galera library to enable synchronous multi-source replication.

    A cluster consists of nodes, where each node contains the same set of data synchronized across nodes. The recommended configuration is to have at least 3 nodes, but you can have 2 nodes as well. Each node is a regular MySQL Server instance (for example, Percona Server). You can convert an existing MySQL Server instance to a node and run the cluster using this node as a base. You can also detach any node from the cluster and use it as a regular MySQL Server instance.

    "},{"location":"intro.html#benefits","title":"Benefits","text":""},{"location":"intro.html#drawbacks","title":"Drawbacks","text":""},{"location":"intro.html#components","title":"Components","text":"

    Percona XtraDB Cluster https://www.percona.com/software/mysql-database/percona-xtradb-cluster is based on Percona Server for MySQL running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup .

    "},{"location":"intro.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"limitation.html","title":"Percona XtraDB Cluster limitations","text":"

    The following limitations apply to Percona XtraDB Cluster:

    As of version 8.0.21, an INPLACE ALTER TABLE query takes an internal shared lock on the table during the execution of the query. The LOCK=NONE clause is no longer allowed for all of the INPLACE ALTER TABLE queries due to this change.
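
    For example, on a hypothetical table t1, the first statement below is accepted, while the second one is rejected because it explicitly requests LOCK=NONE with an INPLACE operation:

    mysql> ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INPLACE;\nmysql> ALTER TABLE t1 ADD COLUMN c3 INT, ALGORITHM=INPLACE, LOCK=NONE;\n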

    This change addresses a deadlock, which could cause a cluster node to hang in the following scenario:

    Do not use one or more dot characters (.) when defining the values for the following variables:

    MySQL and XtraBackup handle the value in different ways, and this difference causes unpredictable behavior.

    "},{"location":"limitation.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"load-balance-proxysql.html","title":"Load balance with ProxySQL","text":"

    ProxySQL is a high-performance SQL proxy. ProxySQL runs as a daemon watched by a monitoring process. The process monitors the daemon and restarts it in case of a crash to minimize downtime.

    The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.

    The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.

    See also

    ProxySQL Documentation

    ProxySQL v2 natively supports Percona XtraDB Cluster. With this version, the proxysql-admin tool does not require any custom scripts to keep track of Percona XtraDB Cluster status.

    Important

    In version 8.0, Percona XtraDB Cluster does not support ProxySQL v1.

    "},{"location":"load-balance-proxysql.html#manual-configuration","title":"Manual configuration","text":"

    This section describes how to configure ProxySQL with three Percona XtraDB Cluster nodes.

    Node Host Name IP address Node 1 pxc1 192.168.70.71 Node 2 pxc2 192.168.70.72 Node 3 pxc3 192.168.70.73 Node 4 proxysql 192.168.70.74

    ProxySQL can be configured either using the /etc/proxysql.cnf file or through the admin interface. The admin interface is recommended because this interface can dynamically change the configuration without restarting the proxy.

    To connect to the ProxySQL admin interface, you need a mysql client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally. For this tutorial, install the Percona XtraDB Cluster client on Node 4:

    Changes in the installation procedure

    In Percona XtraDB Cluster 8.0, ProxySQL is not installed automatically as a dependency of the percona-xtradb-cluster-client-8.0 package. You should install the proxysql package separately.

    Note

    ProxySQL has multiple versions in the version 2 series.

    root@proxysql:~# apt install percona-xtradb-cluster-client\nroot@proxysql:~# apt install proxysql2\n
    $ sudo yum install Percona-XtraDB-Cluster-client-80\n$ sudo yum install proxysql2\n

    To connect to the admin interface, use the credentials, host name and port specified in the global variables.

    Warning

    Do not use default credentials in production!

    The following example shows how to connect to the ProxySQL admin interface with default credentials:

    root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql@proxysql>\n

    To see the ProxySQL databases and tables use the following commands:

    mysql@proxysql> SHOW DATABASES;\n

    The following output shows the ProxySQL databases:

    Expected output
    +-----+---------+-------------------------------+\n| seq | name    | file                          |\n+-----+---------+-------------------------------+\n| 0   | main    |                               |\n| 2   | disk    | /var/lib/proxysql/proxysql.db |\n| 3   | stats   |                               |\n| 4   | monitor |                               |\n+-----+---------+-------------------------------+\n4 rows in set (0.00 sec)\n
    mysql@proxysql> SHOW TABLES;\n

    The following output shows the ProxySQL tables:

    Expected output
    +--------------------------------------+\n| tables                               |\n+--------------------------------------+\n| global_variables                     |\n| mysql_collations                     |\n| mysql_query_rules                    |\n| mysql_replication_hostgroups         |\n| mysql_servers                        |\n| mysql_users                          |\n| runtime_global_variables             |\n| runtime_mysql_query_rules            |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers                |\n| runtime_scheduler                    |\n| scheduler                            |\n+--------------------------------------+\n12 rows in set (0.00 sec)\n

    For more information about admin databases and tables, see Admin Tables

    Note

    The ProxySQL configuration can reside in the following areas:

    When you change a parameter, you change it in the MEMORY area. This behavior is by design and lets you test the changes before pushing them to production (RUNTIME) or saving them to disk.
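
    The general pattern for each configuration area is shown in the following sketch; the sections below apply it to variables, servers, and users:

    mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL SERVERS TO DISK;\n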

    "},{"location":"load-balance-proxysql.html#add-cluster-nodes-to-proxysql","title":"Add cluster nodes to ProxySQL","text":"

    To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.

    Note

    ProxySQL uses the concept of hostgroups to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. There are many ways you can configure hostgroups (for example, source and replicas, read and write load, etc.), and every node can be a member of multiple hostgroups.

    This example adds three Percona XtraDB Cluster nodes to the default hostgroup (0), which receives both write and read traffic:

    mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.71',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.72',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.73',3306);\n

    To see the nodes:

    mysql@proxysql> SELECT * FROM mysql_servers;\n

    The following output shows the list of nodes:

    Expected output
    +--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| 0            | 192.168.70.71 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n| 0            | 192.168.70.72 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n| 0            | 192.168.70.73 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n3 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#create-proxysql-monitoring-user","title":"Create ProxySQL monitoring user","text":"

    To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE privilege on any node in the cluster and configure the user in ProxySQL.

    The following example shows how to add a monitoring user on Node 2:

    mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\nmysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%';\n

    The following example shows how to configure this user on the ProxySQL node:

    mysql@proxysql> UPDATE global_variables SET variable_value='proxysql'\n              WHERE variable_name='mysql-monitor_username';\nmysql@proxysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\n              WHERE variable_name='mysql-monitor_password';\n

    To load this configuration at runtime, issue a LOAD command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue a SAVE command.

    mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql@proxysql> SAVE MYSQL VARIABLES TO DISK;\n

    To ensure that monitoring is enabled, check the monitoring logs:

    mysql@proxysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+----------------------+---------------+\n| hostname      | port | time_start_us    | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627                 | NULL          |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447                 | NULL          |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
    mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+-------------------+------------+\n| hostname      | port | time_start_us    | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948               | NULL       |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803               | NULL       |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711               | NULL       |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783               | NULL       |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631               | NULL       |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542               | NULL       |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n

    The previous examples show that ProxySQL is able to connect and ping the nodes you have added.

    To enable monitoring of these nodes, load them at runtime:

    mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\n
    "},{"location":"load-balance-proxysql.html#create-proxysql-client-user","title":"Create ProxySQL client user","text":"

    ProxySQL must have users that can access backend nodes to manage connections.

    To add a user, insert credentials into the mysql_users table:

    mysql@proxysql> INSERT INTO mysql_users (username,password) VALUES ('sbuser','sbpass');\n
    Expected output
    Query OK, 1 row affected (0.00 sec)\n

    Note

    ProxySQL currently doesn\u2019t encrypt passwords.

    Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):

    mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL USERS TO DISK;\n

    To confirm that the user has been set up correctly, try logging in with these credentials from the ProxySQL node:

    root@proxysql:~# mysql -u sbuser -psbpass -h 127.0.0.1 -P 6033\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n

    To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

    mysql@pxc3> CREATE USER 'sbuser'@'192.168.70.74' IDENTIFIED BY 'sbpass';\n
    Expected output
    Query OK, 0 rows affected (0.01 sec)\n
    mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.74';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#test-cluster-with-sysbench","title":"Test cluster with sysbench","text":"

    You can install sysbench from Percona software repositories:

    root@proxysql:~# apt install sysbench\n
    root@proxysql:~# yum install sysbench\n

    Note

    sysbench requires ProxySQL client user credentials that you created in Creating ProxySQL Client User.

    1. Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes:

      mysql@pxc1> CREATE DATABASE sbtest;\n
    2. Populate the table with data for the benchmark on the ProxySQL node:

      root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nprepare\n
    3. Run the benchmark on the ProxySQL node:

      root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nrun\n

    ProxySQL stores collected data in the stats schema:

    mysql@proxysql> SHOW TABLES FROM stats;\n
    Expected output
    +--------------------------------+\n| tables                         |\n+--------------------------------+\n| stats_mysql_query_rules        |\n| stats_mysql_commands_counters  |\n| stats_mysql_processlist        |\n| stats_mysql_connection_pool    |\n| stats_mysql_query_digest       |\n| stats_mysql_query_digest_reset |\n| stats_mysql_global             |\n+--------------------------------+\n

    For example, to see the number of commands that run on the cluster:

    mysql@proxysql> SELECT * FROM stats_mysql_commands_counters;\n
    Expected output
    +---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| Command                   | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| ALTER_TABLE               | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| ANALYZE_TABLE             | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| BEGIN                     | 2212625       | 3686      | 55        | 2162      | 899     | 569     | 1        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| CHANGE_REPLICATION_SOURCE | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| COMMIT                    | 21522591      | 3628      | 0         | 0         | 0       | 1765    | 1590     | 272      | 1         | 0         | 0      | 0      | 0       | 0        |\n| CREATE_DATABASE           | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| CREATE_INDEX              | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| DELETE                    | 2904130       | 3670      | 35        | 1546      | 1346    | 723     | 19       | 1        | 0         | 0         | 0      | 0      | 0       | 0        |\n| DESCRIBE                  | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| INSERT                    | 19531649      | 3660      | 39        | 1588      | 1292    | 723     | 12       | 2        | 0         | 1         | 0      | 1      | 2       | 0        |\n...\n| SELECT                    | 35049794      | 51605     | 501       | 26180     | 16606   | 8241    | 70       | 3        | 4         | 0         | 0      | 0      | 0       | 0        |\n| SELECT_FOR_UPDATE         | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| UPDATE                    | 6402302       | 7367      | 75        | 2503      | 3020    | 1743    | 23       | 3        | 0         | 0         | 0      | 0      | 0       | 0        |\n| USE                       | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| SHOW                      | 19691         | 2         | 0         | 0         | 0       | 0       | 1        | 1        | 0         | 0         | 0      | 0      | 0       | 0        |\n| UNKNOWN                   | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         
| 0      | 0      | 0       | 0        |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n45 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#automatic-failover","title":"Automatic failover","text":"

    ProxySQL will automatically detect if a node is not available or not synced with the cluster.

    You can check the status of all available nodes by running:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n

    The following output shows the status of all available nodes:

    Expected output
    +--------------+---------------+------+--------+\n| hostgroup_id | hostname      | port | status |\n+--------------+---------------+------+--------+\n| 0            | 192.168.70.71 | 3306 | ONLINE |\n| 0            | 192.168.70.72 | 3306 | ONLINE |\n| 0            | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n

    To test the problem detection and failover mechanism, shut down Node 3:

    root@pxc3:~# service mysql stop\n

    ProxySQL will detect that the node is down and update its status to OFFLINE_SOFT:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------------+\n| hostgroup_id | hostname      | port | status       |\n+--------------+---------------+------+--------------+\n| 0            | 192.168.70.71 | 3306 | ONLINE       |\n| 0            | 192.168.70.72 | 3306 | ONLINE       |\n| 0            | 192.168.70.73 | 3306 | OFFLINE_SOFT |\n+--------------+---------------+------+--------------+\n3 rows in set (0.00 sec)\n

    Now start Node 3 again:

    root@pxc3:~# service mysql start\n

    The script will detect the change and mark the node as ONLINE:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------+\n| hostgroup_id | hostname      | port | status |\n+--------------+---------------+------+--------+\n| 0            | 192.168.70.71 | 3306 | ONLINE |\n| 0            | 192.168.70.72 | 3306 | ONLINE |\n| 0            | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#assisted-maintenance-mode","title":"Assisted maintenance mode","text":"

    Usually, to take a node down for maintenance, you need to identify that node, update its status in ProxySQL to OFFLINE_SOFT, wait for ProxySQL to divert traffic from this node, and then initiate the shutdown or perform maintenance tasks. Percona XtraDB Cluster includes a special maintenance mode for nodes that enables you to take a node down without adjusting ProxySQL manually.

    Initiating pxc_maint_mode=MAINTENANCE does not disconnect existing connections. You must terminate these connections, either from your application code or by forcing a reconnection. After a reconnection, the new connections are routed around the PXC node in MAINTENANCE mode.

    Assisted maintenance mode is controlled via the pxc_maint_mode variable, which is monitored by ProxySQL and can be set to one of the following values: DISABLED (the default; the node operates normally), SHUTDOWN (set automatically when the node is being stopped), or MAINTENANCE (set manually to take the node out of rotation for maintenance).
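    For illustration, a minimal sketch of taking a node out of rotation and bringing it back; the pxc3 prompt follows the naming convention used on this page, and ProxySQL reports the node as OFFLINE_SOFT while the mode is active:

    mysql@pxc3> SET GLOBAL pxc_maint_mode='MAINTENANCE';\nmysql@pxc3> SET GLOBAL pxc_maint_mode='DISABLED';\n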

    Related sections

    Setting up a testing environment with ProxySQL

    "},{"location":"load-balance-proxysql.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"monitoring.html","title":"Monitor the cluster","text":"

    Each node can have a different view of the cluster. There is no centralized node to monitor. To track down the source of issues, you have to monitor each node independently.

    The values of many variables depend on the node that you query. For example, the replication traffic sent by the node you query differs from the writes it receives from all other nodes.

    Having data from all nodes can help you understand where flow messages are coming from, which node sends excessively large transactions, and so on.

    "},{"location":"monitoring.html#manual-monitoring","title":"Manual monitoring","text":"

    Manual cluster monitoring can be performed using myq-tools.

    "},{"location":"monitoring.html#alerting","title":"Alerting","text":"

    Besides standard MySQL alerting, you should use at least the following triggers specific to Percona XtraDB Cluster:

    wsrep_cluster_status != Primary

    wsrep_connected != ON

    wsrep_ready != ON
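    A quick way to check these values on a node, for example from an alerting script, is:

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';\nmysql> SHOW GLOBAL STATUS LIKE 'wsrep_connected';\nmysql> SHOW GLOBAL STATUS LIKE 'wsrep_ready';\n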

    For additional alerting, consider the following:

    "},{"location":"monitoring.html#metrics","title":"Metrics","text":"

    Cluster metrics collection for long-term graphing should be done at least for the following:

    wsrep_local_recv_queue and wsrep_local_send_queue

    wsrep_flow_control_sent and wsrep_flow_control_recv

    wsrep_replicated and wsrep_received

    wsrep_replicated_bytes and wsrep_received_bytes

    wsrep_local_cert_failures and wsrep_local_bf_aborts
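    As a sketch, a collection agent could pull all of these counters with a single statement on each node:

    mysql> SHOW GLOBAL STATUS WHERE Variable_name IN\n    ('wsrep_local_recv_queue','wsrep_local_send_queue',\n     'wsrep_flow_control_sent','wsrep_flow_control_recv',\n     'wsrep_replicated','wsrep_received',\n     'wsrep_replicated_bytes','wsrep_received_bytes',\n     'wsrep_local_cert_failures','wsrep_local_bf_aborts');\n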

    "},{"location":"monitoring.html#use-percona-monitoring-and-management","title":"Use Percona Monitoring and Management","text":"

    Percona Monitoring and Management includes two dashboards to monitor PXC:

    1. PXC/Galera Cluster Overview:

    2. PXC/Galera Graphs:

      These dashboards are available from the menu:

    Please refer to the official documentation for details on Percona Monitoring and Management installation and setup.

    "},{"location":"monitoring.html#other-reading","title":"Other reading","text":""},{"location":"monitoring.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"nbo.html","title":"Non-Blocking Operations (NBO) method for Online Scheme Upgrades (OSU)","text":"

    An Online Schema Upgrade can be a routine task in an environment with accelerated development and deployment. The task becomes more difficult as the data grows. An ALTER TABLE statement is a multi-step operation and must run until it is complete. Aborting the statement may be more expensive than letting it complete.

    The Non-Blocking Operations (NBO) method is similar to the TOI method (see Online Schema Upgrade for more information on the available types of online schema upgrades). Every replica processes the DDL statement at the same point in the cluster transaction stream, and other transactions cannot commit during the operation. The NBO method provides a more efficient locking strategy and avoids the TOI issue of long-running DDL statements blocking cluster updates.

    In the NBO method, the supported DDL statement acquires a metadata lock on the table or schema at a late stage of the operation. The lock_wait_timeout system variable defines the timeout, measured in seconds, to acquire metadata locks. The default value, 31536000 (one year), effectively means an infinite wait and should not be used with the NBO method.

    Attempting a State Snapshot Transfer (SST) fails during the NBO operation.

    To dynamically set the NBO mode in the client, run the following statement:

    SET SESSION wsrep_OSU_method='NBO';\n
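    As a sketch of how these pieces fit together, a session might bound the metadata-lock wait and then run a DDL statement under NBO; the table name t1, the index, the 90-second timeout, and the LOCK clause are illustrative assumptions rather than values taken from this page:

    mysql> SET SESSION lock_wait_timeout=90;\nmysql> SET SESSION wsrep_OSU_method='NBO';\nmysql> ALTER TABLE t1 ADD INDEX idx_c1 (c1), LOCK=SHARED;\n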
    "},{"location":"nbo.html#supported-ddl-statements","title":"Supported DDL statements","text":"

    The NBO method supports the following DDL statements:

    "},{"location":"nbo.html#limitations","title":"Limitations","text":"

    The NBO method does not support the following:

    See the Percona XtraDB Cluster 8.0.25-15.1 Release notes for the latest information.

    "},{"location":"nbo.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"online-schema-upgrade.html","title":"Online schema upgrade","text":"

    Database schemas must change as applications change. For a cluster, the schema upgrade must occur while the system is online. A synchronous cluster requires that all active nodes have the same data. Schema updates are performed using Data Definition Language (DDL) statements, such as ALTER TABLE <table_name> DROP COLUMN <column_name>.

    DDL statements are non-transactional: they use up-front locking to avoid deadlocks and cannot be rolled back. We recommend that you test your schema changes, especially if you must run an ALTER statement on large tables. Verify the backups before updating the schemas in the production environment. A failure in a schema change can cause your cluster to drop nodes and lose data.

    Percona XtraDB Cluster supports the following methods for making online schema changes:

    Method Name Reason for use Description TOI or Total Order Isolation Consistency is important. Other transactions are blocked while the cluster processes the DDL statements. This is the default method for the wsrep-OSU-method variable. The isolation of the DDL statement guarantees consistency. The DDL replication uses a Statement format. Each node processes the replicated DDL statement at the same position in the replication stream. All other writes must wait until the DDL statement is executed. While a DDL statement is running, any long-running transactions in progress and using the same resource receive a deadlock error at commit and are rolled back. The pt-online-schema-change tool in the Percona Toolkit can alter the table without using locks. There are limitations: only InnoDB tables can be altered, and the wsrep_OSU_method must be TOI. RSU or Rolling Schema Upgrade This method guarantees high availability during the schema upgrades. The node desynchronizes with the cluster and disables flow control during the execution of the DDL statement. The rest of the cluster is not affected. After the statement execution, the node applies delayed events and synchronizes with the cluster. Although the cluster is active, during the process some nodes have the newer schema and some nodes have the older schema. The RSU method is a manual operation. For this method, the gcache must be large enough to store the data for the duration of the DDL change. NBO or Non-Blocking Operation This method is used when consistency is important and uses a more efficient locking strategy. This method is similar to TOI. DDL operations acquire an exclusive metadata lock on the table or schema at a late stage of the operation when updating the table or schema definition. Attempting a State Snapshot Transfer (SST) fails during the NBO operation. This mode uses a more efficient locking strategy and avoids the TOI issue of long-running DDL statements blocking other updates in the cluster."},{"location":"online-schema-upgrade.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"performance-schema-instrumentation.html","title":"Perfomance Schema instrumentation","text":"

    To improve monitoring, Percona XtraDB Cluster has implemented an infrastructure to expose Galera instruments (mutexes, cond-variables, files, threads) as a part of PERFORMANCE_SCHEMA.

    Although mutexes and condition variables from wsrep were already part of PERFORMANCE_SCHEMA, threads were not.

    Mutexes, condition variables, threads, and files from the Galera library were also not part of PERFORMANCE_SCHEMA.

    You can see the complete list of available instruments by running:

    mysql> SELECT * FROM performance_schema.setup_instruments WHERE name LIKE '%galera%' OR name LIKE '%wsrep%';\n
    Expected output
    +----------------------------------------------------------+---------+-------+\n| NAME                                                     | ENABLED | TIMED |\n+----------------------------------------------------------+---------+-------+\n| wait/synch/mutex/sql/LOCK_wsrep_ready                    | NO      | NO    |\n| wait/synch/mutex/sql/LOCK_wsrep_sst                      | NO      | NO    |\n| wait/synch/mutex/sql/LOCK_wsrep_sst_init                 | NO      | NO    |\n...\n| stage/wsrep/wsrep: in rollback thread                    | NO      | NO    |\n| stage/wsrep/wsrep: aborter idle                          | NO      | NO    |\n| stage/wsrep/wsrep: aborter active                        | NO      | NO    |\n+----------------------------------------------------------+---------+-------+\n73 rows in set (0.00 sec)\n
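    The listing shows these instruments disabled by default. As a sketch, they can be enabled at runtime through the standard setup_instruments table; note that some instruments only take effect for objects created after they are enabled:

    mysql> UPDATE performance_schema.setup_instruments\n    SET ENABLED='YES', TIMED='YES'\n    WHERE NAME LIKE '%wsrep%' OR NAME LIKE '%galera%';\n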

    Some of the most important are:

    This feature exposes all the important mutexes and condition variables, along with the related locks, threads, and files.

    Besides exposing files, the feature also tracks read and write statistics (such as bytes read and written) for each file. These statistics are not exposed for Galera files because Galera uses mmap.

    Some threads are short-lived and created only when needed, especially for SST/IST purposes. They are also tracked, but appear in the PERFORMANCE_SCHEMA tables only if and when they are created.

    Stage information from Galera-specific functions, which the server updates to track the state of a running thread, is also visible in PERFORMANCE_SCHEMA.

    "},{"location":"performance-schema-instrumentation.html#what-is-not-exposed","title":"What is not exposed ?","text":"

    Galera uses custom data structures in some cases (such as STL structures). Mutexes that protect these structures, which are not part of the mainline Galera logic or do not affect the big picture, are not tracked. The same applies to threads that are specific to the gcomm library.

    Galera maintains a process vector inside each monitor for its internal graph creation. This process vector is 65K in size, and there are two such vectors per monitor (about 128K condition variables per monitor). With three monitors, that is 128K * 3 = 384K condition variables. These are not tracked to avoid exhausting PERFORMANCE_SCHEMA limits and sidelining the main, crucial information.

    "},{"location":"performance-schema-instrumentation.html#use-pxc_cluster_view","title":"Use pxc_cluster_view","text":"

    The pxc_cluster_view table provides a unified view of the cluster. The table is in the performance_schema database.

    DESCRIBE pxc_cluster_view;\n

    This table has the following definition:

    Expected output
    +-------------+--------------+------+-----+---------+-------+\n| Field       | Type         | Null | Key | Default | Extra |\n+-------------+--------------+------+-----+---------+-------+\n| HOST_NAME   | char(64)     | NO   |     | NULL    |       |\n| UUID        | char(36)     | NO   |     | NULL    |       |\n| STATUS      | char(64)     | NO   |     | NULL    |       |\n| LOCAL_INDEX | int unsigned | NO   |     | NULL    |       |\n| SEGMENT     | int unsigned | NO   |     | NULL    |       |\n+-------------+--------------+------+-----+---------+-------+\n5 rows in set (0.00 sec)\n

    To view the table, run the following query:

    SELECT * FROM pxc_cluster_view;\n
    Expected output
    +-----------+--------------------------------------+--------+-------------+---------+\n| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |\n+-----------+--------------------------------------+--------+-------------+---------+\n| node1     | 22b9d47e-c215-11eb-81f7-7ed65a9d253b | SYNCED |           0 |       0 |\n| node3     | 29c51cf5-c216-11eb-9101-1ba3a28e377a | SYNCED |           1 |       0 |\n| node2     | 982cdb03-c215-11eb-9865-0ae076a59c5c | SYNCED |           2 |       0 |\n+-----------+--------------------------------------+--------+-------------+---------+\n3 rows in set (0.00 sec)\n
    "},{"location":"performance-schema-instrumentation.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"proxysql-v2.html","title":"ProxySQL admin utilities","text":"

    The ProxySQL and ProxySQL admin utilities documentation provides information on installing and running ProxySQL 1.x.x or ProxySQL 2.x.x with the following ProxySQL admin utilities:

    "},{"location":"proxysql-v2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"quickstart-overview.html","title":"Quickstart Guide for Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve performance and scalability of your database environments supporting your critical business applications in the most demanding public, private, and hybrid cloud environments.

    "},{"location":"quickstart-overview.html#install-percona-xtradb-cluster","title":"Install Percona XtraDB Cluster","text":"

    You can install Percona XtraDB Cluster using different methods.

    "},{"location":"quickstart-overview.html#for-superior-and-optimized-performance","title":"For superior and optimized performance","text":"

    Percona Server for MySQL (PS) is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior and optimized performance, greater scalability and availability, enhanced backups, increased visibility, and instrumentation. Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads.

    Install Percona Server for MySQL.

    "},{"location":"quickstart-overview.html#for-backups-and-restores","title":"For backups and restores","text":"

    Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL\u00ae that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.

    Install Percona XtraBackup

    "},{"location":"quickstart-overview.html#for-monitoring-and-management","title":"For Monitoring and Management","text":"

    Percona Monitoring and Management (PMM) monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.

    Install PMM and connect your MySQL instances to it.

    "},{"location":"quickstart-overview.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"restarting-nodes.html","title":"Restart the cluster nodes","text":"

    To restart a cluster node, shut down MySQL and restart it. The node should leave the cluster (and the total vote count for quorum should decrement).

    When it rejoins, the node should synchronize using IST. If the set of changes needed for IST is not found in the gcache file on any other node in the entire cluster, then SST is performed instead. Therefore, restarting cluster nodes for rolling configuration changes or software upgrades is rather simple from the cluster\u2019s perspective.
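    To confirm that a restarted node has rejoined and caught up, one quick check is to look at the node state and the cluster size; wsrep_local_state_comment reports Synced once the node is fully caught up:

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';\nmysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';\n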

    Note

    If you restart a node with an invalid configuration change that prevents MySQL from loading, Galera will drop the node\u2019s state and force an SST for that node.

    Note

    If MySQL fails for any reason, it does not remove its PID file (which is, by design, deleted only on a clean shutdown). The server will not restart if an existing PID file is present. So if MySQL fails for any reason and the error log contains the relevant records, remove the PID file manually before restarting.

    "},{"location":"restarting-nodes.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"secure-network.html","title":"Secure the network","text":"

    By default, anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. This could potentially let them query your data or get a complete copy of it.

    In general, it is a good idea to disable all remote connections to Percona XtraDB Cluster nodes. If you require clients or nodes from outside of your network to connect, you can set up a VPN (virtual private network) for this purpose.

    "},{"location":"secure-network.html#firewall-configuration","title":"Firewall configuration","text":"

    A firewall can let you filter Percona XtraDB Cluster traffic based on the clients and nodes that you trust.

    By default, Percona XtraDB Cluster nodes use the following ports: 3306 for MySQL client connections, 4444 for State Snapshot Transfer (SST), 4567 for Galera replication traffic, and 4568 for Incremental State Transfer (IST).

    Ideally you want to make sure that these ports on each node are accessed only from trusted IP addresses. You can implement packet filtering using iptables, firewalld, pf, or any other firewall of your choice.

    "},{"location":"secure-network.html#use-iptables","title":"Use iptables","text":"

    To restrict access to Percona XtraDB Cluster ports using iptables, you need to append new rules to the INPUT chain on the filter table. In the following example, the trusted range of IP addresses is 192.168.0.1/24. It is assumed that only Percona XtraDB Cluster nodes and clients will connect from these IPs. To enable packet filtering, run the commands as root on each Percona XtraDB Cluster node.

    # iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 3306 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4444 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4567 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4568 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol udp --match udp --dport 4567 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n

    Note

    The last one opens port 4567 for multicast replication over UDP.

    If the trusted IPs are not in sequence, you need to run these commands for each address on each node. In this case, you can consider opening all ports between trusted hosts. This is slightly less secure, but it reduces the number of commands. For example, if you have three Percona XtraDB Cluster nodes, you can run the following commands on each one:

    # iptables --append INPUT --protocol tcp \\\n    --source 64.57.102.34 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n    --source 193.166.3.20  --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n    --source 193.125.4.10  --jump ACCEPT\n

    Running the previous commands will allow TCP connections from the IP addresses of the other Percona XtraDB Cluster nodes.

    Note

    The changes that you make in iptables are not persistent unless you save the packet filtering state:

    # service iptables save\n

    For distributions that use systemd, you need to save the current packet filtering rules to the path where iptables reads from when it starts. This path can vary by distribution, but it is usually in the /etc directory. For example:

    Use iptables-save to update the file:

    # iptables-save > /etc/sysconfig/iptables\n
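    If you use firewalld instead of raw iptables, a roughly equivalent rule set might look like the following; the commands apply to the default zone, and restricting by source address would additionally require rich rules or a dedicated zone:

    # firewall-cmd --permanent --add-port=3306/tcp\n# firewall-cmd --permanent --add-port=4444/tcp\n# firewall-cmd --permanent --add-port=4567/tcp\n# firewall-cmd --permanent --add-port=4568/tcp\n# firewall-cmd --permanent --add-port=4567/udp\n# firewall-cmd --reload\n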
    "},{"location":"secure-network.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"security-index.html","title":"Security basics","text":"

    By default, Percona XtraDB Cluster does not provide any protection for stored data. There are several considerations to take into account for securing Percona XtraDB Cluster:

    Anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. You should consider restricting access using VPN and filter traffic on ports used by Percona XtraDB Cluster.

    Unencrypted traffic can potentially be viewed by anyone monitoring your network. In Percona XtraDB Cluster 8.0 traffic encryption is enabled by default.

    Percona XtraDB Cluster supports tablespace encryption to provide at-rest encryption for physical tablespace data files.

    For more information, see the following blog post:

      * [MySQL Data at Rest Encryption](https://www.percona.com/blog/2016/04/08/mysql-data-at-rest-encryption/)\n
    "},{"location":"security-index.html#security-modules","title":"Security modules","text":"

    Most modern distributions include special security modules that control access to resources for users and applications. By default, these modules will most likely constrain communication between Percona XtraDB Cluster nodes.

    The easiest solution is to disable or remove such programs; however, this is not recommended for production environments. Instead, create the necessary security policies for Percona XtraDB Cluster.

    "},{"location":"security-index.html#selinux","title":"SELinux","text":"

    SELinux is usually enabled by default in Red Hat Enterprise Linux and derivatives (including CentOS). SELinux helps protect the user\u2019s home directory data and provides the following:

    To help with troubleshooting, during installation and configuration, you can set the mode to permissive:

    $ setenforce 0\n

    Note

    This action changes the mode only at runtime.

    See also

    For more information, see Enabling SELinux

    "},{"location":"security-index.html#apparmor","title":"AppArmor","text":"

    AppArmor is included in Debian and Ubuntu. Percona XtraDB Cluster contains several AppArmor profiles, which allow for easier maintenance. To help with troubleshooting during installation and configuration, you can set the mode to complain for mysqld.
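    For example, assuming the profile shipped with the packages covers /usr/sbin/mysqld and the apparmor-utils package is installed, the mode can be switched like this:

    $ sudo aa-complain /usr/sbin/mysqld\n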

    See also

    For more information, see Enabling AppArmor

    "},{"location":"security-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"selinux.html","title":"Enable SELinux","text":"

    SELinux helps protect the user\u2019s home directory data. SELinux provides the following:

    For more information, see Percona Server and SELinux

    Red Hat and CentOS distribute a policy module to extend the SELinux policy module for mysqld. We provide the following:

    "},{"location":"selinux.html#modify-policies","title":"Modify policies","text":"

    Modifications described in Percona Server and SELinux can also be applied for Percona XtraDB Cluster.

    To adjust PXC-specific configurations, especially SST/IST ports, use the following procedures as root:

    To enable port 14567 instead of the default port 4567:

    Find the tag associated with the 4567 port:

    $ semanage port -l | grep 4567\ntram_port_t tcp 4567\n

    Run a command to find which rules grant mysqld access to the port:

    $ sesearch -A -s mysqld_t -t tram_port_t -c tcp_socket\nFound 5 semantic av rules:\n    allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n    allow mysqld_t tram_port_t : tcp_socket { name_bind name_connect } ;\n    allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n    allow mysqld_t port_type : tcp_socket name_connect ;\n    allow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n

    You could tag port 14567 with the tram_port_t tag, but this tag may cause issues because port 14567 is not a TRAM port. Use the general mysqld_port_t tag to add ports. For example, the following command adds port 14567 to the policy module with the mysqld_port_t tag.

    $ semanage port -a -t mysqld_port_t -p tcp 14567\n

    You can verify the addition with the following command:

    $ semanage port -l | grep 14567\nmysqld_port_t                  tcp      4568, 14567, 1186, 3306, 63132-63164\n

    To see the tag associated with the 4444 port, run the following command:

    $ semanage port -l | grep 4444\nkerberos_port_t                tcp      88, 750, 4444\nkerberos_port_t                udp      88, 750, 4444\n

    To find the rules associated with kerberos_port_t, run the following:

    $ sesearch -A -s mysqld_t -t kerberos_port_t -c tcp_socket\nFound 9 semantic av rules:\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t rpc_port_type : tcp_socket name_bind ;\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t port_type : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket { recv_msg send_msg } ;\nallow nsswitch_domain reserved_port_type : tcp_socket name_connect ;\nallow mysqld_t reserved_port_type : tcp_socket name_connect ;\nallow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n

    If you require port 14444 added, use the same method used to add port 14567.

    If you must use a port that is already tagged, you can use either of the following ways:

    "},{"location":"selinux.html#work-with-pxc_encrypt_cluster_traffic","title":"Work with pxc_encrypt_cluster_traffic","text":"

    By default, the pxc_encrypt_cluster_traffic variable is ON, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory because that location is overwritten during the SST process.

    Review How to set up the certificates. When SELinux is enabled, mysqld must have access to these certificates. The following items must be checked or considered:

    $ restorecon -v /etc/mysql/certs/*\n
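    To inspect the SELinux labels that end up on the certificate files (the path follows the restorecon example above), list them with the -Z option:

    $ ls -Z /etc/mysql/certs/\n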
    "},{"location":"selinux.html#enable-enforcing-mode-for-pxc","title":"Enable enforcing mode for PXC","text":"

    By default, the mysqld process runs in permissive mode even if SELinux runs in enforcing mode:

    $ semodule -l | grep permissive\npermissive_mysqld_t\npermissivedomains\n

    After ensuring that the system journal does not list any issues, the administrator can remove the permissive mode for mysqld_t:

    $ semanage permissive -d mysqld_t\n

    See also

    MariaDB 10.2 Galera Cluster with SELinux-enabled on CentOS 7

    "},{"location":"selinux.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"set-up-3nodes-ec2.html","title":"How to set up a three-node cluster in EC2 environment","text":"

    This manual assumes you are running three EC2 instances with Red Hat Enterprise Linux 7 64-bit.

    "},{"location":"set-up-3nodes-ec2.html#recommendations-on-launching-ec2-instances","title":"Recommendations on launching EC2 instances","text":"
    1. Select instance types that support Enhanced Networking functionality. Good network performance is critical for the synchronous replication used in Percona XtraDB Cluster.

    2. When adding instance storage volumes, choose the ones with good I/O performance:

      • instances with NVMe are preferred

      • GP2 SSD volumes are preferred to GP3 SSD volume types due to I/O latency

      • oversized GP2 SSD volumes are preferred to IO1 volume types due to cost

    3. Attach Elastic network interfaces with static IPs, or assign Elastic IP addresses to your instances. This way, IP addresses are preserved on instances in case of a reboot or restart. This is required because each Percona XtraDB Cluster member includes the wsrep_cluster_address option in its configuration, which points to the other cluster members.

    4. Launch instances in different availability zones to avoid cluster downtime in case one of the zones experiences power loss or network connectivity issues.

      See also

      Amazon EC2 Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

    To set up Percona XtraDB Cluster:

    1. Remove Percona XtraDB Cluster and Percona Server for MySQL packages for older versions:

      • Percona XtraDB Cluster 5.6, 5.7

      • Percona Server for MySQL 5.6, 5.7

    2. Install Percona XtraDB Cluster as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS.

    3. Create data directories:

      $ mkdir -p /mnt/data\n$ mysqld --initialize-insecure --datadir=/mnt/data --user=mysql\n
    4. Stop the firewall service:

      $ service iptables stop\n

      Note

      Alternatively, you can keep the firewall running but open ports 3306, 4444, 4567, and 4568. For example, to open port 4567 for traffic from the 192.168.0.1/24 range:

      $ iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT\n
    5. Create /etc/my.cnf files:

      Contents of the configuration file on the first node:

      [mysqld]\ndatadir=/mnt/data\nuser=mysql\n\nbinlog_format=ROW\n\nwsrep_provider=/usr/lib64/libgalera_smm.so\nwsrep_cluster_address=gcomm://10.93.46.58,10.93.46.59,10.93.46.60\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node1\n\ninnodb_autoinc_lock_mode=2\n

      For the second and third nodes change the following lines:

      wsrep_node_name=node2\n\nwsrep_node_name=node3\n
    6. Start and bootstrap Percona XtraDB Cluster on the first node:

      [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
      Expected output
      2014-01-30 11:52:35 23280 [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n
    7. Start the second and third nodes:

      $ sudo systemctl start mysql\n
      Expected output
      ... [Note] WSREP: Flow-control interval: [28, 28]\n... [Note] WSREP: Restored state OPEN -> JOINED (2)\n... [Note] WSREP: Member 2 (percona1) synced with group.\n... [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n... [Note] WSREP: New cluster view: global state: 4827a206-876b-11e3-911c-3e6a77d54953:2, view# 7: Primary, number of nodes: 3, my index: 2, protocol version 2\n... [Note] WSREP: SST complete, seqno: 2\n... [Note] Plugin 'FEDERATED' is disabled.\n... [Note] InnoDB: The InnoDB memory heap is disabled\n... [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins\n... [Note] InnoDB: Compressed tables use zlib 1.2.3\n... [Note] InnoDB: Using Linux native AIO\n... [Note] InnoDB: Not using CPU crc32 instructions\n... [Note] InnoDB: Initializing buffer pool, size = 128.0M\n... [Note] InnoDB: Completed initialization of buffer pool\n... [Note] InnoDB: Highest supported file format is Barracuda.\n... [Note] InnoDB: 128 rollback segment(s) are active.\n... [Note] InnoDB: Waiting for purge to start\n... [Note] InnoDB:  Percona XtraDB (http://www.percona.com) ... started; log sequence number 1626341\n... [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.\n... [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.\n... [Note] Server hostname (bind-address): '*'; port: 3306\n... [Note] IPv6 is available.\n... [Note]   - '::' resolves to '::';\n... [Note] Server socket created on IP: '::'.\n... [Note] Event Scheduler: Loaded 0 events\n... [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n... [Note] WSREP: inited wsrep sidno 1\n... [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.\n... [Note] WSREP: REPL Protocols: 5 (3, 1)\n... [Note] WSREP: Assign initial position for certification: 2, protocol version: 3\n... [Note] WSREP: Service thread queue flushed.\n... [Note] WSREP: Synchronized with group, ready for connections\n

      When all nodes are in SYNCED state, your cluster is ready.

    8. You can try connecting to MySQL on any node and create a database:

      $ mysql -uroot\n> CREATE DATABASE hello_tom;\n
      The new database will be propagated to all nodes.

    "},{"location":"set-up-3nodes-ec2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"singlebox.html","title":"How to set up a three-node cluster on a single box","text":"

    This tutorial describes how to set up a 3-node cluster on a single physical box.

    For the purposes of this tutorial, assume the following:

    To set up the cluster:

    1. Create three MySQL configuration files for the corresponding nodes:

      • /etc/my.4000.cnf
      [mysqld]\nport = 4000\nsocket=/tmp/mysql.4000.sock\ndatadir=/data/bench/d1\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:5030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:4020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:4030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node4000\ninnodb_autoinc_lock_mode=2\n
      • /etc/my.5000.cnf
      [mysqld]\nport = 5000\nsocket=/tmp/mysql.5000.sock\ndatadir=/data/bench/d2\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:5020\nwsrep_node_incoming_address=192.168.2.21\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:5030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node5000\ninnodb_autoinc_lock_mode=2\n
      • /etc/my.6000.cnf
      [mysqld]\nport = 6000\nsocket=/tmp/mysql.6000.sock\ndatadir=/data/bench/d3\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:5030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:6020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:6030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node6000\ninnodb_autoinc_lock_mode=2\n
    2. Create three data directories for the nodes:

      • /data/bench/d1

      • /data/bench/d2

      • /data/bench/d3

    3. Start the first node using the following command (from the Percona XtraDB Cluster install directory):

      $ bin/mysqld_safe --defaults-file=/etc/my.4000.cnf --wsrep-new-cluster\n

      If the node starts correctly, you should see the following output:

      Expected output
      111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)\n111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1\n

      To check the ports, run the following command:

      $ netstat -anp | grep mysqld\ntcp        0      0 192.168.2.21:4030           0.0.0.0:*                   LISTEN      21895/mysqld\ntcp        0      0 0.0.0.0:4000                0.0.0.0:*                   LISTEN      21895/mysqld\n
    4. Start the second and third nodes:

      $ bin/mysqld_safe --defaults-file=/etc/my.5000.cnf\n$ bin/mysqld_safe --defaults-file=/etc/my.6000.cnf\n

      If the nodes start and join the cluster successfully, you should see the following output:

      111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)\n111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections\n

      To check the cluster size, run the following command:

      $ mysql -h127.0.0.1 -P6000 -e \"show global status like 'wsrep_cluster_size';\"\n
      Expected output
      +--------------------+-------+\n| Variable_name      | Value |\n+--------------------+-------+\n| wsrep_cluster_size | 3     |\n+--------------------+-------+\n

      After that you can connect to any node and perform queries, which will be automatically synchronized with other nodes. For example, to create a database on the second node, you can run the following command:

      $ mysql -h127.0.0.1 -P5000 -e \"CREATE DATABASE hello_peter\"\n
    "},{"location":"singlebox.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"state-snapshot-transfer.html","title":"State snapshot transfer","text":"

    State Snapshot Transfer (SST) is a full data copy from one node (donor) to the joining node (joiner). It\u2019s used when a new node joins the cluster. In order to be synchronized with the cluster, the new node has to receive data from a node that is already part of the cluster.

    Percona XtraDB Cluster performs SST using xtrabackup.

    Xtrabackup SST uses backup locks, which means that the Galera provider is not paused, unlike with earlier SST methods. The SST method can be configured using the wsrep_sst_method variable.

    Note

    If the gcs.sync_donor variable is set to Yes (default is No), the whole cluster will get blocked if the donor is blocked by SST.

    "},{"location":"state-snapshot-transfer.html#choose-the-sst-donor","title":"Choose the SST Donor","text":"

    If there are no nodes available that can safely perform incremental state transfer (IST), the cluster defaults to SST.

    If there are nodes available that can perform IST, the cluster prefers a local node over remote nodes to serve as the donor.

    If there are no local nodes available that can perform IST, the cluster chooses a remote node to serve as the donor.

    If there are several local and remote nodes that can perform IST, the cluster chooses the node with the highest seqno to serve as the donor.

    "},{"location":"state-snapshot-transfer.html#use-percona-xtrabackup","title":"Use Percona Xtrabackup","text":"

    The default SST method is xtrabackup-v2 which uses Percona XtraBackup. This is the least blocking method that leverages backup locks. XtraBackup is run locally on the donor node.

    The datadir needs to be specified in the server configuration file my.cnf; otherwise, the transfer process fails.
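    A minimal sketch of the relevant my.cnf settings; the data directory path shown is the common default and is an assumption here:

    [mysqld]\ndatadir=/var/lib/mysql\nwsrep_sst_method=xtrabackup-v2\n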

    Detailed information on this method is provided in Percona XtraBackup SST Configuration documentation.

    "},{"location":"state-snapshot-transfer.html#sst-for-tables-with-tablespaces-that-are-not-in-the-data-directory","title":"SST for tables with tablespaces that are not in the data directory","text":"

    For example:

    CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';\n
    "},{"location":"state-snapshot-transfer.html#sst-using-percona-xtrabackup","title":"SST using Percona XtraBackup","text":"

    XtraBackup will restore the table to the same location on the joiner node. If the target directory does not exist, it will be created. If the target file already exists, an error will be returned, because XtraBackup cannot clear tablespaces not in the data directory.

    "},{"location":"state-snapshot-transfer.html#other-reading","title":"Other reading","text":""},{"location":"state-snapshot-transfer.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"strict-mode.html","title":"Percona XtraDB Cluster strict mode","text":"

    The Percona XtraDB Cluster (PXC) Strict Mode is designed to avoid the use of tech preview features and unsupported features in PXC. This mode performs a number of validations at startup and during runtime.

    Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:

    By default, PXC Strict Mode is set to ENFORCING, except if the node is acting as a standalone server or the node is bootstrapping, then PXC Strict Mode defaults to DISABLED.

    It is recommended to keep PXC Strict Mode set to ENFORCING, because in this case whenever Percona XtraDB Cluster encounters a tech preview feature or an unsupported operation, the server will deny it. This will force you to re-evaluate your Percona XtraDB Cluster configuration without risking the consistency of your data.

    If you are planning to set PXC Strict Mode to anything other than ENFORCING, you should be aware of the limitations and effects that this may have on data integrity. For more information, see Validations.

    To set the mode, use the pxc_strict_mode variable in the configuration file or the --pxc-strict-mode option during mysqld startup.
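    For example, to pin the recommended mode in the configuration file:

    [mysqld]\npxc_strict_mode=ENFORCING\n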

    Note

    It is better to start the server with the necessary mode (the default ENFORCING is highly recommended). However, you can dynamically change it during runtime. For example, to set PXC Strict Mode to PERMISSIVE, run the following command:

    mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE;\n

    Note

    To further ensure data consistency, it is important to have all nodes in the cluster running with the same configuration, including the value of pxc_strict_mode variable.

    "},{"location":"strict-mode.html#validations","title":"Validations","text":"

    PXC Strict Mode validations are designed to ensure optimal operation for common cluster setups that do not require tech preview features and do not rely on operations not supported by Percona XtraDB Cluster.

    Warning

    If an unsupported operation is performed on a node with pxc_strict_mode set to DISABLED or PERMISSIVE, it will not be validated on nodes where it is replicated to, even if the destination node has pxc_strict_mode set to ENFORCING.

    This section describes the purpose and consequences of each validation.

    "},{"location":"strict-mode.html#group-replication","title":"Group replication","text":"

Group replication is a feature of MySQL that provides distributed state machine replication with strong coordination between servers. It is implemented as a plugin which, if activated, may conflict with PXC. Group replication cannot be activated to run alongside PXC. However, you can migrate to PXC from an environment that uses group replication.

For strict mode to work correctly, make sure that the group replication plugin is not active. If pxc_strict_mode is set to ENFORCING or MASTER, the server stops with an error:

    Error message with pxc_strict_mode set to ENFORCING or MASTER

    The error message
    Group replication cannot be used with PXC in strict mode.\n

If pxc_strict_mode is set to DISABLED, you can use group replication at your own risk. Setting pxc_strict_mode to PERMISSIVE results in a warning.

    Warning message with pxc_strict_mode set to PERMISSIVE

    Warning message
    Using group replication with PXC is only supported for migration. Please\nmake sure that group replication is turned off once all data is migrated to PXC.\n
    "},{"location":"strict-mode.html#storage-engine","title":"Storage engine","text":"

    Percona XtraDB Cluster currently supports replication only for tables that use a transactional storage engine (XtraDB or InnoDB). To ensure data consistency, the following statements should not be allowed for tables that use a non-transactional storage engine (MyISAM, MEMORY, CSV, and others):

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on an unsupported table.

    ENFORCING or MASTER

    At startup, no validation is performed.

    At runtime, any undesirable operation performed on an unsupported table is denied and an error is logged.

    Note

    Unsupported tables can be converted to use a supported storage engine.
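
For example, an unsupported table could be converted to InnoDB as follows (a minimal sketch; the table name t1 is hypothetical):

mysql> ALTER TABLE t1 ENGINE=InnoDB;\n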

    "},{"location":"strict-mode.html#myisam-replication","title":"MyISAM replication","text":"

Percona XtraDB Cluster provides support for replication of tables that use the MyISAM storage engine. Using MyISAM in a cluster is not recommended; if you use the storage engine, you do so at your own risk. Due to the non-transactional nature of MyISAM, the storage engine is not fully supported in Percona XtraDB Cluster.

    MyISAM replication is controlled using the wsrep_replicate_myisam variable, which is set to OFF by default. Due to its unreliability, MyISAM replication should not be enabled if you want to ensure data consistency.

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, you can set wsrep_replicate_myisam to any value.

    PERMISSIVE

    At startup, if wsrep_replicate_myisam is set to ON, a warning is logged and startup continues.

    At runtime, it is permitted to change wsrep_replicate_myisam to any value, but if you set it to ON, a warning is logged.

    ENFORCING or MASTER

    At startup, if wsrep_replicate_myisam is set to ON, an error is logged and startup is aborted.

    At runtime, any attempt to change wsrep_replicate_myisam to ON fails and an error is logged.

    Note

    The wsrep_replicate_myisam variable controls replication for MyISAM tables, and this validation only checks whether it is allowed. Undesirable operations for MyISAM tables are restricted using the Storage engine validation.

    "},{"location":"strict-mode.html#binary-log-format","title":"Binary log format","text":"

    Percona XtraDB Cluster supports only the default row-based binary logging format. In 8.0, setting the binlog_format variable to anything but ROW at startup or runtime is not allowed regardless of the value of the pxc_strict_mode variable.
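
For example, you can verify the current binary log format with a standard MySQL command (shown for illustration only):

mysql> SHOW VARIABLES LIKE 'binlog_format';\n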

    "},{"location":"strict-mode.html#tables-without-primary-keys","title":"Tables without primary keys","text":"

Percona XtraDB Cluster cannot properly propagate certain write operations to tables that do not have primary keys defined. Undesirable operations include data manipulation statements that write to such tables (especially DELETE).

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on a table without an explicit primary key defined.

    ENFORCING or MASTER

    At startup, no validation is performed.

    At runtime, any undesirable operation performed on a table without an explicit primary key is denied and an error is logged.
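
If you hit this validation, adding an explicit primary key to the table resolves it, for example (a sketch; the table and column names are hypothetical):

mysql> ALTER TABLE t1 ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;\n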

    "},{"location":"strict-mode.html#log-output","title":"Log output","text":"

Percona XtraDB Cluster does not support tables in the MySQL database as the destination for log output. By default, log entries are written to a file. This validation checks the value of the log_output variable.

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, you can set log_output to any value.

    PERMISSIVE

    At startup, if log_output is set only to TABLE, a warning is logged and startup continues.

    At runtime, it is permitted to change log_output to any value, but if you set it only to TABLE, a warning is logged.

    ENFORCING or MASTER

    At startup, if log_output is set only to TABLE, an error is logged and startup is aborted.

    At runtime, any attempt to change log_output only to TABLE fails and an error is logged.
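
For example, keeping FILE in the value satisfies this validation (a minimal illustration using a standard MySQL variable assignment):

mysql> SET GLOBAL log_output='FILE,TABLE';\n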

    "},{"location":"strict-mode.html#explicit-table-locking","title":"Explicit table locking","text":"

Percona XtraDB Cluster provides only tech-preview-level support for explicit table locking operations. The following undesirable operations lead to explicit table locking and are covered by this validation:

    Depending on the selected mode, the following happens:

    DISABLED or MASTER

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed.

    ENFORCING

    At startup, no validation is performed.

    At runtime, any undesirable operation is denied and an error is logged.

    "},{"location":"strict-mode.html#auto-increment-lock-mode","title":"Auto-increment lock mode","text":"

    The lock mode for generating auto-increment values must be interleaved to ensure that each node generates a unique (but non-sequential) identifier.

    This validation checks the value of the innodb_autoinc_lock_mode variable. By default, the variable is set to 1 (consecutive lock mode), but it should be set to 2 (interleaved lock mode).

    Depending on the strict mode selected, the following happens:

    DISABLED

    At startup, no validation is performed.

    PERMISSIVE

    At startup, if innodb_autoinc_lock_mode is not set to 2, a warning is logged and startup continues.

    ENFORCING or MASTER

    At startup, if innodb_autoinc_lock_mode is not set to 2, an error is logged and startup is aborted.

    Note

    This validation is not performed during runtime, because the innodb_autoinc_lock_mode variable cannot be set dynamically.
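
Because the variable is not dynamic, set it in the configuration file before starting the node, for example (a sketch of a my.cnf fragment):

[mysqld]\ninnodb_autoinc_lock_mode=2\n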

    "},{"location":"strict-mode.html#combine-schema-and-data-changes-in-a-single-statement","title":"Combine schema and data changes in a single statement","text":"

With strict mode set to ENFORCING, Percona XtraDB Cluster does not support CREATE TABLE \u2026 AS SELECT (CTAS) statements, because they combine both schema and data changes. Note that tables in the SELECT clause should be present on all replication nodes.

With strict mode set to PERMISSIVE or DISABLED, CREATE TABLE \u2026 AS SELECT (CTAS) statements are replicated using the TOI method to ensure consistency.

    In Percona XtraDB Cluster 5.7, CREATE TABLE \u2026 AS SELECT (CTAS) statements were replicated using DML write-sets when strict mode was set to PERMISSIVE or DISABLED.

    Important

MyISAM tables are created and loaded even if wsrep_replicate_myisam is set to 1. Percona does not recommend using the MyISAM storage engine with Percona XtraDB Cluster. Support for MyISAM may be removed in a future release.

    See also

    MySQL Bug System: XID inconsistency on master-slave with CTAS https://bugs.mysql.com/bug.php?id=93948

    Depending on the strict mode selected, the following happens:

DISABLED

At startup, no validation is performed.

At runtime, all operations are permitted.

PERMISSIVE

At startup, no validation is performed.

At runtime, all operations are permitted, but a warning is logged when a CREATE TABLE \u2026 AS SELECT (CTAS) operation is performed.

ENFORCING

At startup, no validation is performed.

At runtime, any CTAS operation is denied and an error is logged.

    Important

Although CREATE TABLE \u2026 AS SELECT (CTAS) operations for temporary tables are permitted even in STRICT mode, temporary tables should not be used as source tables in CREATE TABLE \u2026 AS SELECT (CTAS) operations, because temporary tables are not present on all nodes.

If node-1 has a temporary and a non-temporary table with the same name, CREATE TABLE \u2026 AS SELECT (CTAS) on node-1 will use the temporary table, while CREATE TABLE \u2026 AS SELECT (CTAS) on node-2 will use the non-temporary table, resulting in a data-level inconsistency.

    "},{"location":"strict-mode.html#discard-and-import-tablespaces","title":"Discard and import tablespaces","text":"

    DISCARD TABLESPACE and IMPORT TABLESPACE are not replicated using TOI. This can lead to data inconsistency if executed on only one node.
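
For reference, these are the kinds of statements covered by this validation (shown only to illustrate what is affected; the table name t1 is hypothetical):

mysql> ALTER TABLE t1 DISCARD TABLESPACE;\nmysql> ALTER TABLE t1 IMPORT TABLESPACE;\n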

    Depending on the strict mode selected, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when you discard or import a tablespace.

    ENFORCING

    At startup, no validation is performed.

    At runtime, discarding or importing a tablespace is denied and an error is logged.

    "},{"location":"strict-mode.html#major-version-check","title":"Major version check","text":"

    This validation checks that the protocol version is the same as the server major version. This validation protects the cluster against writes attempted on already upgraded nodes.

    Expected output
    ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of multiple major versions while accepting write workload with pxc_strict_mode = ENFORCING or MASTER\n
    "},{"location":"strict-mode.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"tarball.html","title":"Install Percona XtraDB Cluster from Binary Tarball","text":"

    Percona provides generic tarballs with all required files and binaries for manual installation.

    You can download the appropriate tarball package from https://www.percona.com/downloads/Percona-XtraDB-Cluster-80

    "},{"location":"tarball.html#version-updates","title":"Version updates","text":"

    Starting with Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section lists only full or minimal tar files. Each tarball file replaces the multiple tar file listing used in earlier versions and supports all distributions.

    Important

    Starting with Percona XtraDB Cluster 8.0.21, Percona does not provide a tarball for RHEL 6/CentOS 6 (glibc2.12).

    The version number in the tarball name must be substituted with the appropriate version number for your system. To indicate that such a substitution is needed in statements, we use <version-number>.

Name Type Description Percona-XtraDB-Cluster_<version-number>-Linux.x86_64.glibc2.17.tar.gz Full Contains binary files, libraries, test files, and debug symbols Percona-XtraDB-Cluster_<version-number>-Linux.x86_64.glibc2.17.minimal.tar.gz Minimal Contains binary files and libraries but does not include test files or debug symbols

    For installations before Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section contains multiple tarballs based on the operating system names:

    Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.bionic.tar.gz\nPercona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.buster.tar.gz\n...\n

    For example, you can use curl as follows:

    $ curl -O https://downloads.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-8.0.27/binary/tarball/Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz\n
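
After downloading, you would typically unpack the tarball into a target directory, for example (a sketch; the file name matches the curl example above and /opt is an arbitrary destination that must already exist):

$ tar xzf Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz -C /opt\n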

    Check your system to make sure the packages that the PXC version requires are installed.

    "},{"location":"tarball.html#for-debian-or-ubuntu","title":"For Debian or Ubuntu:","text":"
    $ sudo apt-get install -y \\\nsocat libdbd-mysql-perl \\\nlibaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \\\nlibgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1\n
    "},{"location":"tarball.html#for-red-hat-enterprise-linux-or-centos","title":"For Red Hat Enterprise Linux or CentOS:","text":"
$ sudo yum install -y openssl socat \\\nprocps-ng chkconfig coreutils shadow-utils\n
    "},{"location":"tarball.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"telemetry.html","title":"Telemetry on Percona XtraDB Cluster","text":"

Percona telemetry fills in the gaps in our understanding of how you use Percona XtraDB Cluster so that we can improve our products. Participation in this anonymous program is optional. You can opt out if you prefer not to share this information.

    "},{"location":"telemetry.html#what-information-is-collected","title":"What information is collected","text":"

    At this time, telemetry is added only to the Percona packages and Docker images. Percona XtraDB Cluster collects only information about the installation environment. Future releases may add additional metrics.

    Be assured that access to this raw data is rigorously controlled. Percona does not collect personal data. All data is anonymous and cannot be traced to a specific user. To learn more about our privacy practices, read our Percona Privacy statement.

    An example of the data collected is the following:

    [{\"id\" : \"c416c3ee-48cd-471c-9733-37c2886f8231\",\n\"product_family\" : \"PRODUCT_FAMILY_PXC\",\n\"instanceId\" : \"6aef422e-56a7-4530-af9d-94cc02198343\",\n\"createTime\" : \"2023-10-16T10:46:23Z\",\n\"metrics\":\n[{\"key\" : \"deployment\",\"value\" : \"PACKAGE\"},\n{\"key\" : \"pillar_version\",\"value\" : \"8.0.34-26\"},\n{\"key\" : \"OS\",\"value\" : \"Oracle Linux Server 8.8\"},\n{\"key\" : \"hardware_arch\",\"value\" : \"x86_64 x86_64\"}]}]\n
    "},{"location":"telemetry.html#disable-telemetry","title":"Disable telemetry","text":"

    Starting with Percona XtraDB Cluster 8.0.34-26-1, telemetry is enabled by default. If you decide not to send usage data to Percona, you can set the PERCONA_TELEMETRY_DISABLE=1 environment variable for either the root user or in the operating system prior to the installation process.

The following examples show how to set the variable on a Debian-derived distribution, on a Red Hat-derived distribution, and in Docker.

    Add the environment variable before the install process.

    $ sudo PERCONA_TELEMETRY_DISABLE=1 apt install percona-xtradb-cluster\n

    Add the environment variable before the install process.

    $ sudo PERCONA_TELEMETRY_DISABLE=1 yum install percona-xtradb-cluster\n

    Add the environment variable when running a command in a new container.

    $ docker run -d -e MYSQL_ROOT_PASSWORD=test1234# -e PERCONA_TELEMETRY_DISABLE=1 -e CLUSTER_NAME=pxc-cluster1 --name=pxc-node1 percona/percona-xtradb-cluster:8.0\n
    "},{"location":"telemetry.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"threading-model.html","title":"Percona XtraDB Cluster threading model","text":"

    Percona XtraDB Cluster creates a set of threads to service its operations, which are not related to existing MySQL threads. There are three main groups of threads:

    "},{"location":"threading-model.html#applier-threads","title":"Applier threads","text":"

    Applier threads apply write-sets that the node receives from other nodes. Write messages are directed through gcv_recv_thread.

The number of applier threads is controlled using the wsrep_slave_threads variable or the wsrep_applier_threads variable. The wsrep_slave_threads variable was deprecated in the Percona XtraDB Cluster 8.0.26-16 release. The default value is 1, which means at least one wsrep applier thread exists to process requests.

An applier thread waits for an event and, once it receives one, applies it using the normal replica apply routine path and the relay log info apply path, with wsrep customization. These threads are similar to replica worker threads (but not exactly the same).

    Coordination is achieved using Apply and Commit Monitor. A transaction passes through two important states: APPLY and COMMIT. Every transaction registers itself with an apply monitor, where its apply order is defined. So all transactions with apply order sequence number (seqno) of less than this transaction\u2019s sequence number, are applied before applying this transaction. The same is done for commit as well (last_left >= trx_.depends_seqno()).

    "},{"location":"threading-model.html#rollback-thread","title":"Rollback thread","text":"

    There is only one rollback thread to perform rollbacks in case of conflicts.

    All the transactions that need to be rolled back are added to the rollback queue, and the rollback thread is notified. The rollback thread then iterates over the queue and performs rollback operations.

If a transaction is active on a node, and the node receives a write-set from the cluster group that conflicts with the local active transaction, the local transaction is always treated as the victim transaction and is rolled back.

    Transactions can be in a commit state or an execution stage when the conflict arises. Local transactions in the execution stage are forcibly killed so that the waiting applier transaction is allowed to proceed. Local transactions in the commit stage fail with a certification error.

    "},{"location":"threading-model.html#other-threads","title":"Other threads","text":""},{"location":"threading-model.html#service-thread","title":"Service thread","text":"

    This thread is created during boot-up and used to perform auxiliary services. It has two main functions:

    "},{"location":"threading-model.html#receiving-thread","title":"Receiving thread","text":"

    The gcs_recv_thread thread is the first one to see all the messages received in a group.

It tries to assign an action to each message it receives. It adds these messages to a central FIFO queue, from which they are then processed by the applier threads. Messages can include different operations such as state changes, configuration updates, flow control, and so on.

    One important action is processing a write-set, which actually is applying transactions to database objects.

    "},{"location":"threading-model.html#gcomm-connection-thread","title":"Gcomm connection thread","text":"

    The gcomm connection thread GCommConn::run_fn is used to co-ordinate the low-level group communication activity. Think of it as a black box meant for communication.

    "},{"location":"threading-model.html#action-based-threads","title":"Action-based threads","text":"

Besides the above, some threads are created on an as-needed basis. SST creates threads for the donor and the joiner (which eventually forks out a child process to host the needed SST script), IST creates receiver and async sender threads, and PageStore creates a background thread for removing the files that were created.

    If the checksum is enabled and the replicated write-set is big enough, the checksum is done as part of a separate thread.

    "},{"location":"threading-model.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"trademark-policy.html","title":"Trademark policy","text":"

    This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.

    Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.

    Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.

    First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.

    Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.

    Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.

    Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.

    Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.

    In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.

    In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.

    "},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"upgrade-from-backup.html","title":"Restore a 5.7 backup to an 8.0 cluster","text":"

    Use Percona XtraBackup to back up the source server data and restore the data to a target server, and then upgrade the server to a different version of Percona XtraDB Cluster.

    Downgrading is not supported.

    "},{"location":"upgrade-from-backup.html#restore-a-database-with-a-different-server-version","title":"Restore a database with a different server version","text":"

    Review Upgrade Percona XtraDB cluster.

    Upgrade the nodes one at a time. The primary node should be the last node to be upgraded. The following steps are required on each node.

    1. Back up the data on the source server.

    2. Install the same database version as the source server on the target server.

    3. Restore with a copy-back operation on the target server.

    4. Start the database server on the target server.

5. Do a slow shutdown of the database server with the SET GLOBAL innodb_fast_shutdown=0 statement (see the example after these steps). This shutdown type flushes InnoDB operations before completing and may take longer.

    6. Install the new database server version on the target server.

    7. Start the new database server version on the restored data directory.

    8. Perform any other upgrade steps as necessary.
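
The slow shutdown mentioned in step 5 can be performed from the MySQL client, for example (a minimal sketch; the SHUTDOWN statement requires the SHUTDOWN privilege):

mysql> SET GLOBAL innodb_fast_shutdown=0;\nmysql> SHUTDOWN;\n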

    To ensure the upgrade was successful, check the data.

    "},{"location":"upgrade-from-backup.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"upgrade-guide.html","title":"Upgrade Percona XtraDB Cluster","text":"

    The following documents contain details about relevant changes in the 8.0 series of MySQL and Percona Server for MySQL. Make sure you deal with any incompatible features and variables mentioned in these documents when upgrading to Percona XtraDB Cluster 8.0.

    "},{"location":"upgrade-guide.html#important-changes-in-percona-xtradb-cluster-80","title":"Important changes in Percona XtraDB Cluster 8.0","text":""},{"location":"upgrade-guide.html#traffic-encryption-is-enabled-by-default","title":"Traffic encryption is enabled by default","text":"

    The pxc_encrypt_cluster_traffic variable, which enables traffic encryption, is set to ON by default in Percona XtraDB Cluster 8.0.

If you do not configure a node accordingly (each node in your cluster must use the same SSL certificates), or if you try to join a cluster running PXC 5.7 with unencrypted cluster traffic, the node will not be able to join, resulting in an error.

    The error message
    ... [ERROR] ... [Galera] handshake with remote endpoint ...\nThis error is often caused by SSL issues. ...\n

    See also

    sections Encrypting PXC Traffic, Configuring Nodes for Write-Set Replication

    "},{"location":"upgrade-guide.html#not-recommended-to-mix-pxc-57-nodes-with-pxc-80-nodes","title":"Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes","text":"

    Shut down the cluster and upgrade each node to PXC 8.0. It is important that you make backups before attempting an upgrade.

    "},{"location":"upgrade-guide.html#pxc-strict-mode-is-enabled-by-default","title":"PXC strict mode is enabled by default","text":"

    Percona XtraDB Cluster in 8.0 runs with PXC Strict Mode enabled by default. This will deny any unsupported operations and may halt the server if a strict mode validation fails. It is recommended to first start the node with the pxc_strict_mode variable set to PERMISSIVE in the MySQL configuration file.

    All configuration settings are stored in the default MySQL configuration file:

    After you check the log for any tech preview features or unsupported features and you have fixed any of the encountered incompatibilities, set the variable back to ENFORCING at run time:

    mysql> SET pxc_strict_mode=ENFORCING;\n

Restarting the node with the updated configuration file also sets the variable to ENFORCING.

    "},{"location":"upgrade-guide.html#the-configuration-file-layout-has-changed-in-pxc-80","title":"The configuration file layout has changed in PXC 8.0","text":"

    All configuration settings are stored in the default MySQL configuration file:

    Before you start the upgrade, move your custom settings from /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf (on Debian and Ubuntu) or from /etc/percona-xtradb-cluster.conf.d/wsrep.cnf (on Red Hat and CentOS) to the new location accordingly.

    Note

    If you have moved your my.cnf file to a different location and added a symlink to /etc/my.cnf, the RPM package manager, when upgrading, can delete the symlink and put a default my.cnf file in /etc/.

    "},{"location":"upgrade-guide.html#caching_sha2_password-is-the-default-authentication-plugin","title":"caching_sha2_password is the default authentication plugin","text":"

In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password. The ProxySQL option --syncusers will not work if the Percona XtraDB Cluster user is created using caching_sha2_password. Use the mysql_native_password authentication plugin in these cases.

    Be sure you are running on the latest 5.7 version before you upgrade to 8.0.

    "},{"location":"upgrade-guide.html#mysql_upgrade-is-part-of-sst","title":"mysql_upgrade is part of SST","text":"

    mysql_upgrade is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.

    "},{"location":"upgrade-guide.html#major-upgrade-scenarios","title":"Major upgrade scenarios","text":"

    Upgrading PXC from 5.7 to 8.0 may have slightly different strategies depending on the configuration and workload on your PXC cluster.

Note that the new default value of pxc-encrypt-cluster-traffic (set to ON versus OFF in PXC 5.7) requires additional care. You cannot join a 5.7 node to a PXC 8.0 cluster unless the node has traffic encryption enabled, because a cluster cannot contain a mix of nodes with traffic encryption enabled and nodes with traffic encryption disabled. For more information, see Traffic encryption is enabled by default.

    "},{"location":"upgrade-guide.html#scenario-no-active-parallel-workload-or-with-read-only-workload","title":"Scenario: No active parallel workload or with read-only workload","text":"

If there is no active parallel workload or the cluster has a read-only workload while upgrading the nodes, complete the following procedure for each node in the cluster:

1. Shut down one of the 5.7 cluster nodes.

2. Remove the 5.7 PXC packages without removing the data directory.

    3. Install PXC 8.0 packages.

    4. Restart the mysqld service.

    Important

    Before upgrading, make sure your application can work with a reduced cluster size. If the cluster operates with an even number of nodes, the cluster may have split-brain.

This upgrade flow auto-detects the presence of the 5.7 data directory and triggers the upgrade as part of the node boot-up process. The data directory is upgraded to be compatible with PXC 8.0. Then the node joins the cluster and enters the synced state. The three-node cluster is restored with two nodes running PXC 5.7 and one node running PXC 8.0.

    Note

Since SST is not involved, the SST-based auto-upgrade flow is not started.

PXC 8.0 uses Galera 4, while PXC 5.7 uses Galera 3. The cluster continues to use protocol version 3, used in Galera 3, effectively limiting some of the functionality. With all nodes upgraded to version 8.0, protocol version 4 is applied.

    Tip

    The protocol version is stored in the protocol_version column of the wsrep_cluster table.

    mysql> USE mysql;\n
    mysql> SELECT protocol_version from wsrep_cluster;\n

    The example of the output is the following:

    +------------------+\n| protocol_version |\n+------------------+\n|                4 |\n+------------------+\n1 row in set (0.00 sec)\n

    As soon as the last 5.7 node shuts down, the configuration of the remaining two nodes is updated to use protocol version 4. A new upgraded node will then join using protocol version 4 and the whole cluster will maintain protocol version 4 enabling the support for additional Galera 4 facilities.

Joining the last upgraded node may take longer, since it uses IST to obtain the configuration changes.

    Note

Starting from Galera 4, configuration changes are cached to gcache and are donated as part of IST or SST to help build the certification queue on the JOINING node. Other nodes (say n2 and n3), already using protocol version 4, donate the configuration changes when the JOINER node is booted.

The situation was different for the nodes upgraded earlier, since the donation of configuration changes is not supported by protocol version 3, which they were using at the time.

    With IST involved on joining the last node, the smart IST flow is triggered to take care of the upgrade even before MySQL starts to look at the data directory.

    Important

    It is not recommended to restart the last node without upgrading it.

    "},{"location":"upgrade-guide.html#scenario-upgrade-from-pxc-56-to-pxc-80","title":"Scenario: Upgrade from PXC 5.6 to PXC 8.0","text":"

    First, upgrade PXC from 5.6 to the latest version of PXC 5.7. Then proceed with the upgrade using the procedure described in Scenario: No active parallel workload or with read-only workload.

    "},{"location":"upgrade-guide.html#minor-upgrade","title":"Minor upgrade","text":"

    To upgrade the cluster, follow these steps for each node:

    1. Make sure that all nodes are synchronized.

    2. Stop the mysql service:

      $ sudo service mysql stop\n
    3. Upgrade Percona XtraDB Cluster and Percona XtraBackup packages. For more information, see Installing Percona XtraDB Cluster.

4. Back up grastate.dat, so that you can restore it if it is corrupted or zeroed out due to a network issue (see the example after these steps).

5. Now start the cluster node with the 8.0 packages installed. PXC will upgrade the data directory as needed, either as part of the startup process or a state transfer (IST/SST).

      In most cases, starting the mysql service should run the node with your previous configuration. For more information, see Adding Nodes to Cluster.

      $ sudo service mysql start\n

      Note

      On CentOS, the /etc/my.cnf configuration file is renamed to my.cnf.rpmsave. Make sure to rename it back before joining the upgraded node back to the cluster.

      PXC Strict Mode is enabled by default, which may result in denying any unsupported operations and may halt the server. For more information, see pxc-strict-mode is enabled by default.

      pxc-encrypt-cluster-traffic is enabled by default. You need to configure each node accordingly and avoid joining a cluster with unencrypted cluster traffic. For more information, see Traffic encryption is enabled by default.

    6. Repeat this procedure for the next node in the cluster until you upgrade all nodes.
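
The grastate.dat backup mentioned in step 4 can be a simple file copy, for example (a sketch; /var/lib/mysql is assumed to be the data directory and the destination path is arbitrary):

$ sudo cp /var/lib/mysql/grastate.dat /root/grastate.dat.bak\n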

    "},{"location":"upgrade-guide.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"verify-replication.html","title":"Verify replication","text":"

    Use the following procedure to verify replication by creating a new database on the second node, creating a table for that database on the third node, and adding some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@pxc2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@pxc3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve rows from that table on the second node:

      mysql@pxc2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n
    "},{"location":"verify-replication.html#next-steps","title":"Next steps","text":""},{"location":"verify-replication.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"virtual-sandbox.html","title":"Set up a testing environment with ProxySQL","text":"

    This section describes how to set up Percona XtraDB Cluster in a virtualized testing environment based on ProxySQL. To test the cluster, we will use the sysbench benchmark tool.

It is assumed that each PXC node is installed on an Amazon EC2 micro instance running CentOS 7. However, the information in this section should apply if you use another virtualization technology (for example, VirtualBox) with any Linux distribution.

Each of the three Percona XtraDB Cluster nodes is installed on a separate virtual machine. One more virtual machine runs ProxySQL, which redirects requests to the nodes.

    Tip

    Running ProxySQL on an application server, instead of having it as a dedicated entity, removes the unnecessary extra network roundtrip, because the load balancing layer in Percona XtraDB Cluster scales well with application servers.

    1. Install Percona XtraDB Cluster on three cluster nodes, as described in Configuring Percona XtraDB Cluster on CentOS.

    2. On the client node, install ProxySQL and sysbench:

      $ yum -y install proxysql2 sysbench\n
    3. When all cluster nodes are started, configure ProxySQL using the admin interface.

      Tip

      To connect to the ProxySQL admin interface, you need a mysql client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally.

      To connect to the admin interface, use the credentials, host name and port specified in the global variables.

      Warning

      Do not use default credentials in production!

      The following example shows how to connect to the ProxySQL admin interface with default credentials (assuming that ProxySQL IP is 192.168.70.74):

      root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
      Expected output
      Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n

      To see the ProxySQL databases and tables use the SHOW DATABASES and SHOW TABLES commands:

      mysql> SHOW DATABASES;\n

      The following output shows the list of the ProxySQL databases:

      Expected output
      +-----+---------------+-------------------------------------+\n| seq | name          | file                                |\n+-----+---------------+-------------------------------------+\n| 0   | main          |                                     |\n| 2   | disk          | /var/lib/proxysql/proxysql.db       |\n| 3   | stats         |                                     |\n| 4   | monitor       |                                     |\n| 5   | stats_monitor | /var/lib/proxysql/proxysql_stats.db |\n+-----+---------------+-------------------------------------+\n5 rows in set (0.00 sec)\n
      mysql> SHOW TABLES;\n

      The following output shows the list of tables:

      Expected output
      +----------------------------------------------------+\n| tables                                             |\n+----------------------------------------------------+\n| global_variables                                   |\n| mysql_aws_aurora_hostgroups                        |\n| mysql_collations                                   |\n| mysql_firewall_whitelist_rules                     |\n| mysql_firewall_whitelist_sqli_fingerprints         |\n| mysql_firewall_whitelist_users                     |\n| mysql_galera_hostgroups                            |\n| mysql_group_replication_hostgroups                 |\n| mysql_query_rules                                  |\n| mysql_query_rules_fast_routing                     |\n| mysql_replication_hostgroups                       |\n| mysql_servers                                      |\n| mysql_users                                        |\n| proxysql_servers                                   |\n| restapi_routes                                     |\n| runtime_checksums_values                           |\n| runtime_global_variables                           |\n| runtime_mysql_aws_aurora_hostgroups                |\n| runtime_mysql_firewall_whitelist_rules             |\n| runtime_mysql_firewall_whitelist_sqli_fingerprints |\n| runtime_mysql_firewall_whitelist_users             |\n| runtime_mysql_galera_hostgroups                    |\n| runtime_mysql_group_replication_hostgroups         |\n| runtime_mysql_query_rules                          |\n| runtime_mysql_query_rules_fast_routing             |\n| runtime_mysql_replication_hostgroups               |\n| runtime_mysql_servers                              |\n| runtime_mysql_users                                |\n| runtime_proxysql_servers                           |\n| runtime_restapi_routes                             |\n| runtime_scheduler                                  |\n| scheduler                                          |\n+----------------------------------------------------+\n32 rows in set (0.00 sec)\n

      For more information about admin databases and tables, see Admin Tables

      Note

      ProxySQL has 3 areas where the configuration can reside:

      • MEMORY (your current working place)

      • RUNTIME (the production settings)

      • DISK (durable configuration, saved inside an SQLITE database)

When you change a parameter, you change it in the MEMORY area. That is done by design, to allow you to test the changes before pushing them to production (RUNTIME) or saving them to disk.

    "},{"location":"virtual-sandbox.html#adding-cluster-nodes-to-proxysql","title":"Adding cluster nodes to ProxySQL","text":"

    To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.

    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.71',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.72',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.73',10,3306,1000);\n

ProxySQL v2.0 supports PXC natively. It uses the concept of hostgroups (see the value of hostgroup_id in the mysql_servers table) to group cluster nodes and to balance the load in a cluster by routing different types of traffic to different groups.

    This information is stored in the [runtime_]mysql_galera_hostgroups table.

    Columns of the [runtime_]mysql_galera_hostgroups table

Column name Description writer_hostgroup The ID of the hostgroup that refers to the WRITER node backup_writer_hostgroup The ID of the hostgroup that contains candidate WRITER servers reader_hostgroup The ID of the hostgroup that contains candidate READER servers offline_hostgroup The ID of the hostgroup that will eventually contain the WRITER node that will be put OFFLINE active 1 (Yes) to indicate that this configuration should be used; 0 (No) otherwise max_writers The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is 1. The value in this column may not exceed the total number of nodes. writer_is_also_reader 1 (Yes) to keep the given node in both reader_hostgroup and writer_hostgroup. 0 (No) to remove the given node from reader_hostgroup if it already belongs to writer_hostgroup. max_transactions_behind As soon as the value of wsrep_local_recv_queue exceeds the number stored in this column, the given node is set to OFFLINE. Set the value carefully based on the behaviour of the node. comment Helpful extra information about the given node

    Make sure that the variable mysql-server_version refers to the correct version. For Percona XtraDB Cluster 8.0, set it to 8.0 accordingly:

    mysql> UPDATE GLOBAL_VARIABLES\nSET variable_value='8.0'\nWHERE variable_name='mysql-server_version';\n\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n

    See also

    Percona Blogpost: ProxySQL Native Support for Percona XtraDB Cluster (PXC) https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/

    Given the nodes from the mysql_servers table, you may set up the hostgroups as follows:

    mysql> INSERT INTO mysql_galera_hostgroups (\nwriter_hostgroup, backup_writer_hostgroup, reader_hostgroup,\noffline_hostgroup, active, max_writers, writer_is_also_reader,\nmax_transactions_behind)\nVALUES (10, 12, 11, 13, 1, 1, 2, 100);\n

    This command configures ProxySQL as follows:

    WRITER hostgroup

    hostgroup `10`\n

    READER hostgroup

    hostgroup `11`\n

    BACKUP WRITER hostgroup

    hostgroup `12`\n

    OFFLINE hostgroup

    hostgroup `13`\n

    Set up ProxySQL query rules for read/write split using the mysql_query_rules table:

    mysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',10,1,'^SELECT.*FOR UPDATE',1);\n\nmysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',11,1,'^SELECT ',1);\n\nmysql> LOAD MYSQL QUERY RULES TO RUNTIME;\nmysql> SAVE MYSQL QUERY RULES TO DISK;\n\nmysql> select hostgroup_id,hostname,port,status,weight from runtime_mysql_servers;\n
    Expected output
    +--------------+----------------+------+--------+--------+\n| hostgroup_id | hostname       | port | status | weight |\n+--------------+----------------+------+--------+--------+\n| 10           | 192.168.70.73 | 3306  | ONLINE | 1000   |\n| 11           | 192.168.70.72 | 3306  | ONLINE | 1000   |\n| 11           | 192.168.70.71 | 3306  | ONLINE | 1000   |\n| 12           | 192.168.70.72 | 3306  | ONLINE | 1000   |\n| 12           | 192.168.70.71 | 3306  | ONLINE | 1000   |\n+--------------+----------------+------+--------+--------+\n5 rows in set (0.00 sec)\n

    See also

    ProxySQL Blog: MySQL read/write split with ProxySQL https://proxysql.com/blog/configure-read-write-split/ ProxySQL Documentation: mysql_query_rules table https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules

    "},{"location":"virtual-sandbox.html#proxysql-failover-behavior","title":"ProxySQL failover behavior","text":"

Notice that all servers were inserted into the mysql_servers table with the hostgroup_id column set to 10, which is the WRITER hostgroup configured above:

    mysql> SELECT * FROM mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------+     +---------+\n| hostgroup_id | hostname      | port | weight | ... | comment |\n+--------------+---------------+------+--------+     +---------+\n| 10           | 192.168.70.71 | 3306 | 1000   |     |         |\n| 10           | 192.168.70.72 | 3306 | 1000   |     |         |\n| 10           | 192.168.70.73 | 3306 | 1000   |     |         |\n+--------------+---------------+------+--------+     +---------+\n3 rows in set (0.00 sec)\n

This configuration implies that ProxySQL elects the writer automatically. If the elected writer goes offline, ProxySQL assigns another one (failover). You might tweak this mechanism by assigning a higher weight to a selected node. ProxySQL directs all write requests to this node. However, it also becomes the most utilized node for read requests. In case of a failback (a node is put back online), the node with the highest weight is automatically elected for write requests.
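
For example, to prefer one node for writes, you could raise its weight and then load and persist the change (a sketch; the IP address comes from the examples above and the weight value is arbitrary):

mysql> UPDATE mysql_servers SET weight=10000 WHERE hostname='192.168.70.71';\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n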

    "},{"location":"virtual-sandbox.html#creating-a-proxysql-monitoring-user","title":"Creating a ProxySQL monitoring user","text":"

    To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE privilege on any node in the cluster and configure the user in ProxySQL.

    The following example shows how to add a monitoring user on Node 2:

    mysql> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'ProxySQLPa55';\nmysql> GRANT USAGE ON *.* TO 'proxysql'@'%';\n

    The following example shows how to configure this user on the ProxySQL node:

    mysql> UPDATE global_variables SET variable_value='proxysql'\nWHERE variable_name='mysql-monitor_username';\n\nmysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\nWHERE variable_name='mysql-monitor_password';\n
    "},{"location":"virtual-sandbox.html#saving-and-loading-the-configuration","title":"Saving and loading the configuration","text":"

    To load this configuration at runtime, issue the LOAD command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue the SAVE command.

    mysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql> SAVE MYSQL VARIABLES TO DISK;\n

    To ensure that monitoring is enabled, check the monitoring logs:

    mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+----------------------+---------------+\n| hostname      | port | time_start_us    | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627                 | NULL          |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447                 | NULL          |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
    mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+-------------------+------------+\n| hostname      | port | time_start_us    | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948               | NULL       |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803               | NULL       |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711               | NULL       |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783               | NULL       |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631               | NULL       |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542               | NULL       |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n

    The previous examples show that ProxySQL is able to connect and ping the nodes you added.

    To enable monitoring of these nodes, load them at runtime:

    mysql> LOAD MYSQL SERVERS TO RUNTIME;\n
    "},{"location":"virtual-sandbox.html#creating-proxysql-client-user","title":"Creating ProxySQL Client User","text":"

    ProxySQL must have users that can access backend nodes to manage connections.

To add a user, insert credentials into the mysql_users table:

    mysql> INSERT INTO mysql_users (username,password) VALUES ('appuser','$3kRetp@$sW0rd');\n

    The example of the output is the following:

    Expected output
    Query OK, 1 row affected (0.00 sec)\n

    Note

    ProxySQL currently doesn\u2019t encrypt passwords.

    See also

    More information about password encryption in ProxySQL

    Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):

    mysql> LOAD MYSQL USERS TO RUNTIME;\nmysql> SAVE MYSQL USERS TO DISK;\n

    To confirm that the user has been set up correctly, you can try to log in:

root@proxysql:~# mysql -u appuser -p'$3kRetp@$sW0rd' -h 127.0.0.1 -P 6033\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n

    To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

    mysql> CREATE USER 'appuser'@'192.168.70.74'\nIDENTIFIED WITH mysql_native_password by '$3kRetp@$sW0rd';\n\nmysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74';\n
    "},{"location":"virtual-sandbox.html#testing-the-cluster-with-the-sysbench-benchmark-tool","title":"Testing the cluster with the sysbench benchmark tool","text":"

    After you set up Percona XtraDB Cluster in your testing environment, you can test it using the sysbench benchmarking tool.

    1. Create a database (sysbenchdb in this example; you can use a different name):

      mysql> CREATE DATABASE sysbenchdb;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Populate the table with data for the benchmark. Note that you should pass the database you have created as the value of the --mysql-db parameter, and the name of the user who has full access to this database as the value of the --mysql-user parameter:

$ sysbench /usr/share/sysbench/oltp_insert.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password='$3kRetp@$sW0rd' --db-driver=mysql --threads=10 --tables=10 \\\n--table-size=1000 prepare\n
    3. Run the benchmark on port 6033:

      $ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--skip-trx=true --table-size=1000 --time=100 --report-interval=10 run\n
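
    After the benchmark completes, you can optionally drop the test tables with the sysbench cleanup command. The following sketch reuses the same connection parameters as the previous steps:

    $ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --tables=10 cleanup\n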

    Related sections and additional reading

    "},{"location":"virtual-sandbox.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-files-index.html","title":"Index of files created by PXC","text":"

    This file (gvwstate.dat) is used for the Primary Component recovery feature. The file is created when a primary component is formed or changed, so you can always determine the latest primary component that this node was part of. The file is deleted when the node is shut down gracefully.

    The first part contains the node UUID information. The second part contains the view information, which is written between #vwbeg and #vwend. The view information consists of:

    * view_id: [view_type] [view_uuid] [view_seq]. - `view_type` is always `3` which means primary view. `view_uuid` and `view_seq` identify a unique view, which could be perceived as the identifier of this primary component.\n\n* bootstrap: [bootstrap_or_not]. - it could be `0` or `1`, but it does not affect the primary component recovery process now.\n\n* member: [node\u2019s uuid] [node\u2019s segment]. - it represents all nodes in this primary component.\n\n??? example \"Example of the file\"\n\n    ```{.text .no-copy}\n    my_uuid: c5d5d990-30ee-11e4-aab1-46d0ed84b408\n    #vwbeg\n    view_id: 3 bc85bd53-31ac-11e4-9895-1f2ce13f2542 2 \n    bootstrap: 0\n    member: bc85bd53-31ac-11e4-9895-1f2ce13f2542 0\n    member: c5d5d990-30ee-11e4-aab1-46d0ed84b408 0\n    #vwend\n    ```\n
    "},{"location":"wsrep-files-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-provider-index.html","title":"Index of wsrep_provider options","text":"

    The following variables can be set and checked in the wsrep_provider_options variable. The value of the variable can be changed in the MySQL configuration file, my.cnf, or by setting the variable value in the MySQL client.

    To change the value in my.cnf, the following syntax should be used:

    wsrep_provider_options=\"variable1=value1;[variable2=value2]\"\n

    For example, to set the size of the Galera buffer storage to 512 MB, specify the following in my.cnf:

    wsrep_provider_options=\"gcache.size=512M\"\n

    Dynamic variables can be changed from the MySQL client using the SET GLOBAL command. For example, to change the value of pc.ignore_sb, use the following command:

    mysql> SET GLOBAL wsrep_provider_options=\"pc.ignore_sb=true\";\n
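
    To review the options that are currently in effect, you can query the variable from the MySQL client. For example:

    mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';\n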
    "},{"location":"wsrep-provider-index.html#index","title":"Index","text":""},{"location":"wsrep-provider-index.html#base_dir","title":"base_dir","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of datadir

    This variable specifies the data directory.

    "},{"location":"wsrep-provider-index.html#base_host","title":"base_host","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address

    This variable sets the value of the node\u2019s base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.

    "},{"location":"wsrep-provider-index.html#base_port","title":"base_port","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 4567

    This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.

    "},{"location":"wsrep-provider-index.html#certlog_conflicts","title":"cert.log_conflicts","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: no

    This variable is used to specify if the details of the certification failures should be logged.

    "},{"location":"wsrep-provider-index.html#certoptimistic_pa","title":"cert.optimistic_pa","text":"

    Enabled

    Allows the full range of parallelization as determined by the certification\nalgorithm.\n

    Disabled

    Limits the parallel applying window so that it does not exceed the parallel\napplying window seen on the source. In this case, the action starts applying\nno sooner than all actions on the source are committed.\n
    Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: No

    See also

    Galera Cluster Documentation: * Parameter: cert.optimistic_pa * Setting parallel slave threads

    "},{"location":"wsrep-provider-index.html#debug","title":"debug","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: no

    When this variable is set to yes, debugging is enabled.

    "},{"location":"wsrep-provider-index.html#evsauto_evict","title":"evs.auto_evict","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0

    The number of entries allowed on the delayed list until auto eviction takes place. Setting the value to 0 disables the auto eviction protocol on the node, though node response times will still be monitored. EVS protocol version (evs.version) 1 is required to enable auto eviction.

    "},{"location":"wsrep-provider-index.html#evscausal_keepalive_period","title":"evs.causal_keepalive_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of evs.keepalive_period

    This variable is used for development purposes and shouldn\u2019t be used by regular users.

    "},{"location":"wsrep-provider-index.html#evsdebug_log_mask","title":"evs.debug_log_mask","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0x1

    This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when wsrep_debug is set to ON.

    "},{"location":"wsrep-provider-index.html#evsdelay_margin","title":"evs.delay_margin","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT1S

    The time period that a node can delay its response beyond the expected time before it is added to the delayed list. The value must be higher than the highest RTT between nodes.

    "},{"location":"wsrep-provider-index.html#evsdelayed_keep_period","title":"evs.delayed_keep_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S

    The time period that a node must remain responsive before one entry is removed from the delayed list.

    "},{"location":"wsrep-provider-index.html#evsevict","title":"evs.evict","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes

    Manual eviction can be triggered by setting evs.evict to the UUID of a certain node. Setting evs.evict to an empty string clears the evict list on the node where it was set.
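
    For example, to evict a node manually, set the option to the UUID of that node (reported by the wsrep_gcomm_uuid status variable on that node); the value below is a placeholder:

    mysql> SET GLOBAL wsrep_provider_options=\"evs.evict=<node UUID>\";\n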

    "},{"location":"wsrep-provider-index.html#evsinactive_check_period","title":"evs.inactive_check_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0.5S

    This variable defines how often to check for peer inactivity.

    "},{"location":"wsrep-provider-index.html#evsinactive_timeout","title":"evs.inactive_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT15S

    This variable defines the inactivity limit. Once this limit is reached, the node is considered dead.

    "},{"location":"wsrep-provider-index.html#evsinfo_log_mask","title":"evs.info_log_mask","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable is used for controlling the extra EVS info logging.

    "},{"location":"wsrep-provider-index.html#evsinstall_timeout","title":"evs.install_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT7.5S

    This variable defines the timeout on waiting for install message acknowledgments.

    "},{"location":"wsrep-provider-index.html#evsjoin_retrans_period","title":"evs.join_retrans_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S

    This variable defines how often to retransmit EVS join messages when forming cluster membership.

    "},{"location":"wsrep-provider-index.html#evskeepalive_period","title":"evs.keepalive_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S

    This variable defines how often to emit keepalive beacons (in the absence of any other traffic).

    "},{"location":"wsrep-provider-index.html#evsmax_install_timeouts","title":"evs.max_install_timeouts","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1

    This variable defines how many membership install rounds to try before giving up (total rounds will be evs.max_install_timeouts + 2).

    "},{"location":"wsrep-provider-index.html#evssend_window","title":"evs.send_window","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 10

    This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than the default (for example, 512). The value must not be less than evs.user_send_window.

    "},{"location":"wsrep-provider-index.html#evsstats_report_period","title":"evs.stats_report_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1M

    This variable defines the control period of EVS statistics reporting.

    "},{"location":"wsrep-provider-index.html#evssuspect_timeout","title":"evs.suspect_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S

    This variable defines the inactivity period after which the node is \u201csuspected\u201d to be dead. If all remaining nodes agree on that, the node will be dropped out of the cluster even before evs.inactive_timeout is reached.

    "},{"location":"wsrep-provider-index.html#evsuse_aggregate","title":"evs.use_aggregate","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When this variable is enabled, smaller packets will be aggregated into one.

    "},{"location":"wsrep-provider-index.html#evsuser_send_window","title":"evs.user_send_window","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 4

    This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example, 512).

    "},{"location":"wsrep-provider-index.html#evsversion","title":"evs.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable defines the EVS protocol version. Auto eviction is enabled when this variable is set to 1. The default value 0 is used for backward compatibility.

    "},{"location":"wsrep-provider-index.html#evsview_forget_timeout","title":"evs.view_forget_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: P1D

    This variable defines the timeout after which past views will be dropped from history.

    "},{"location":"wsrep-provider-index.html#gcachedir","title":"gcache.dir","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: datadir

    This variable can be used to define the location of the galera.cache file.

    "},{"location":"wsrep-provider-index.html#gcachefreeze_purge_at_seqno","title":"gcache.freeze_purge_at_seqno","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0

    This variable controls the purging of the gcache and enables retaining more data in it. This variable makes it possible to use IST (Incremental State Transfer) when the node rejoins instead of SST (State Snapshot Transfer).

    Set this variable on an existing node of the cluster (that will continue to be part of the cluster and can act as a potential donor node). This node continues to retain the write-sets and allows restarting the node to rejoin by using IST.

    See also

    Percona Database Performance Blog:

    The gcache.freeze_purge_at_seqno variable takes three values:

    -1 (default)

    No freezing of gcache, the purge operates as normal.

    A valid seqno in gcache

    The freeze purge of write-sets may not be smaller than the selected seqno. The best way to select an optimal value is to use the value of the wsrep_last_applied variable from the node that you plan to shut down.

    now

    The freeze purge of write-sets is no less than the smallest seqno currently in gcache. Using this value results in freezing the gcache purge instantly. Use this value if selecting a valid seqno in gcache is difficult.
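
    For example, to freeze the gcache purge instantly on the node that you plan to keep as a potential donor, you can set the option at runtime:

    mysql> SET GLOBAL wsrep_provider_options=\"gcache.freeze_purge_at_seqno=now\";\n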

    "},{"location":"wsrep-provider-index.html#gcachekeep_pages_count","title":"gcache.keep_pages_count","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0

    This variable is used to limit the number of overflow pages rather than the total memory occupied by all overflow pages. Whenever gcache.keep_pages_count is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest).

    Whenever either the gcache.keep_pages_count or the gcache.keep_pages_size variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.

    "},{"location":"wsrep-provider-index.html#gcachekeep_pages_size","title":"gcache.keep_pages_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: No Default Value: 0

    This variable is used to limit the total size of overflow pages rather than the count of all overflow pages. Whenever gcache.keep_pages_size is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest) until the total size is below the specified value.

    Whenever either the gcache.keep_pages_count or the gcache.keep_pages_size variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.

    "},{"location":"wsrep-provider-index.html#gcachemem_size","title":"gcache.mem_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable has been deprecated in 5.6.22-25.8 and shouldn\u2019t be used as it could cause a node to crash.

    This variable was used to define how much RAM is available for the system.

    "},{"location":"wsrep-provider-index.html#gcachename","title":"gcache.name","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql/galera.cache

    This variable can be used to specify the name of the Galera cache file.

    "},{"location":"wsrep-provider-index.html#gcachepage_size","title":"gcache.page_size","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: 128M

    Size of the page files in page storage. The limit on overall page storage is the size of the disk. Pages are prefixed by gcache.page.

    See also

    "},{"location":"wsrep-provider-index.html#gcacherecover","title":"gcache.recover","text":"Option Description Command line: No Configuration file: Yes Scope: Global Dynamic: No Default value: No

    Attempts to recover a node\u2019s gcache file to a usable state on startup. If the node can successfully recover the gcache file, the node can provide IST to the remaining nodes. This ability can reduce the time needed to bring up the cluster.

    An example of enabling the variable in the configuration file:

    wsrep_provider_options=\"gcache.recover=yes\"\n
    "},{"location":"wsrep-provider-index.html#gcachesize","title":"gcache.size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 128M

    Size of the transaction cache for Galera replication. This defines the size of the galera.cache file, which is used as a source for IST. The bigger the value of this variable, the better the chances that the re-joining node gets IST instead of SST.

    "},{"location":"wsrep-provider-index.html#gcommthread_prio","title":"gcomm.thread_prio","text":"

    Using this option, you can raise the priority of the gcomm thread to a higher level than it normally uses.

    The format for this variable is: <policy>:<priority>. The priority value is an integer.

    other

    Default time-sharing scheduling in Linux. The threads can run\nuntil blocked by an I/O request or preempted by higher priorities or\nsuperior scheduling designations.\n

    fifo

    First-in First-out (FIFO) scheduling. These threads always immediately\npreempt any currently running other, batch or idle threads. They can run\nuntil they are either blocked by an I/O request or preempted by a FIFO thread\nof a higher priority.\n

    rr

    Round-robin scheduling. These threads always preempt any currently running\nother, batch or idle threads. The scheduler allows these threads to run for a\nfixed period of a time. If the thread is still running when this time period is\nexceeded, they are stopped and moved to the end of the list, allowing another\nround-robin thread of the same priority to run in their place. They can\notherwise continue to run until they are blocked by an I/O request or are\npreempted by threads of a higher priority.\n
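
    For example, to run the gcomm thread with round-robin scheduling at priority 2, you can specify the following in my.cnf (the priority value is only an illustration):

    wsrep_provider_options=\"gcomm.thread_prio=rr:2\"\n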

    See also

    For information, see the Galera Cluster documentation

    "},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_threshold","title":"gcs.fc_auto_evict_threshold","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0.75

    Implemented in Percona XtraDB Cluster 8.0.33-25.

    Defines the threshold that must be reached or crossed before a node is evicted from the cluster. This variable is a ratio of the gcs.fc_auto_evict_window variable. The default value is 0.75, but it can be set to any value between 0.0 and 1.0.

    "},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_window","title":"gcs.fc_auto_evict_window","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0

    Implemented in Percona XtraDB Cluster 8.0.33-25.

    The variable defines the width of the time window within which flow control events are observed. The time span of the window is [now - gcs.fc_auto_evict_window, now], and the window constantly moves ahead as time passes. If, within this window, the total flow control time is greater than or equal to (gcs.fc_auto_evict_window * gcs.fc_auto_evict_threshold), the node leaves the cluster on its own.

    The default value is 0, which means that the feature is disabled.

    The maximum value is DBL_MAX.

    "},{"location":"wsrep-provider-index.html#gcsfc_debug","title":"gcs.fc_debug","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable specifies the number of writesets after which the debug statistics about SST flow control are posted.

    "},{"location":"wsrep-provider-index.html#gcsfc_factor","title":"gcs.fc_factor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    This variable is used for replication flow control. Replication is resumed when the replica queue drops below gcs.fc_factor * gcs.fc_limit.

    "},{"location":"wsrep-provider-index.html#gcsfc_limit","title":"gcs.fc_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 100

    This variable is used for replication flow control. Replication is paused when the replica queue exceeds this limit. In the default operation mode, the flow control limit is dynamically recalculated based on the number of nodes in the cluster. This recalculation can be turned off with the gcs.fc_master_slave variable so that a manually set gcs.fc_limit takes effect (for example, in configurations where writes go to a single node in Percona XtraDB Cluster).
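
    For example, to raise the flow control limit and resume replication when the queue drops to 80 percent of it, you can set both options at runtime (the values are only an illustration):

    mysql> SET GLOBAL wsrep_provider_options=\"gcs.fc_limit=160;gcs.fc_factor=0.8\";\n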

    "},{"location":"wsrep-provider-index.html#gcsfc_master_slave","title":"gcs.fc_master_slave","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: NO Default Value: NO

    This variable is used to specify if there is only one source node in the cluster. It affects whether flow control limit is recalculated dynamically (when NO) or not (when YES).

    "},{"location":"wsrep-provider-index.html#gcsmax_packet_size","title":"gcs.max_packet_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 64500

    This variable is used to specify the writeset size after which writesets are fragmented.

    "},{"location":"wsrep-provider-index.html#gcsmax_throttle","title":"gcs.max_throttle","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25

    This variable specifies how much the replication can be throttled during the state transfer in order to avoid running out of memory. The value can be set to 0.0 if stopping replication is acceptable in order to finish the state transfer.

    "},{"location":"wsrep-provider-index.html#gcsrecv_q_hard_limit","title":"gcs.recv_q_hard_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 9223372036854775807

    This variable specifies the maximum allowed size of the receive queue. This should normally be (RAM + swap) / 2. If this limit is exceeded, Galera will abort the server.

    "},{"location":"wsrep-provider-index.html#gcsrecv_q_soft_limit","title":"gcs.recv_q_soft_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25

    This variable specifies the fraction of the gcs.recv_q_hard_limit after which replication rate will be throttled.

    "},{"location":"wsrep-provider-index.html#gcssync_donor","title":"gcs.sync_donor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No

    This variable controls if the rest of the cluster should be in sync with the donor node. When this variable is set to YES, the whole cluster will be blocked if the donor node is blocked with SST.

    "},{"location":"wsrep-provider-index.html#gmcastlisten_addr","title":"gmcast.listen_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: tcp://0.0.0.0:4567

    This variable defines the address on which the node listens to connections from other nodes in the cluster.

    "},{"location":"wsrep-provider-index.html#gmcastmcast_addr","title":"gmcast.mcast_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: None

    Set this variable if UDP multicast should be used for replication.

    "},{"location":"wsrep-provider-index.html#gmcastmcast_ttl","title":"gmcast.mcast_ttl","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1

    This variable can be used to define TTL for multicast packets.

    "},{"location":"wsrep-provider-index.html#gmcastpeer_timeout","title":"gmcast.peer_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S

    This variable specifies the connection timeout to initiate message relaying.

    "},{"location":"wsrep-provider-index.html#gmcastsegment","title":"gmcast.segment","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable specifies the group segment this member should be a part of. Members of the same segment are treated as equally physically close.

    "},{"location":"wsrep-provider-index.html#gmcasttime_wait","title":"gmcast.time_wait","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S

    This variable specifies the time to wait before allowing a peer that was declared outside of the stable view to reconnect.

    "},{"location":"wsrep-provider-index.html#gmcastversion","title":"gmcast.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable shows which gmcast protocol version is being used.

    "},{"location":"wsrep-provider-index.html#istrecv_addr","title":"ist.recv_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address

    This variable specifies the address on which the node listens for Incremental State Transfer (IST).

    "},{"location":"wsrep-provider-index.html#pcannounce_timeout","title":"pc.announce_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S

    Cluster joining announcements are sent every \u00bd second for this period of time or less if other nodes are discovered.

    "},{"location":"wsrep-provider-index.html#pcchecksum","title":"pc.checksum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    This variable controls whether replicated messages should be checksummed or not.

    "},{"location":"wsrep-provider-index.html#pcignore_quorum","title":"pc.ignore_quorum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false

    When this variable is set to TRUE, the node will completely ignore quorum calculations. This should be used with extreme caution even in source-replica setups, because replicas will not automatically reconnect to the source in this case.

    "},{"location":"wsrep-provider-index.html#pcignore_sb","title":"pc.ignore_sb","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: false

    When this variable is set to TRUE, the node will process updates even in the case of a split brain. This should be used with extreme caution in a multi-source setup, but it should simplify things in a source-replica cluster (especially if only two nodes are used).

    "},{"location":"wsrep-provider-index.html#pclinger","title":"pc.linger","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT20S

    This variable specifies the period for which the PC protocol waits for EVS termination.

    "},{"location":"wsrep-provider-index.html#pcnpvo","title":"pc.npvo","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false

    When this variable is set to TRUE, more recent primary components override older ones in case of conflicting primaries.

    "},{"location":"wsrep-provider-index.html#pcrecovery","title":"pc.recovery","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When this variable is set to true, the node stores the Primary Component state to disk. The Primary Component can then recover automatically when all nodes that were part of the last saved state re-establish communication with each other. This feature allows automatic recovery from full cluster crashes, such as in the case of a data center power outage. A subsequent graceful full cluster restart will require explicit bootstrapping for a new Primary Component.

    "},{"location":"wsrep-provider-index.html#pcversion","title":"pc.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This status variable is used to check which PC protocol version is used.

    "},{"location":"wsrep-provider-index.html#pcwait_prim","title":"pc.wait_prim","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When set to TRUE, the node waits for a primary component for the period of time specified in pc.wait_prim_timeout. This is useful to bring up a non-primary component and make it primary with pc.bootstrap.
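
    For example, to turn a non-primary component into a primary one on a chosen node, you can set pc.bootstrap at runtime on that node:

    mysql> SET GLOBAL wsrep_provider_options=\"pc.bootstrap=true\";\n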

    "},{"location":"wsrep-provider-index.html#pcwait_prim_timeout","title":"pc.wait_prim_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT30S

    This variable is used to specify the period of time to wait for a primary component.

    "},{"location":"wsrep-provider-index.html#pcwait_restored_prim_timeout","title":"pc.wait_restored_prim_timeout","text":"

    Introduced in Percona XtraDB Cluster 8.0.33-25.

    Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic No Default Value: PT0S

    This variable specifies the wait period for a primary component when the cluster restores the primary component from the gvwstate.dat file after an outage.

    The default value is PT0S (zero seconds), which means the node waits for an infinite time (this preserves the previous behavior).

    You can define a wait time with PTNS, where N is the number of seconds. For example, to wait for 90 seconds, set the value to PT90S.
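
    For example, to make the node wait 90 seconds for the restored primary component, add the following to my.cnf:

    wsrep_provider_options=\"pc.wait_restored_prim_timeout=PT90S\"\n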

    "},{"location":"wsrep-provider-index.html#pcweight","title":"pc.weight","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    This variable specifies the node weight that\u2019s going to be used for Weighted Quorum calculations.

    "},{"location":"wsrep-provider-index.html#protonetbackend","title":"protonet.backend","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: asio

    This variable is used to define which transport backend should be used. Currently only ASIO is supported.

    "},{"location":"wsrep-provider-index.html#protonetversion","title":"protonet.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This status variable is used to check which transport backend protocol version is used.

    "},{"location":"wsrep-provider-index.html#replcausal_read_timeout","title":"repl.causal_read_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S

    This variable specifies the causal read timeout.

    "},{"location":"wsrep-provider-index.html#replcommit_order","title":"repl.commit_order","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 3

    This variable is used to specify out-of-order committing (which is used to improve parallel applying performance). The following values are available:

    "},{"location":"wsrep-provider-index.html#replkey_format","title":"repl.key_format","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: FLAT8

    This variable is used to specify the replication key format. The following values are available:

    "},{"location":"wsrep-provider-index.html#replmax_ws_size","title":"repl.max_ws_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2147483647

    This variable is used to specify the maximum size of a write-set in bytes. This is limited to 2 gigabytes.

    "},{"location":"wsrep-provider-index.html#replproto_max","title":"repl.proto_max","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 7

    This variable is used to specify the highest communication protocol version to accept in the cluster. Used only for debugging.

    "},{"location":"wsrep-provider-index.html#socketchecksum","title":"socket.checksum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2

    This variable is used to choose the checksum algorithm for network packets. The CRC32-C option is optimized and may be hardware accelerated on Intel CPUs. The following values are available:

    The following is an example of the variable use:

    wsrep_provider_options=\"socket.checksum=2\"\n
    "},{"location":"wsrep-provider-index.html#socketssl","title":"socket.ssl","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No

    This variable is used to specify if SSL encryption should be used.

    "},{"location":"wsrep-provider-index.html#socketssl_ca","title":"socket.ssl_ca","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the Certificate Authority (CA) certificate file.

    "},{"location":"wsrep-provider-index.html#socketssl_cert","title":"socket.ssl_cert","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the server\u2019s certificate file (in PEM format).

    "},{"location":"wsrep-provider-index.html#socketssl_key","title":"socket.ssl_key","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the server\u2019s private key file (in PEM format).
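
    When pxc_encrypt_cluster_traffic is disabled and you configure encryption manually, the socket.ssl options are typically combined in a single wsrep_provider_options line in my.cnf. The following sketch uses placeholder paths:

    wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/path/to/ca.pem;socket.ssl_cert=/path/to/server-cert.pem;socket.ssl_key=/path/to/server-key.pem\"\n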

    "},{"location":"wsrep-provider-index.html#socketssl_compression","title":"socket.ssl_compression","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: Yes

    This variable is used to specify if the SSL compression is to be used.

    "},{"location":"wsrep-provider-index.html#socketssl_cipher","title":"socket.ssl_cipher","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: AES128-SHA

    This variable is used to specify what cipher will be used for encryption.

    "},{"location":"wsrep-provider-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-status-index.html","title":"Index of wsrep status variables","text":""},{"location":"wsrep-status-index.html#wsrep_apply_oooe","title":"wsrep_apply_oooe","text":"

    This variable shows parallelization efficiency: how often writesets have been applied out of order.

    See also

    Galera status variable: wsrep_apply_oooe

    "},{"location":"wsrep-status-index.html#wsrep_apply_oool","title":"wsrep_apply_oool","text":"

    This variable shows how often a writeset with a higher sequence number was applied before one with a lower sequence number.

    See also

    Galera status variable: wsrep_apply_oool

    "},{"location":"wsrep-status-index.html#wsrep_apply_window","title":"wsrep_apply_window","text":"

    Average distance between highest and lowest concurrently applied sequence numbers.

    See also

    Galera status variable: wsrep_apply_window

    "},{"location":"wsrep-status-index.html#wsrep_causal_reads","title":"wsrep_causal_reads","text":"

    Shows the number of writesets processed while the variable wsrep_causal_reads was set to ON.

    See also

    MySQL wsrep options: wsrep_causal_reads

    "},{"location":"wsrep-status-index.html#wsrep_cert_bucket_count","title":"wsrep_cert_bucket_count","text":"

    This variable shows the number of cells in the certification index hash-table.

    "},{"location":"wsrep-status-index.html#wsrep_cert_deps_distance","title":"wsrep_cert_deps_distance","text":"

    Average distance between highest and lowest sequence number that can be possibly applied in parallel.

    See also

    Galera status variable: wsrep_cert_deps_distance

    "},{"location":"wsrep-status-index.html#wsrep_cert_index_size","title":"wsrep_cert_index_size","text":"

    Number of entries in the certification index.

    See also

    Galera status variable: wsrep_cert_index_size

    "},{"location":"wsrep-status-index.html#wsrep_cert_interval","title":"wsrep_cert_interval","text":"

    Average number of write-sets received while a transaction replicates.

    See also

    Galera status variable: wsrep_cert_interval

    "},{"location":"wsrep-status-index.html#wsrep_cluster_conf_id","title":"wsrep_cluster_conf_id","text":"

    Number of cluster membership changes that have taken place.

    See also

    Galera status variable: wsrep_cluster_conf_id

    "},{"location":"wsrep-status-index.html#wsrep_cluster_size","title":"wsrep_cluster_size","text":"

    Current number of nodes in the cluster.

    See also

    Galera status variable: wsrep_cluster_size

    "},{"location":"wsrep-status-index.html#wsrep_cluster_state_uuid","title":"wsrep_cluster_state_uuid","text":"

    This variable contains the UUID of the cluster state. When this value is the same as the one in wsrep_local_state_uuid, the node is synced with the cluster.

    See also

    Galera status variable: wsrep_cluster_state_uuid

    "},{"location":"wsrep-status-index.html#wsrep_cluster_status","title":"wsrep_cluster_status","text":"

    Status of the cluster component. Possible values are:

    See also

    Galera status variable: wsrep_cluster_status

    "},{"location":"wsrep-status-index.html#wsrep_commit_oooe","title":"wsrep_commit_oooe","text":"

    This variable shows how often a transaction was committed out of order.

    See also

    Galera status variable: wsrep_commit_oooe

    "},{"location":"wsrep-status-index.html#wsrep_commit_oool","title":"wsrep_commit_oool","text":"

    This variable currently has no meaning.

    See also

    Galera status variable: wsrep_commit_oool

    "},{"location":"wsrep-status-index.html#wsrep_commit_window","title":"wsrep_commit_window","text":"

    Average distance between highest and lowest concurrently committed sequence number.

    See also

    Galera status variable: wsrep_commit_window

    "},{"location":"wsrep-status-index.html#wsrep_connected","title":"wsrep_connected","text":"

    This variable shows if the node is connected to the cluster. If the value is OFF, the node has not yet connected to any of the cluster components. This may be due to misconfiguration.

    See also

    Galera status variable: wsrep_connected

    "},{"location":"wsrep-status-index.html#wsrep_evs_delayed","title":"wsrep_evs_delayed","text":"

    Comma separated list of nodes that are considered delayed. The node format is <uuid>:<address>:<count>, where <count> is the number of entries on delayed list for that node.

    See also

    Galera status variable: wsrep_evs_delayed

    "},{"location":"wsrep-status-index.html#wsrep_evs_evict_list","title":"wsrep_evs_evict_list","text":"

    List of UUIDs of the evicted nodes.

    See also

    Galera status variable: wsrep_evs_evict_list

    "},{"location":"wsrep-status-index.html#wsrep_evs_repl_latency","title":"wsrep_evs_repl_latency","text":"

    This status variable provides information regarding group communication replication latency. This latency is measured in seconds from when a message is sent out to when a message is received.

    The format of the output is <min>/<avg>/<max>/<std_dev>/<sample_size>.

    See also

    Galera status variable: wsrep_evs_repl_latency

    "},{"location":"wsrep-status-index.html#wsrep_evs_state","title":"wsrep_evs_state","text":"

    Internal EVS protocol state.

    See also

    Galera status variable: wsrep_evs_state

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval","title":"wsrep_flow_control_interval","text":"

    This variable shows the lower and upper limits for Galera flow control. The upper limit is the maximum allowed number of requests in the queue. If the queue reaches the upper limit, new requests are denied. As existing requests get processed, the queue decreases, and once it reaches the lower limit, new requests will be allowed again.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_high","title":"wsrep_flow_control_interval_high","text":"

    Shows the upper limit for flow control to trigger.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_low","title":"wsrep_flow_control_interval_low","text":"

    Shows the lower limit for flow control to stop.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_paused","title":"wsrep_flow_control_paused","text":"

    The fraction of time since the last status query during which replication was paused due to flow control.

    See also

    Galera status variable: wsrep_flow_control_paused

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_paused_ns","title":"wsrep_flow_control_paused_ns","text":"

    Total time spent in a paused state measured in nanoseconds.

    See also

    Galera status variable: wsrep_flow_control_paused_ns

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_recv","title":"wsrep_flow_control_recv","text":"

    The number of FC_PAUSE events received since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.

    See also

    Galera status variable: wsrep_flow_control_recv

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_requested","title":"wsrep_flow_control_requested","text":"

    This variable returns whether or not a node requested a replication pause.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_sent","title":"wsrep_flow_control_sent","text":"

    The number of FC_PAUSE events sent since the last status query. Unlike most status variables, this counter does not reset each time you run the query. This counter is reset when the server restarts.

    See also

    Galera status variable: wsrep_flow_control_sent

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_status","title":"wsrep_flow_control_status","text":"

    This variable shows whether a node has flow control enabled for normal traffic. It does not indicate the status of flow control during SST.

    "},{"location":"wsrep-status-index.html#wsrep_gcache_pool_size","title":"wsrep_gcache_pool_size","text":"

    This variable shows the size of the page pool and dynamic memory allocated for GCache (in bytes).

    "},{"location":"wsrep-status-index.html#wsrep_gcomm_uuid","title":"wsrep_gcomm_uuid","text":"

    This status variable exposes UUIDs in gvwstate.dat, which are Galera view IDs (thus unrelated to cluster state UUIDs). This UUID is unique for each node. You will need to know this value when using the manual eviction feature.

    See also

    Galera status variable: wsrep_gcomm_uuid

    "},{"location":"wsrep-status-index.html#wsrep_incoming_addresses","title":"wsrep_incoming_addresses","text":"

    Shows the comma-separated list of incoming node addresses in the cluster.

    See also

    Galera status variable: wsrep_incoming_addresses

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_status","title":"wsrep_ist_receive_status","text":"

    This variable displays the progress of IST for the joiner node. If IST is not running, the value is blank. If IST is running, the value is the percentage of the transfer completed.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_end","title":"wsrep_ist_receive_seqno_end","text":"

    The sequence number of the last transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_current","title":"wsrep_ist_receive_seqno_current","text":"

    The sequence number of the current transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_start","title":"wsrep_ist_receive_seqno_start","text":"

    The sequence number of the first transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_last_applied","title":"wsrep_last_applied","text":"

    Sequence number of the last applied transaction.

    "},{"location":"wsrep-status-index.html#wsrep_last_committed","title":"wsrep_last_committed","text":"

    Sequence number of the last committed transaction.

    "},{"location":"wsrep-status-index.html#wsrep_local_bf_aborts","title":"wsrep_local_bf_aborts","text":"

    Number of local transactions that were aborted by replica transactions while being executed.

    See also

    Galera status variable: wsrep_local_bf_aborts

    "},{"location":"wsrep-status-index.html#wsrep_local_cached_downto","title":"wsrep_local_cached_downto","text":"

    The lowest sequence number in GCache. This information can be helpful in determining IST and SST. If the value is 0, there are no writesets in GCache (usual for a single node).

    See also

    Galera status variable: wsrep_local_cached_downto

    "},{"location":"wsrep-status-index.html#wsrep_local_cert_failures","title":"wsrep_local_cert_failures","text":"

    Number of writesets that failed the certification test.

    See also

    Galera status variable: wsrep_local_cert_failures

    "},{"location":"wsrep-status-index.html#wsrep_local_commits","title":"wsrep_local_commits","text":"

    Number of writesets committed on the node.

    See also

    Galera status variable: wsrep_local_commits

    "},{"location":"wsrep-status-index.html#wsrep_local_index","title":"wsrep_local_index","text":"

    Node\u2019s index in the cluster.

    See also

    Galera status variable: wsrep_local_index

    "},{"location":"wsrep-status-index.html#wsrep_local_recv_queue","title":"wsrep_local_recv_queue","text":"

    Current length of the receive queue (that is, the number of writesets waiting to be applied).

    See also

    Galera status variable: wsrep_local_recv_queue

    "},{"location":"wsrep-status-index.html#wsrep_local_recv_queue_avg","title":"wsrep_local_recv_queue_avg","text":"

    Average length of the receive queue since the last status query. When this number is bigger than 0, the node cannot apply writesets as fast as they are received. This could be a sign that the node is overloaded, and it may cause replication throttling.

    See also

    Galera status variable: wsrep_local_recv_queue_avg

    "},{"location":"wsrep-status-index.html#wsrep_local_replays","title":"wsrep_local_replays","text":"

    Number of transaction replays due to asymmetric lock granularity.

    See also

    Galera status variable: wsrep_local_replays

    "},{"location":"wsrep-status-index.html#wsrep_local_send_queue","title":"wsrep_local_send_queue","text":"

    Current length of the send queue (that is, the number of writesets waiting to be sent).

    See also

    Galera status variable: wsrep_local_send_queue

    "},{"location":"wsrep-status-index.html#wsrep_local_send_queue_avg","title":"wsrep_local_send_queue_avg","text":"

    Average length of the send queue since the last status query. When the cluster experiences network throughput issues or replication throttling, this value will be significantly bigger than 0.

    See also

    Galera status variable: wsrep_local_send_queue_avg

    "},{"location":"wsrep-status-index.html#wsrep_local_state","title":"wsrep_local_state","text":"

    Internal Galera cluster FSM state number.

    See also

    Galera status variable: wsrep_local_state

    "},{"location":"wsrep-status-index.html#wsrep_local_state_comment","title":"wsrep_local_state_comment","text":"

    Internal number and the corresponding human-readable comment of the node\u2019s state. Possible values are:

    Num Comment Description 1 Joining Node is joining the cluster 2 Donor/Desynced Node is the donor to the node joining the cluster 3 Joined Node has joined the cluster 4 Synced Node is synced with the cluster
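
    To check the current state of a node, query this status variable from the MySQL client. For example:

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';\n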

    See also

    Galera status variable: wsrep_local_state_comment

    "},{"location":"wsrep-status-index.html#wsrep_local_state_uuid","title":"wsrep_local_state_uuid","text":"

    The UUID of the state stored on the node.

    See also

    Galera status variable: wsrep_local_state_uuid

    "},{"location":"wsrep-status-index.html#wsrep_monitor_status","title":"wsrep_monitor_status","text":"

    The status of the local monitor (local and replicating actions), apply monitor (apply actions of write-sets), and commit monitor (commit actions of write-sets). In the value of this variable, each monitor (L: Local, A: Apply, C: Commit) is represented as a last_entered and last_left pair:

    wsrep_monitor_status (L/A/C)    [ ( 7, 5), (2, 2), ( 2, 2) ]\n

    last_entered

    Shows which transaction or write-set has recently entered the queue.

    last_left

    Shows which last transaction or write-set has been executed and left the queue.

    According to the Galera protocol, transactions can be applied in parallel but must be committed in a given order. This rule implies that there can be multiple transactions in the apply state at a given point of time but transactions are committed sequentially.

    See also

    Galera Documentation: Database replication

    "},{"location":"wsrep-status-index.html#wsrep_protocol_version","title":"wsrep_protocol_version","text":"

    Version of the wsrep protocol used.

    See also

    Galera status variable: wsrep_protocol_version

    "},{"location":"wsrep-status-index.html#wsrep_provider_name","title":"wsrep_provider_name","text":"

    Name of the wsrep provider (usually Galera).

    See also

    Galera status variable: wsrep_provider_name

    "},{"location":"wsrep-status-index.html#wsrep_provider_vendor","title":"wsrep_provider_vendor","text":"

    Name of the wsrep provider vendor (usually Codership Oy)

    See also

    Galera status variable: wsrep_provider_vendor

    "},{"location":"wsrep-status-index.html#wsrep_provider_version","title":"wsrep_provider_version","text":"

    Current version of the wsrep provider.

    See also

    Galera status variable: wsrep_provider_version

    "},{"location":"wsrep-status-index.html#wsrep_ready","title":"wsrep_ready","text":"

    This variable shows if the node is ready to accept queries. If the status is OFF, almost all queries will fail with the ERROR 1047 (08S01) Unknown Command error (unless the wsrep_on variable is set to 0).

    See also

    Galera status variable: wsrep_ready

    "},{"location":"wsrep-status-index.html#wsrep_received","title":"wsrep_received","text":"

    Total number of writesets received from other nodes.

    See also

    Galera status variable: wsrep_received

    "},{"location":"wsrep-status-index.html#wsrep_received_bytes","title":"wsrep_received_bytes","text":"

    Total size (in bytes) of writesets received from other nodes.

    "},{"location":"wsrep-status-index.html#wsrep_repl_data_bytes","title":"wsrep_repl_data_bytes","text":"

    Total size (in bytes) of data replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_keys","title":"wsrep_repl_keys","text":"

    Total number of keys replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_keys_bytes","title":"wsrep_repl_keys_bytes","text":"

    Total size (in bytes) of keys replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_other_bytes","title":"wsrep_repl_other_bytes","text":"

    Total size of other bits replicated.

    "},{"location":"wsrep-status-index.html#wsrep_replicated","title":"wsrep_replicated","text":"

    Total number of writesets sent to other nodes.

    See also

    Galera status variable: wsrep_replicated

    "},{"location":"wsrep-status-index.html#wsrep_replicated_bytes","title":"wsrep_replicated_bytes","text":"

    Total size of replicated writesets. To compute the actual size of bytes sent over network to cluster peers, multiply the value of this variable by the number of cluster peers in the given network segment.

    See also

    Galera status variable: wsrep_replicated_bytes

    "},{"location":"wsrep-status-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-system-index.html","title":"Index of wsrep system variables","text":"

    Percona XtraDB Cluster introduces a number of MySQL system variables related to write-set replication.

    "},{"location":"wsrep-system-index.html#pxc_encrypt_cluster_traffic","title":"pxc_encrypt_cluster_traffic","text":"Option Description Command Line: --pxc-encrypt-cluster-traffic Config File: Yes Scope: Global Dynamic: No Default Value: ON

    Enables automatic configuration of SSL encryption. When disabled, you need to configure SSL manually to encrypt Percona XtraDB Cluster traffic.

    Possible values:

    For more information, see SSL Automatic Configuration.

    "},{"location":"wsrep-system-index.html#pxc_maint_mode","title":"pxc_maint_mode","text":"Option Description Command Line: --pxc-maint-mode Config File: Yes Scope: Global Dynamic: Yes Default Value: DISABLED

    Specifies the maintenance mode for taking a node down without adjusting settings in ProxySQL.

    The following values are available:

    For more information, see Assisted Maintenance Mode.
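
    For example, to put a node into maintenance before taking it down, you can change the mode at runtime:

    mysql> SET GLOBAL pxc_maint_mode=MAINTENANCE;\n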

    "},{"location":"wsrep-system-index.html#pxc_maint_transition_period","title":"pxc_maint_transition_period","text":"Option Description Command Line: --pxc-maint-transition-period Config File: Yes Scope: Global Dynamic: Yes Default Value: 10 (ten seconds)

    Defines the transition period when you change pxc_maint_mode to SHUTDOWN or MAINTENANCE. By default, the period is set to 10 seconds, which should be enough for most transactions to finish. You can increase the value to accommodate longer-running transactions.

    For more information, see Assisted Maintenance Mode.

    "},{"location":"wsrep-system-index.html#pxc_strict_mode","title":"pxc_strict_mode","text":"Option Description Command Line: --pxc-strict-mode Config File: Yes Scope: Global Dynamic: Yes Default Value: ENFORCING or DISABLED

    Controls PXC Strict Mode, which runs validations to avoid the use of experimental and unsupported features in Percona XtraDB Cluster.

    Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:

    By default, pxc_strict_mode is set to ENFORCING. If the node is acting as a standalone server or is bootstrapping, pxc_strict_mode defaults to DISABLED.

    Note

    When changing the value of pxc_strict_mode from DISABLED or PERMISSIVE to ENFORCING or MASTER, ensure that the following configuration is used:

    The SERIALIZABLE method of isolation is not allowed in ENFORCING mode.

    For more information, see PXC Strict Mode.
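
    For example, to log warnings instead of denying operations while you review the validations, you can switch the mode at runtime:

    mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE;\n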

    "},{"location":"wsrep-system-index.html#wsrep_applier_fk_checks","title":"wsrep_applier_FK_checks","text":"Option Description Command Line: --wsrep-applier-FK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_FK_checks variable is deprecated in favor of this variable.

    Defines whether foreign key checking is done for applier threads. This is enabled by default.

    See also

    MySQL wsrep option: wsrep_applier_FK_checks

    "},{"location":"wsrep-system-index.html#wsrep_applier_threads","title":"wsrep_applier_threads","text":"Option Description Command Line: --wsrep-applier-threads Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_threads variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads variable.

    Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.

    Note

    When you decrease the number of threads, the node does not kill them immediately; they stop after they finish applying their current transaction. An increase takes effect immediately.

    If you encounter replication consistency problems, set this variable back to 1 to see whether that resolves the issue. You can increase the default value for better throughput.

    You may want to increase it as suggested in Codership documentation for flow control: when the node is in JOINED state, increasing the number of replica threads can speed up the catchup to SYNCED.

    You can also estimate the optimal value for this from wsrep_cert_deps_distance as suggested in the Galera Cluster documentation.
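
    As a rough sketch, you could check the certification distance and then raise the number of applier threads accordingly; the value 8 is only an illustration:

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';\nmysql> SET GLOBAL wsrep_applier_threads=8; /* illustrative value */\n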

    For more configuration tips, see Setting Parallel Slave Threads.

    See also

    MySQL wsrep option: wsrep_applier_threads

    "},{"location":"wsrep-system-index.html#wsrep_applier_uk_checks","title":"wsrep_applier_UK_checks","text":"Option Description Command Line: --wsrep-applier-UK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_UK_checks variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks variable.

    Defines whether unique key checking is done for applier threads. This is disabled by default.

    See also

    MySQL wsrep option: wsrep_applier_UK_checks

    "},{"location":"wsrep-system-index.html#wsrep_auto_increment_control","title":"wsrep_auto_increment_control","text":"Option Description Command Line: --wsrep-auto-increment-control Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    Enables automatic adjustment of auto-increment system variables depending on the size of the cluster:

    This helps prevent auto-increment replication conflicts across the cluster by giving each node its own range of auto-increment values. It is enabled by default.

    Automatic adjustment may not be desirable, depending on how the application uses and relies on auto-increment values. You can disable it in source-replica clusters.
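
    To see the increments and offsets that the cluster assigns on a node, you can inspect the auto-increment variables; this is only an illustrative check:

    mysql> SHOW VARIABLES LIKE 'auto_increment%';\n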

    See also

    MySQL wsrep option: wsrep_auto_increment_control

    "},{"location":"wsrep-system-index.html#wsrep_causal_reads","title":"wsrep_causal_reads","text":"Option Description Command Line: --wsrep-causal-reads Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: OFF

    In some cases, the source may apply events faster than a replica, which can cause the source and replica to become out of sync for a brief moment. When this variable is set to ON, the replica waits until that event is applied before running any other queries. Enabling this variable results in higher latency.

    Note

    This variable was deprecated because enabling it is the equivalent of setting wsrep_sync_wait to 1.

    See also

    MySQL wsrep option: wsrep_causal_reads

    "},{"location":"wsrep-system-index.html#wsrep_certification_rules","title":"wsrep_certification_rules","text":"Option Description Command Line: --wsrep-certification-rules Config File: Yes Scope: Global Dynamic: Yes Values: STRICT, OPTIMIZED Default Value: STRICT

    This variable controls how certification is done in the cluster, in particular this affects how foreign keys are handled.

    STRICT Two INSERT statements that run at about the same time on two different nodes, inserting different (non-conflicting) rows into a child table, where both rows reference the same row in the parent table, may result in a certification failure.

    OPTIMIZED Two INSERT statements that run at about the same time on two different nodes, inserting different (non-conflicting) rows into a child table, where both rows reference the same row in the parent table, do not result in a certification failure.

    See also

    Galera Cluster Documentation: MySQL wsrep options

    "},{"location":"wsrep-system-index.html#wsrep_certify_nonpk","title":"wsrep_certify_nonPK","text":"Option Description Command Line: --wsrep-certify-nonpk Config File: Yes Scope: Global Dynamic: No Default Value: ON

    Enables automatic generation of primary keys for rows that don\u2019t have them. Write-set replication requires primary keys on all tables to allow for parallel application of transactions. This variable is enabled by default. As a rule, make sure that all tables have primary keys.

    See also

    MySQL wsrep option: wsrep_certify_nonPK

    "},{"location":"wsrep-system-index.html#wsrep_cluster_address","title":"wsrep_cluster_address","text":"Option Description Command Line: --wsrep-cluster-address Config File: Yes Scope: Global Dynamic: Yes

    Defines the back-end schema, IP addresses, ports, and options that the node uses when connecting to the cluster. This variable needs to specify at least one other node\u2019s address, which is alive and a member of the cluster. In practice, it is best (but not necessary) to provide a complete list of all possible cluster nodes. The value should be of the following format:

    <schema>://<address>[?<option1>=<value1>[&<option2>=<value2>]],...\n

    The only back-end schema currently supported is gcomm. The IP address can contain a port number after a colon. Options are specified after ? and separated by &. You can specify multiple addresses separated by commas.

    For example:

    wsrep_cluster_address=\"gcomm://192.168.0.1:4567?gmcast.listen_addr=0.0.0.0:5678\"\n

    If an empty gcomm:// is provided, the node bootstraps itself (that is, forms a new cluster). It is not recommended to keep an empty cluster address in the production configuration after the cluster has been bootstrapped initially. If you want to bootstrap a new cluster with a node, pass the --wsrep-new-cluster option when starting.
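
    In practice, the cluster address usually lists every node; the addresses below are placeholders:

    wsrep_cluster_address=\"gcomm://192.168.0.1,192.168.0.2,192.168.0.3\"\n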

    See also

    MySQL wsrep option: wsrep_cluster_address

    "},{"location":"wsrep-system-index.html#wsrep_cluster_name","title":"wsrep_cluster_name","text":"Option Description Command Line: --wsrep-cluster-name Config File: Yes Scope: Global Dynamic: No Default Value: my_wsrep_cluster

    Specifies the name of the cluster and must be identical on all nodes. A node checks the value when attempting to connect to the cluster. If the names match, the node connects.

    Edit the value in the [galera] section of my.cnf.

    [galera]\n\n    wsrep_cluster_name=simple-cluster\n

    Execute SHOW VARIABLES with the LIKE operator to view the variable:

    mysql> SHOW VARIABLES LIKE 'wsrep_cluster_name';\n
    Expected output
    +--------------------+----------------+\n| Variable_name      | Value          |\n+--------------------+----------------+\n| wsrep_cluster_name | simple-cluster |\n+--------------------+----------------+\n

    Note

    The cluster name should not exceed 32 characters. A node cannot join the cluster if the cluster names do not match. You must re-bootstrap the cluster after a name change.

    See also

    MySQL wsrep option: wsrep_cluster_name

    "},{"location":"wsrep-system-index.html#wsrep_data_home_dir","title":"wsrep_data_home_dir","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql (or whatever path is specified by datadir)

    Specifies the path to the directory where the wsrep provider stores its files (such as grastate.dat).

    See also

    MySQL wsrep option: wsrep_data_home_dir

    "},{"location":"wsrep-system-index.html#wsrep_dbug_option","title":"wsrep_dbug_option","text":"Option Description Command Line: --wsrep-dbug-option Config File: Yes Scope: Global Dynamic: Yes

    Defines DBUG options to pass to the wsrep provider.

    See also

    MySQL wsrep option: wsrep_dbug_option

    "},{"location":"wsrep-system-index.html#wsrep_debug","title":"wsrep_debug","text":"Option Description Command Line: --wsrep-debug Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

    Enables debug-level logging for the database server and wsrep-lib, an integration library for the WSREP API with additional convenience for transaction processing. By default, the wsrep_debug variable is disabled.

    This variable can be used when trying to diagnose problems or when submitting a bug.

    You can set wsrep_debug in the following my.cnf groups:

    This variable may be set to one of the following values:

    NONE

    No debug-level messages.

    SERVER

    Prints wsrep-lib general debug-level messages and detailed debug-level messages from the server_state part, as well as Galera debug-level logs.

    TRANSACTION

    Same as SERVER + wsrep-lib transaction part

    STREAMING

    Same as TRANSACTION + wsrep-lib streaming part

    CLIENT

    Same as STREAMING + wsrep-lib client_service part

    Note

    Do not enable debugging in production environments, because it logs authentication info (that is, passwords).

    See also

    MySQL wsrep option: wsrep_debug

    "},{"location":"wsrep-system-index.html#wsrep_desync","title":"wsrep_desync","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    Defines whether the node should participate in Flow Control. By default, this variable is disabled, meaning that if the receive queue becomes too big, the node engages in Flow Control: it works through the receive queue until it reaches a more manageable size. For more information, see wsrep_local_recv_queue and wsrep_flow_control_interval.

    Enabling this variable will disable Flow Control for the node. It will continue to receive write-sets that it is not able to apply, the receive queue will keep growing, and the node will keep falling behind the cluster indefinitely.

    Toggling this back to OFF will require an IST or an SST, depending on how long it was desynchronized. This is similar to cluster desynchronization, which occurs during RSU TOI. Because of this, it\u2019s not a good idea to enable wsrep_desync for a long period of time or for several nodes at once.

    Note

    You can also desync a node using the /\\*! WSREP_DESYNC \\*/ query comment.

    See also

    MySQL wsrep option: wsrep_desync

    "},{"location":"wsrep-system-index.html#wsrep_dirty_reads","title":"wsrep_dirty_reads","text":"Option Description Command Line: --wsrep-dirty-reads Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: OFF

    Defines whether the node accepts read queries when in a non-operational state, that is, when it loses connection to the Primary Component. By default, this variable is disabled and the node rejects all queries, because there is no way to tell if the data is correct.

    If you enable this variable, the node will permit read queries (USE, SELECT, LOCK TABLE, and UNLOCK TABLES), but any command that modifies or updates the database on a non-operational node will still be rejected (including DDL and DML statements, such as INSERT, DELETE, and UPDATE).

    To avoid deadlock errors, set the wsrep_sync_wait variable to 0 if you enable wsrep_dirty_reads.

    As of Percona XtraDB Cluster 8.0.26-16, you can update the variable with a set_var hint.

    mysql> SELECT @@wsrep_dirty_reads;\n
    Expected output
    +-----------------------+\n| @@wsrep_dirty_reads   |\n+=======================+\n| OFF                   |\n+-----------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_dirty_reads=ON) */ @@wsrep_dirty_reads;\n
    Expected output
    +-----------------------+\n| @@wsrep_dirty_reads   |\n+=======================+\n| ON                    |\n+-----------------------+\n

    See also

    MySQL wsrep option: wsrep_dirty_reads

    "},{"location":"wsrep-system-index.html#wsrep_drupal_282555_workaround","title":"wsrep_drupal_282555_workaround","text":"Option Description Command Line: --wsrep-drupal-282555-workaround Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    Enables a workaround for a MySQL InnoDB bug that affects Drupal (Drupal bug #282555 and MySQL bug #41984). In some cases, duplicate key errors would occur when inserting the DEFAULT value into an AUTO_INCREMENT column.

    See also

    MySQL wsrep option: wsrep_drupal_282555_workaround

    "},{"location":"wsrep-system-index.html#wsrep_forced_binlog_format","title":"wsrep_forced_binlog_format","text":"Option Description Command Line: --wsrep-forced-binlog-format Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

    Defines a binary log format that will always be effective, regardless of the client session binlog_format variable value.

    Possible values for this variable are:

    See also

    MySQL wsrep option: wsrep_forced_binlog_format

    "},{"location":"wsrep-system-index.html#wsrep_ignore_apply_errors","title":"wsrep_ignore_apply_errors","text":"Option Description Command Line: --wsrep-ignore-apply-errors Config File: Yes Scope: Global Dynamic: Yes Default Value: 0

    Defines the rules of wsrep applier behavior on errors. You can change the settings by editing the my.cnf file under [mysqld] or at runtime.
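
    For example, assuming you want to ignore reconciling DDL and DML errors (value 3, chosen only for illustration), you can set the variable in the configuration file:

    [mysqld]\n# 3 = ignore reconciling DDL and DML errors (illustrative value)\nwsrep_ignore_apply_errors=3\n

    Or change it at runtime:

    mysql> SET GLOBAL wsrep_ignore_apply_errors=3; /* illustrative value */\n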

    Note

    In Percona XtraDB Cluster version 8.0.19-10, the default value has changed from 7 to 0. If you have been working with an earlier version of the PXC 8.0 series, you may see different behavior when upgrading to this version or later.

    The variable has the following options:

    Value Description WSREP_IGNORE_ERRORS_NONE All replication errors are treated as errors and will shutdown the node (default behavior) WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL DROP DATABASE, DROP TABLE, DROP INDEX, ALTER TABLE are converted to a warning if they result in ER_DB_DROP_EXISTS, ER_BAD_TABLE_ERROR OR ER_CANT_DROP_FIELD_OR_KEY errors WSREP_IGNORE_ERRORS_ON_RECONCILING_DML DELETE events are treated as warnings if they failed because the deleted row was not found (ER_KEY_NOT_FOUND) WSREP_IGNORE_ERRORS_ON_DDL All DDL errors will be treated as a warning WSREP_IGNORE_ERRORS_MAX Infers WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML and WSREP_IGNORE_ERRORS_ON_DDL

    Setting the variable to a value from 0 to 7 results in the following behavior:

    Setting Behavior 0 WSREP_IGNORE_ERRORS_NONE 1 WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL 2 WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 3 WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 4 WSREP_IGNORE_ERRORS_ON_DDL 5 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL 6 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 7 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML, WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL"},{"location":"wsrep-system-index.html#wsrep_min_log_verbosity","title":"wsrep_min_log_verbosity","text":"Option Description Command Line: --wsrep-min-log-verbosity Config File: Yes Scope: Global Dynamic: Yes Default Value: 3

    This variable defines the minimum logging verbosity of wsrep/Galera and acts in conjunction with the log_error_verbosity variable. The wsrep_min_log_verbosity variable has the same values as log_error_verbosity.

    The actual log verbosity of wsrep/Galera can be greater than the value of wsrep_min_log_verbosity if log_error_verbosity is greater than wsrep_min_log_verbosity.

    A few examples:

    log_error_verbosity wsrep_min_log_verbosity MySQL Logs Verbosity wsrep Logs Verbosity 2 3 system error, warning system error, warning, info 1 3 system error system error, warning, info 1 2 system error system error, warning 3 1 system error, warning, info system error, warning, info

    Note the case where log_error_verbosity=3 and wsrep_min_log_verbosity=1. The actual log verbosity of wsrep/Galera is 3 (system error, warning, info) because log_error_verbosity is greater.

    See also

    MySQL Documentation: log_error_verbosity

    Galera Cluster Documentation: Database Server Logs

    "},{"location":"wsrep-system-index.html#wsrep_load_data_splitting","title":"wsrep_load_data_splitting","text":"Option Description Command Line: --wsrep-load-data-splitting Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    Defines whether the node should split large LOAD DATA transactions. This variable is enabled by default, meaning that LOAD DATA commands are split into transactions of 10 000 rows or less.

    If you disable this variable, then huge data loads may prevent the node from completely rolling the operation back in the event of a conflict, and whatever gets committed stays committed.

    Note

    When enabled, this variable does not work as expected with autocommit=0.

    See also

    MySQL wsrep option: wsrep_load_data_splitting

    "},{"location":"wsrep-system-index.html#wsrep_log_conflicts","title":"wsrep_log_conflicts","text":"Option Description Command Line: --wsrep-log-conflicts Config File: Yes Scope: Global Dynamic: No Default Value: OFF

    Defines whether the node should log additional information about conflicts. By default, this variable is disabled and Percona XtraDB Cluster uses standard logging features in MySQL.

    If you enable this variable, it also logs the table and schema where the conflict occurred, as well as the actual values of the keys that produced the conflict.

    See also

    MySQL wsrep option: wsrep_log_conflicts

    "},{"location":"wsrep-system-index.html#wsrep_max_ws_rows","title":"wsrep_max_ws_rows","text":"Option Description Command Line: --wsrep-max-ws-rows Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 (no limit)

    Defines the maximum number of rows each write-set can contain.

    By default, there is no limit for the maximum number of rows in a write-set. The maximum allowed value is 1048576.

    See also

    MySQL wsrep option: wsrep_max_ws_rows

    "},{"location":"wsrep-system-index.html#wsrep_max_ws_size","title":"wsrep_max_ws_size","text":"Option Description Command Line: --wsrep_max_ws_size Config File: Yes Scope: Global Dynamic: Yes Default Value: 2147483647 (2 GB)

    Defines the maximum write-set size (in bytes). Anything bigger than the specified value will be rejected.

    You can set it to any value between 1024 and the default 2147483647.
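
    For example, a minimal sketch of lowering the limit to 1 GB at runtime; the value is only an illustration:

    mysql> SET GLOBAL wsrep_max_ws_size=1073741824; /* 1 GB, illustrative value */\n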

    See also

    MySQL wsrep option: wsrep_max_ws_size

    "},{"location":"wsrep-system-index.html#wsrep_mode","title":"wsrep_mode","text":"Option Description Command Line: --wsrep-mode Config File: Yes Scope: Global Dynamic: Yes Default Value:

    This variable has been implemented in Percona XtraDB Cluster 8.0.31.

    Defines the node behavior according to a specified value. The value is empty or disabled by default.

    The available values are:

    See also

    MySQL wsrep option: wsrep_mode

    "},{"location":"wsrep-system-index.html#wsrep_node_address","title":"wsrep_node_address","text":"Option Description Command Line: --wsrep-node-address Config File: Yes Scope: Global Dynamic: No Default Value: IP of the first network interface (eth0) and default port (4567)

    Specifies the network address of the node. By default, this variable is set to the IP address of the first network interface (usually eth0 or enp2s0) and the default port (4567).

    While the default value should be correct in most cases, there are situations when you need to specify it manually. For example:

    The value should be specified in the following format:

    <ip_address>[:port]\n
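
    A minimal sketch with a placeholder address and the default Galera port:

    wsrep_node_address=192.168.0.10:4567\n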

    Note

    The value of this variable is also used as the default value for the wsrep_sst_receive_address variable and the ist.recv_addr option.

    See also

    MySQL wsrep option: wsrep_node_address

    "},{"location":"wsrep-system-index.html#wsrep_node_incoming_address","title":"wsrep_node_incoming_address","text":"Option Description Command Line: --wsrep-node-incoming-address Config File: Yes Scope: Global Dynamic: No Default Value: AUTO

    Specifies the network address from which the node expects client connections. By default, it uses the IP address from wsrep_node_address and port number 3306.

    This information is used for the wsrep_incoming_addresses variable which shows all active cluster nodes.

    See also

    MySQL wsrep option: wsrep_node_incoming_address

    "},{"location":"wsrep-system-index.html#wsrep_node_name","title":"wsrep_node_name","text":"Option Description Command Line: --wsrep-node-name Config File: Yes Scope: Global Dynamic: Yes Default Value: The node\u2019s host name

    Defines a unique name for the node. Defaults to the host name.

    In many situations, you can use the value of this variable to identify the given node in the cluster as an alternative to using the node address (the value of wsrep_node_address).

    Note

    The wsrep_sst_donor variable is an example where only the value of wsrep_node_name may be used; the node address is not permitted.

    "},{"location":"wsrep-system-index.html#wsrep_notify_cmd","title":"wsrep_notify_cmd","text":"Option Description Command Line: --wsrep-notify-cmd Config File: Yes Scope: Global Dynamic: No

    Specifies the notification command that the node should execute whenever cluster membership or local node status changes. This can be used for alerting or to reconfigure load balancers.

    Note

    The node will block and wait until the command or script completes and returns before it can proceed. If the script performs any potentially blocking or long-running operations, such as network communication, you should consider initiating such operations in the background and have the script return immediately.

    See also

    MySQL wsrep option: wsrep_notify_cmd

    "},{"location":"wsrep-system-index.html#wsrep_on","title":"wsrep_on","text":"Option Description Command Line: No Config File: No Scope: Session Dynamic: Yes Default Value: ON

    Defines if current session transaction changes for a node are replicated to the cluster.

    If set to OFF for a session, no transaction changes are replicated in that session. The setting does not cause the node to leave the cluster, and the node communicates with other nodes.

    See also

    MySQL wsrep option: wsrep_on

    "},{"location":"wsrep-system-index.html#wsrep_osu_method","title":"wsrep_OSU_method","text":"Option Description Command Line: --wsrep-OSU-method Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: TOI

    Defines the method for Online Schema Upgrade that the node uses to replicate DDL statements.

    For information on the available methods, see Online Schema upgrade and for information on Non-blocking operations, see NBO.

    See also

    MySQL wsrep option: wsrep_OSU_method

    "},{"location":"wsrep-system-index.html#wsrep_provider","title":"wsrep_provider","text":"Option Description Command Line: --wsrep-provider Config File: Yes Scope: Global Dynamic: No

    Specifies the path to the Galera library. This is usually /usr/lib64/libgalera_smm.so on CentOS/RHEL and /usr/lib/libgalera_smm.so on Debian/Ubuntu.

    If you do not specify a path or the value is not valid, the node behaves as a standalone instance of MySQL.
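
    For example, on a Red Hat-based system the setting usually looks like the following; adjust the path to your platform:

    [mysqld]\nwsrep_provider=/usr/lib64/libgalera_smm.so\n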

    See also

    MySQL wsrep option: wsrep_provider

    "},{"location":"wsrep-system-index.html#wsrep_provider_options","title":"wsrep_provider_options","text":"Option Description Command Line: --wsrep-provider-options Config File: Yes Scope: Global Dynamic: No

    Specifies optional settings for the replication provider documented in Index of wsrep_provider options. These options affect how various situations are handled during replication.
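
    A minimal sketch of passing provider options; gcache.size and gcs.fc_limit are standard Galera options, and the values shown are only illustrative:

    [mysqld]\nwsrep_provider_options=\"gcache.size=1G;gcs.fc_limit=128\"\n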

    See also

    MySQL wsrep option: wsrep_provider_options

    "},{"location":"wsrep-system-index.html#wsrep_recover","title":"wsrep_recover","text":"Option Description Command Line: --wsrep-recover Config File: Yes Scope: Global Dynamic: No Default Value: OFF Location: mysqld_safe`

    Recovers the database state after a crash by parsing the GTID from the log. If the GTID is found, it is assigned as the initial position for the server.

    "},{"location":"wsrep-system-index.html#wsrep_reject_queries","title":"wsrep_reject_queries","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

    Defines whether the node should reject queries from clients. Rejecting queries can be useful during upgrades, when you want to keep the node up and apply write-sets without accepting queries.

    When a query is rejected, the following error is returned:

    Error 1047: Unknown command\n

    The following values are available:

    Note

    This variable doesn\u2019t affect Galera replication in any way; only the applications that connect to the database are affected. If you want to desync a node, use wsrep_desync.

    See also

    MySQL wsrep option: wsrep_reject_queries

    "},{"location":"wsrep-system-index.html#wsrep_replicate_myisam","title":"wsrep_replicate_myisam","text":"Option Description Command Line: --wsrep-replicate-myisam Config File: Yes Scope: Session, Global Dynamic: No Default Value: OFF

    Defines whether DML statements for MyISAM tables should be replicated. It is disabled by default, because MyISAM replication is still experimental.

    On the global level, wsrep_replicate_myisam can be set only during startup. On session level, you can change it during runtime as well.

    For older nodes in the cluster, wsrep_replicate_myisam should work, since the TOI decision (for MyISAM DDL) is made on the originating node. Mixing non-MyISAM and MyISAM tables in the same DDL statement is not recommended when wsrep_replicate_myisam is disabled, because if any table in the list is MyISAM, the whole DDL statement is not put under TOI.

    Note

    You should keep in mind the following when using MyISAM replication:

    "},{"location":"wsrep-system-index.html#wsrep_restart_replica","title":"wsrep_restart_replica","text":"Option Description Command Line: --wsrep-restart-replica Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave variable is deprecated in favor of this variable.

    Defines whether the replication replica should be restarted when the node rejoins the cluster. Enabling this can be useful because the asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in a non-primary state.

    See also

    MySQL wsrep option: wsrep_restart_slave

    "},{"location":"wsrep-system-index.html#wsrep_restart_slave","title":"wsrep_restart_slave","text":"Option Description Command Line: --wsrep-restart-slave Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave variable is deprecated and may be removed in later versions. Use wsrep_restart_replica.

    Defines whether the replication replica should be restarted when the node rejoins the cluster. Enabling this can be useful because the asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in a non-primary state.

    "},{"location":"wsrep-system-index.html#wsrep_retry_autocommit","title":"wsrep_retry_autocommit","text":"Option Description Command Line: --wsrep-retry-autocommit Config File: Yes Scope: Global Dynamic: No Default Value: 1

    Specifies the number of times autocommit transactions are retried in the cluster if they encounter certification errors. In case of a conflict, it should be safe for the cluster node to simply retry the statement without returning an error to the client, in the expectation that it will pass the next time.

    This can be useful to help an application using autocommit to avoid deadlock errors that can be triggered by replication conflicts.

    If this variable is set to 0, autocommit transactions won\u2019t be retried.
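
    Because this variable is not dynamic, it is set in the configuration file; the value 4 below is only an illustration:

    [mysqld]\n# illustrative value\nwsrep_retry_autocommit=4\n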

    See also

    MySQL wsrep option: wsrep_retry_autocommit

    "},{"location":"wsrep-system-index.html#wsrep_rsu_commit_timeout","title":"wsrep_RSU_commit_timeout","text":"Option Description Command Line: --wsrep-RSU-commit-timeout Config File: Yes Scope: Global Dynamic: Yes Default Value: 5000 Range: From 5000 (5 milliseconds) to 31536000000000 (365 days)

    Specifies the timeout in microseconds that allows an active connection to complete its COMMIT before RSU starts.

    While running RSU, it is expected that the user has isolated the node and that no active traffic is executing on it. RSU has a check to ensure this and waits for any active connection in the COMMIT state before starting.

    By default, this check has a timeout of 5 milliseconds, but in some cases the COMMIT takes longer. This variable sets the timeout; the allowed values range from 5 milliseconds to 365 days, and the value is specified in microseconds.
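
    For example, to allow commits up to 50 milliseconds to complete before RSU starts (50000 microseconds, an illustrative value):

    mysql> SET GLOBAL wsrep_RSU_commit_timeout=50000; /* 50 ms, illustrative value */\n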

    Note

    The RSU operation does not automatically stop the node from receiving active traffic. A continuous flow of active traffic while RSU continues to wait can result in RSU starvation. You are expected to block active traffic on the node while performing the RSU operation.

    "},{"location":"wsrep-system-index.html#wsrep_slave_fk_checks","title":"wsrep_slave_FK_checks","text":"Option Description Command Line: --wsrep-slave-FK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_FK_checks variable.

    Defines whether foreign key checking is done for applier threads. This is enabled by default.

    "},{"location":"wsrep-system-index.html#wsrep_slave_threads","title":"wsrep_slave_threads","text":"Option Description Command Line: --wsrep-slave-threads Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads variable.

    Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.

    Note

    When you decrease the number of threads, the node does not kill them immediately; they stop after they finish applying their current transaction. An increase takes effect immediately.

    If you encounter replication consistency problems, set this variable back to 1 to see whether that resolves the issue. You can increase the default value for better throughput.

    You may want to increase it as suggested in Codership documentation for flow control: when the node is in JOINED state, increasing the number of replica threads can speed up the catchup to SYNCED.

    You can also estimate the optimal value for this from wsrep_cert_deps_distance as suggested in the Galera Cluster documentation.

    For more configuration tips, see this document.

    "},{"location":"wsrep-system-index.html#wsrep_slave_uk_checks","title":"wsrep_slave_UK_checks","text":"Option Description Command Line: --wsrep-slave-UK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks variable.

    Defines whether unique key checking is done for applier threads. This is disabled by default.

    "},{"location":"wsrep-system-index.html#wsrep_sr_store","title":"wsrep_SR_store","text":"Option Description Command Line: --wsrep-sr-store Config File: Yes Scope: Global Dynamic: No Default Value: table

    Defines storage for streaming replication fragments. The available values are table, the default value, and none, which disables the variable.

    "},{"location":"wsrep-system-index.html#wsrep_sst_allowed_methods","title":"wsrep_sst_allowed_methods","text":"Option Description Command Line: --wsrep_sst_allowed_methods Config File: Yes Scope: Global Dynamic: No Default Value: xtrabackup-v2

    Percona XtraDB Cluster 8.0.20-11.3 adds this variable.

    This variable limits the SST methods that the server accepts for the wsrep_sst_method variable. The default value is xtrabackup-v2.

    "},{"location":"wsrep-system-index.html#wsrep_sst_donor","title":"wsrep_sst_donor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes

    Specifies a list of nodes (using their wsrep_node_name values) that the current node should prefer as donors for SST and IST.

    Warning

    Using IP addresses of nodes instead of node names (the value of wsrep_node_name) as values of wsrep_sst_donor results in an error.

    [ERROR] WSREP: State transfer request failed unrecoverably: 113 (No route\nto host). Most likely it is due to inability to communicate with the\ncluster primary component. Restart required.\n

    If the value is empty, the first node in SYNCED state in the index becomes the donor and will not be able to serve requests during the state transfer.

    To consider other nodes if the listed nodes are not available, add a comma at the end of the list, for example:

    wsrep_sst_donor=node1,node2,\n

    If you remove the trailing comma from the previous example, then the joining node will consider only node1 and node2.

    Note

    By default, the joiner node does not wait for more than 100 seconds to receive the first packet from a donor. This is implemented via the sst-initial-timeout option. If you set the list of preferred donors without the trailing comma or believe that all nodes in the cluster can often be unavailable for SST (this is common for small clusters), then you may want to increase the initial timeout (or disable it completely if you don\u2019t mind the joiner node waiting for the state transfer indefinitely).

    See also

    MySQL wsrep option: wsrep_sst_donor

    "},{"location":"wsrep-system-index.html#wsrep_sst_method","title":"wsrep_sst_method","text":"Option Description Command Line: --wsrep-sst-method Config File: Yes Scope: Global Dynamic: Yes Default Value: xtrabackup-v2

    Defines the method or script for State Snapshot Transfer (SST).

    Available values are:

    Note

    xtrabackup-v2 provides support for clusters with GTIDs and async replicas.

    See also

    MySQL wsrep option: wsrep_sst_method

    "},{"location":"wsrep-system-index.html#wsrep_sst_receive_address","title":"wsrep_sst_receive_address","text":"Option Description Command Line: --wsrep-sst-receive-address Config File: Yes Scope: Global Dynamic: Yes Default Value: AUTO

    Specifies the network address where donor node should send state transfers. By default, this variable is set to AUTO, meaning that the IP address from wsrep_node_address is used.

    See also

    MySQL wsrep option: wsrep_sst_receive_address

    "},{"location":"wsrep-system-index.html#wsrep_start_position","title":"wsrep_start_position","text":"Option Description Command Line: --wsrep-start-position Config File: Yes Scope: Global Dynamic: Yes Default Value: 00000000-0000-0000-0000-00000000000000:-1

    Specifies the node\u2019s start position as UUID:seqno. By setting all the nodes to have the same value for this variable, the cluster can be set up without the state transfer.

    See also

    MySQL wsrep option: wsrep_start_position

    "},{"location":"wsrep-system-index.html#wsrep_sync_wait","title":"wsrep_sync_wait","text":"Option Description Command Line: --wsrep-sync-wait Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: 0

    Controls cluster-wide causality checks on certain statements. Checks ensure that the statement is executed on a node that is fully synced with the cluster.

    As of Percona XtraDB Cluster 8.0.26-16, you can update the variable with a set_var hint.

       mysql> SELECT @@wsrep_sync_wait;\n
    Expected output
    +---------------------+\n| @@wsrep_sync_wait   |\n+=====================+\n| 3                   |\n+---------------------+\n
       mysql> SELECT /*+ SET_VAR(wsrep_sync_wait=7) */ @@wsrep_sync_wait;\n
    Expected output
    +---------------------+\n| @@wsrep_sync_wait   |\n+=====================+\n| 7                   |\n+---------------------+\n

    Note

    Causality checks of any type can result in increased latency.

    The type of statements to undergo checks is determined by bitmask:

    Note

    Setting wsrep_sync_wait to 1 is the equivalent of setting the deprecated wsrep_causal_reads to ON.

    See also

    MySQL wsrep option: wsrep_sync_wait

    "},{"location":"wsrep-system-index.html#wsrep_trx_fragment_size","title":"wsrep_trx_fragment_size","text":"Option Description Command Line: --wsrep-trx-fragment-size Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: 0

    Defines the streaming replication fragment size. This variable is measured in the unit defined by wsrep_trx_fragment_unit. The minimum value is 0 and the maximum value is 2147483647.

    As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.

    mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| statements                   |\n+------------------------------+\n| @@wsrep_trx_fragment_size    |\n+------------------------------+\n| 3                            |\n+------------------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_size=5) */ @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_size    |\n+==============================+\n| 5                            |\n+------------------------------+\n

    You can also use set_var() in a data manipulation language (DML) statement. This ability is useful when streaming large statements within a transaction.

    node1> BEGIN;\nQuery OK, 0 rows affected (0.00 sec)\n\nnode1> INSERT /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ INTO t1 SELECT * FROM t1; \nQuery OK, 65536 rows affected (15.15 sec)\nRecords: 65536 Duplicates: 0 Warnings: 0\n\nnode1> UPDATE /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ t1 SET i=2;\nQuery OK, 131072 rows affected (1 min 35.93 sec)\nRows matched: 131072 Changed: 131072 Warnings: 0\n\nnode2> SET SESSION TRANSACTION_ISOLATION = 'READ-UNCOMMITTED';\nQuery OK, 0 rows affected (0.00 sec)\n\nnode2> SELECT * FROM t1 LIMIT 5;\n+---+\n| i |\n+===+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\nnode1> DELETE  /*+SET_VAR(wsrep_trx_fragment_size = 10000)*/ FROM t1;\nQuery OK, 131072 rows affected (15.09 sec)\n
    "},{"location":"wsrep-system-index.html#wsrep_trx_fragment_unit","title":"wsrep_trx_fragment_unit","text":"Option Description Command Line: --wsrep-trx-fragment-unit Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: \u201cbytes\u201d

    Defines the type of measure for the wsrep_trx_fragment_size. The possible values are: bytes, rows, statements.

    As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.

    mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| statements                   |\n+------------------------------+\n| @@wsrep_trx_fragment_size    |\n+------------------------------+\n| 3                            |\n+------------------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_unit=rows) */ @@wsrep_trx_fragment_unit;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| rows                         |\n+------------------------------+\n
    "},{"location":"wsrep-system-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"xtrabackup-sst.html","title":"Percona XtraBackup SST configuration","text":"

    Percona XtraBackup SST works in two stages:

    1. First it identifies the type of data transfer based on the presence of the xtrabackup_ist file on the joiner node.

    2. Then it starts data transfer. In case of SST, it empties the data directory except for some files (galera.cache, sst_in_progress, grastate.dat) and then proceeds with SST.

      In case of IST, it proceeds as before.

    "},{"location":"xtrabackup-sst.html#sst-options","title":"SST options","text":"

    The following options specific to SST can be used in my.cnf under [sst].

    Note

    "},{"location":"xtrabackup-sst.html#streamfmt","title":"streamfmt","text":"Parameter Description Values: xbstream Default: xbstream Match: Yes

    Used to specify the Percona XtraBackup streaming format. The only option is the xbstream format. SST fails and generates an error when another format, such as tar, is used.

    For more information about the xbstream format, see The xbstream Binary.

    "},{"location":"xtrabackup-sst.html#transferfmt","title":"transferfmt","text":"Parameter Description Values: socat, nc Default: socat Match: Yes

    Used to specify the data transfer format. The recommended value is the default transferfmt=socat because it allows for socket options, such as transfer buffer sizes. For more information, see socat(1).
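
    A minimal [sst] sketch that makes the recommended transfer format explicit:

    [sst]\ntransferfmt=socat\n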

    Note

    Using transferfmt=nc does not support the SSL-based encryption mode (value 4 for the encrypt option).

    "},{"location":"xtrabackup-sst.html#ssl-ca","title":"ssl-ca","text":"

    Example: ssl-ca=/etc/ssl/certs/mycert.crt

    Specifies the absolute path to the certificate authority (CA) file for socat encryption based on OpenSSL.

    "},{"location":"xtrabackup-sst.html#ssl-cert","title":"ssl-cert","text":"

    Example: ssl-cert=/etc/ssl/certs/mycert.pem

    Specifies the full path to the certificate file in the PEM format for socat encryption based on OpenSSL.

    Note

    For more information about ssl-ca and ssl-cert, see https://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The ssl-ca is essentially a self-signed certificate in that example, and ssl-cert is the PEM file generated after concatenation of the key and the certificate generated earlier. The names of options were chosen to be compatible with socat parameter names as well as with MySQL\u2019s SSL authentication. For testing you can also download certificates from launchpad.

    Note

    Irrespective of what is shown in the example, you can use the same .crt and .pem files on all nodes and it will work, since there is no server-client paradigm here, but rather a cluster with homogeneous nodes.

    "},{"location":"xtrabackup-sst.html#ssl-key","title":"ssl-key","text":"

    Example: ssl-key=/etc/ssl/keys/key.pem

    Used to specify the full path to the private key in PEM format for socat encryption based on OpenSSL.

    "},{"location":"xtrabackup-sst.html#encrypt","title":"encrypt","text":"Parameter Description Values: 0, 4 Default: 4 Match: Yes

    Enables SST encryption mode in Percona XtraBackup:

    Considering that you have all three necessary files, configure the [sst] section as follows:

    [sst]\nencrypt=4\nssl-ca=ca.pem\nssl-cert=server-cert.pem\nssl-key=server-key.pem\n

    For more information, see Encrypting PXC Traffic.

    "},{"location":"xtrabackup-sst.html#sockopt","title":"sockopt","text":"

    Used to specify key/value pairs of socket options, separated by commas, for example:

    [sst]\nsockopt=\"retry=2,interval=3\"\n

    The previous example causes socat to try to connect three times (initial attempt and two retries with a 3-second interval between attempts).

    This option only applies when socat is used (transferfmt=socat). For more information about socket options, see socat (1).

    Note

    You can also enable SSL based compression with sockopt. This can be used instead of the Percona XtraBackup compress option.

    "},{"location":"xtrabackup-sst.html#ncsockopt","title":"ncsockopt","text":"

    Used to specify socket options for the netcat transfer format (transferfmt=nc).

    "},{"location":"xtrabackup-sst.html#progress","title":"progress","text":"

    Values: 1, path/to/file

    Used to specify where to write SST progress. If set to 1, it writes to MySQL stderr. Alternatively, you can specify the full path to a file. If this is a FIFO, it needs to exist and be open on the reader end before the transfer starts; otherwise, wsrep_sst_xtrabackup will block indefinitely.

    Note

    A value of 0 is not valid.

    "},{"location":"xtrabackup-sst.html#rebuild","title":"rebuild","text":"Parameter Description Values: 0, 1 Default: 0

    Used to enable rebuilding of indexes on the joiner node. This is independent of compaction, though compaction enables it. Rebuilding indexes may be used as an optimization.

    Note

    #1192834 affects this option.

    "},{"location":"xtrabackup-sst.html#time","title":"time","text":"Parameter Description Values: 0, 1 Default: 0

    Enabling this option instruments key stages of backup and restore in SST.

    "},{"location":"xtrabackup-sst.html#rlimit","title":"rlimit","text":"

    Example: rlimit=128k

    Used to set a rate limit in bytes. Add a suffix (k, m, g, t) to specify units. For example, 128k is 128 kilobytes. For more information, see pv(1).

    Note

    The rate is limited on the donor node. The rationale behind this is to avoid letting SST saturate the donor\u2019s regular cluster operations, or to limit the rate for other purposes.

    "},{"location":"xtrabackup-sst.html#use_extra","title":"use_extra","text":"Parameter Description Values: 0, 1 Default: 0

    Used to force SST to use the thread pool\u2019s extra_port. Make sure that the thread pool is enabled and the extra_port option is set in my.cnf before you enable this option.
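
    A brief sketch of the related settings, assuming Percona Server\u2019s thread pool with an extra port of 3307; both values are illustrative:

    [mysqld]\n# illustrative values; requires the thread pool to be enabled\nthread_handling=pool-of-threads\nextra_port=3307\n\n[sst]\nuse_extra=1\n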

    "},{"location":"xtrabackup-sst.html#cpat","title":"cpat","text":"

    Default: '.\\*\\\\.pem$\\\\|.\\*init\\\\.ok$\\\\|.\\*galera\\\\.cache$\\\\|.\\*sst_in_progress$\\\\|.\\*\\\\.sst$\\\\|.\\*gvwstate\\\\.dat$\\\\|.\\*grastate\\\\.dat$\\\\|.\\*\\\\.err$\\\\|.\\*\\\\.log$\\\\|.\\*RPM_UPGRADE_MARKER$\\\\|.\\*RPM_UPGRADE_HISTORY$'

    Used to define the files that need to be retained in the datadir before running SST, so that the state of the other node can be restored cleanly.

    For example:

    [sst]\ncpat='.*galera\\.cache$\\|.*sst_in_progress$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*\\.xyz$'\n

    Note

    This option can only be used when wsrep_sst_method is set to xtrabackup-v2 (which is the default value).

    "},{"location":"xtrabackup-sst.html#compressor","title":"compressor","text":"Parameter Description Default: not set (disabled) Example: compressor=\u2019zstd -T0 -2\u2019"},{"location":"xtrabackup-sst.html#decompressor","title":"decompressor","text":"Parameter Description Default: not set (disabled) Example: decompressor=\u2019zstd -T0 -dc\u2019

    Stream-based compression and decompression are performed on the stream, in contrast to performing decompression after streaming to disk, which involves additional I/O. The savings are considerable, up to half the I/O on the JOINER node.

    You can use any compression utility that works on a stream: gzip, pigz, zstd, and others. The pigz and zstd utilities are multi-threaded. At a minimum, the compressor must be set on the DONOR and the decompressor on the JOINER.

    You must install the related binaries, otherwise SST aborts.

    compressor=\u2019pigz\u2019 decompressor=\u2019pigz -dc\u2019

    compressor=\u2019gzip\u2019 decompressor=\u2019gzip -dc\u2019

    To revert to the XtraBackup-based compression, set compress under [xtrabackup]. You can define both the compressor and the decompressor, although doing so wastes CPU cycles.

    [xtrabackup]\ncompress\n\n# compact has led to some crashes\n
    "},{"location":"xtrabackup-sst.html#inno-backup-opts","title":"inno-backup-opts","text":""},{"location":"xtrabackup-sst.html#inno-apply-opts","title":"inno-apply-opts","text":""},{"location":"xtrabackup-sst.html#inno-move-opts","title":"inno-move-opts","text":"Parameter Description Default: Empty Type: Quoted String

    This group of options is used to pass XtraBackup options for backup, apply, and move stages. The SST script doesn\u2019t alter, tweak, or optimize these options.

    Note

    Although these options are related to XtraBackup SST, they cannot be specified in my.cnf, because they are for passing innobackupex options.

    "},{"location":"xtrabackup-sst.html#sst-initial-timeout","title":"sst-initial-timeout","text":"Parameter Description Default: 100 Unit: seconds

    This option is used to configure the initial timeout (in seconds) to receive the first packet via SST. It has been implemented so that if the donor node fails somewhere in the process, the joiner node does not hang and wait forever.

    By default, the joiner node does not wait for more than 100 seconds to get a donor node. The default should be sufficient; however, it is configurable, so you can set it appropriately for your cluster. To disable the initial SST timeout, set sst-initial-timeout=0.
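
    For example, to give donor selection more time in a high-latency environment; 300 seconds is only an illustration:

    [sst]\nsst-initial-timeout=300\n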

    Note

    If you are using wsrep_sst_donor, and you want the joiner node to strictly wait for the donors listed in the variable and not fall back (that is, without a terminating comma at the end), and there is a possibility that all nodes in that variable are unavailable, disable the initial SST timeout or set it to a higher value (the maximum threshold that you want the joiner node to wait). You can also disable this option (or set it to a higher value) if you believe that all other nodes in the cluster can potentially become unavailable at any point in time (mostly in small clusters), or if there is high network latency or network disturbance (which can cause donor selection to take longer than 100 seconds).

    "},{"location":"xtrabackup-sst.html#sst-idle-timeout","title":"sst-idle-timeout","text":"Parameter Description Default: 120 Unit: seconds

    This option configures the time the SST operation waits on the joiner to receive more data. The size of the joiner\u2019s sst directory is checked for the amount of data received. For example, suppose the directory has received 50 MB of data. The operation rechecks the data size after the default value, 120 seconds, has elapsed. If the data size is still 50 MB, the operation is aborted. If the data has increased, the operation continues.

    An example of setting the option:

    [sst]\nsst-idle-timeout=0\n
    "},{"location":"xtrabackup-sst.html#tmpdir","title":"tmpdir","text":"Parameter Description Default: Empty Unit: /path/to/tmp/dir

    This option specifies the location for storing the temporary file on a donor node where the transaction log is stored before streaming or copying it to a remote host.

    Note

    This option can also be used on the joiner node to specify a non-default location to receive temporary SST files. This location must be large enough to hold the contents of the entire database. If tmpdir is empty, the default location datadir/.sst is used.

    The tmpdir option can be set in the following my.cnf groups:

    wsrep_debug

    Specifies whether additional debugging output for the database server error log should be enabled. Disabled by default.

    This option can be set in the following my.cnf groups:

    "},{"location":"xtrabackup-sst.html#encrypt_threads","title":"encrypt_threads","text":"Parameter Description Default: 4

    Specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1). The value is passed using the --encrypt-threads option in XtraBackup.

    This option affects only SST with XtraBackup and should be specified under the [sst] group.

    "},{"location":"xtrabackup-sst.html#backup_threads","title":"backup_threads","text":"Parameter Description Default: 4

    Specifies the number of threads that XtraBackup should use to create backups. See the --parallel option in XtraBackup.

    This option affects only SST with XtraBackup and should be specified under the [sst] group.
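
    A brief sketch that places both thread-count options under [sst]; the values are illustrative:

    [sst]\n# illustrative values\nencrypt_threads=8\nbackup_threads=8\n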

    "},{"location":"xtrabackup-sst.html#xtrabackup-sst-dependencies","title":"XtraBackup SST dependencies","text":"

    Each supported version of Percona XtraDB Cluster is tested against a specific version of Percona XtraBackup:

    Other combinations are not guaranteed to work.

    The following are optional dependencies of Percona XtraDB Cluster introduced by wsrep_sst_xtrabackup-v2 (except for obvious and direct dependencies):

    "},{"location":"xtrabackup-sst.html#xtrabackup-based-encryption","title":"XtraBackup-based encryption","text":"

    Settings related to XtraBackup-based Encryption are no longer allowed in PXC 8.0 when used for SST. If it is detected that XtraBackup-based Encryption is enabled, PXC will produce an error.

    The XtraBackup-based Encryption is enabled when you specify any of the following options under [xtrabackup] in my.cnf:

    "},{"location":"xtrabackup-sst.html#memory-allocation","title":"Memory allocation","text":"

    The amount of memory for XtraBackup is defined by the --use-memory option. You can pass it using the inno-apply-opts option under [sst] as follows:

    [sst]\ninno-apply-opts=\"--use-memory=500M\"\n

    If it is not specified, the use-memory option under [xtrabackup] will be used:

    [xtrabackup]\nuse-memory=32M\n

    If neither of the above are specified, the size of the InnoDB memory buffer will be used:

    [mysqld]\ninnodb_buffer_pool_size=24M\n
    "},{"location":"xtrabackup-sst.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"xtradb-cluster-version-numbers.html","title":"Understand version numbers","text":"

    A version number identifies the product release. The product contains the latest Generally Available (GA) features at the time of that release.

    In the version number 8.0.20-11.2, 8.0.20 is the base version, 11 is the minor build, and 2 is the custom build.

    Percona uses semantic version numbering, which follows the pattern of base version, minor build, and an optional custom build. Percona assigns unique, non-negative integers in increasing order for each minor build release. The version number combines the base Percona Server for MySQL version number, the minor build version, and the custom build version, if needed.

    The version numbers for Percona XtraDB Cluster 8.0.20-11.2 define the following information:

    "},{"location":"xtradb-cluster-version-numbers.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"yum.html","title":"Install Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS","text":"

    A list of the supported platforms by products and versions is available in Percona Software and Platform Lifecycle.

    We gather Telemetry data in the Percona packages and Docker images.

    You can install Percona XtraDB Cluster with the following methods:

    This documentation describes using the Percona Software repositories.

    "},{"location":"yum.html#prerequisites","title":"Prerequisites","text":"

    Installing Percona XtraDB Cluster requires that you either are logged in as a user with root privileges or can run commands with sudo.

    Percona XtraDB Cluster requires the specific ports for communication. Make sure that the following ports are available:

    For information on SELinux, see Enabling SELinux.

    "},{"location":"yum.html#install-from-percona-software-repository","title":"Install from Percona Software Repository","text":"

    For more information on the Percona Software repositories and configuring Percona Repositories with percona-release, see the Percona Software Repositories Documentation.

    Install on Red Hat 7Install on Red Hat 8 or later
    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release enable-only pxc-80 release\n$ sudo percona-release enable tools release\n$ sudo yum install percona-xtradb-cluster\n
    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup pxc-80\n$ sudo yum install percona-xtradb-cluster\n
    "},{"location":"yum.html#after-installation","title":"After installation","text":"

    After the installation, start the mysql service and find the temporary password using the grep command.

    $ sudo service mysql start\n$ sudo grep 'temporary password' /var/log/mysqld.log\n

    Use the temporary password to log into the server:

    $ mysql -u root -p\n

    Run an ALTER USER statement to change the temporary password, exit the client, and stop the service.

    mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';\nmysql> exit\n$ sudo service mysql stop\n
    "},{"location":"yum.html#next-steps","title":"Next steps","text":"

    Configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.

    "},{"location":"yum.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.29-21.html","title":"Percona XtraDB Cluster 8.0.29-21 (2022-09-12)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.29-21.html#release-highlights","title":"Release Highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.29-21 (2022-08-08) release notes.

    The improvements and bug fixes for MySQL 8.0.29, provided by Oracle, and included in Percona Server for MySQL are the following:

    The Performance Schema tracks if a query was processed on the PRIMARY engine, InnoDB, or a SECONDARY engine, HeatWave. An EXECUTION_ENGINE column, which indicates the engine used, was added to the Performance Schema statement event tables and the sys.processlist and the sys.x$processlist views.

    Added support for the IF NOT EXISTS option for the CREATE FUNCTION, CREATE PROCEDURE, and CREATE TRIGGER statements.

    Added support for ALTER TABLE \u2026 DROP COLUMN ALGORITHM=INSTANT.

    An anonymous user with the PROCESS privilege was unable to select processlist table rows.

    Find the full list of bug fixes and changes in the MySQL 8.0.29 Release Notes.

    Note

Percona Server for MySQL has changed the default for the supported DDL column operations to ALGORITHM=INPLACE. This change fixes the corruption issue with INSTANT ADD/DROP COLUMN operations (find more details in PS-8292).

    In MySQL 8.0.29, the default setting for supported DDL operations is ALGORITHM=INSTANT. You can explicitly specify ALGORITHM=INSTANT in DDL column operations.

    "},{"location":"release-notes/8.0.29-21.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/8.0.29-21.html#packaging-notes","title":"Packaging Notes","text":"

    Debian 9 is no longer supported.

    "},{"location":"release-notes/8.0.29-21.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/8.0.29-21.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.30-22.html","title":"Percona XtraDB Cluster 8.0.30-22.md (2022-12-28)","text":"Release date December 28, 2022 Install instructions Install Percona XtraDB Cluster Download this version Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    For paid support, managed services or consulting services, contact Percona Sales.

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.30-22.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.30-22 (2022-11-21) release notes.

    Note

    The following Percona Server for MySQL 8.0.30 features are not supported in this version of Percona XtraDB Cluster:

    The features will be supported in the next version of Percona XtraDB Cluster.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.30 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.30 release notes.

    "},{"location":"release-notes/8.0.30-22.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.30-22.html#platform-support","title":"Platform support","text":""},{"location":"release-notes/8.0.30-22.html#useful-links","title":"Useful links","text":""},{"location":"release-notes/8.0.30-22.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.31-23.2.html","title":"Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)","text":"Release date April 04, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.31-23.2.html#release-highlights","title":"Release highlights","text":"

This release of Percona XtraDB Cluster 8.0.31-23 includes the fix for the security vulnerability CVE-2022-25834 (PXB-2977).

    "},{"location":"release-notes/8.0.31-23.2.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.31-23.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.31-23.html","title":"Percona XtraDB Cluster 8.0.31-23 (2023-03-14)","text":"Release date 2024-04-03 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.31-23.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.31-23 (2022-11-21) release notes.

    This release adds the following feature in tech preview:

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.31 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.31 Release Notes.

    "},{"location":"release-notes/8.0.31-23.html#new-features","title":"New Features","text":""},{"location":"release-notes/8.0.31-23.html#improvement","title":"Improvement","text":""},{"location":"release-notes/8.0.31-23.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.31-23.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.31-23.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.32-24.2.html","title":"Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)","text":"Release date May 24, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.32-24.2.html#release-highlights","title":"Release highlights","text":"

    This release of Percona XtraDB Cluster 8.0.32-24 includes the fix for PXC-4211.

    "},{"location":"release-notes/8.0.32-24.2.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.32-24.2.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.32-24.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.32-24.html","title":"Percona XtraDB Cluster 8.0.32-24 (2023-04-18)","text":"Release date April 18, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.32-24.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.32-24 (2023-03-20) release notes.

    Percona decided to revert the following MySQL bug fix:

The data and the GTIDs backed up by mysqldump were inconsistent when the --single-transaction and --set-gtid-purged=ON options were both used. This happened because GTIDs on the server could increase between the start of the transaction by mysqldump and the fetching of GTID_EXECUTED. With this fixed, a FLUSH TABLES WITH READ LOCK is performed before fetching GTID_EXECUTED to ensure that its value is consistent with the snapshot taken by mysqldump.

The MySQL fix also added a requirement for the RELOAD privilege when using --single-transaction and executing FLUSH TABLES WITH READ LOCK. (MySQL bug #109701, MySQL bug #105761)

    The Percona Server version of the mysqldump utility, in some modes, can be used with MySQL Server. This utility provides a temporary workaround for the \u201cadditional RELOAD privilege\u201d limitation introduced by Oracle MySQL Server 8.0.32.

    For more information, see the Percona Performance Blog A Workaround for the \u201cRELOAD/FLUSH_TABLES privilege required\u201d Problem When Using Oracle mysqldump 8.0.32.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.32 and included in Percona Server for MySQL are the following:

    "},{"location":"release-notes/8.0.32-24.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.32-24.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.32-24.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.33-25.html","title":"Percona XtraDB Cluster 8.0.33-25 (2023-08-02)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.33-25.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.33-25 (2023-06-15) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.33 and included in Percona XtraDB Cluster are the following:

Support for user-defined collations will be removed in a future release of MySQL.

    Find the full list of bug fixes and changes in the MySQL 8.0.33 Release Notes.

    "},{"location":"release-notes/8.0.33-25.html#new-features","title":"New features","text":""},{"location":"release-notes/8.0.33-25.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.33-25.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.33-25.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.33-25.upd.html","title":"Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.33-25.upd.html#known-issues","title":"Known issues","text":"

    If you use Galera Arbitrator (garbd), we recommend that you do not upgrade to 8.0.33 because garbd-8.0.33 may cause synchronization issues and extensive usage of CPU resources.

If you already upgraded to garbd-8.0.33, we recommend downgrading to garbd-8.0.32-24-2 by performing the following steps:
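The exact steps are in the original release notes. As a rough sketch only, assuming an RPM-based system, the garb systemd unit name, and that the 8.0.32 build of the percona-xtradb-cluster-garbd package is still available in the enabled repository, the downgrade could look like this:

$ sudo systemctl stop garb\n$ sudo yum downgrade percona-xtradb-cluster-garbd-8.0.32*\n$ sudo systemctl start garb\n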

    "},{"location":"release-notes/8.0.33-25.upd.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now

    "},{"location":"release-notes/8.0.33-25.upd.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.34-26.html","title":"Percona XtraDB Cluster 8.0.34-26 (2023-11-01)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.34-26.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.34-26 (2023-09-26) release notes.

Percona XtraDB Cluster implements telemetry that helps us understand how you use the product so that we can improve it. Participation in the anonymous program is optional. You can opt out if you prefer not to share this information. Find more information in the Telemetry on Percona XtraDB Cluster document.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.34 and included in Percona XtraDB Cluster are the following:

    "},{"location":"release-notes/8.0.34-26.html#deprecations-and-removals","title":"Deprecations and removals","text":"

    Find the full list of bug fixes and changes in the MySQL 8.0.34 Release Notes.

    "},{"location":"release-notes/8.0.34-26.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.34-26.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.34-26.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.35-27.html","title":"Percona XtraDB Cluster 8.0.35-27 (2024-01-17)","text":"

    Get started with Quickstart Guide for Percona XtraDB Cluster.

    Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.35-27.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.35-27 (2023-12-27) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.35 and included in Percona XtraDB Cluster are the following:

    "},{"location":"release-notes/8.0.35-27.html#deprecations","title":"Deprecations","text":"

    A future release may remove deprecated variables and options. The usage of these deprecated items may cause a warning. We recommend migrating from deprecated variables and options as soon as possible.

    This release deprecates the following variables and options:

    Find the full list of bug fixes and changes in the MySQL 8.0.35 Release Notes.

    "},{"location":"release-notes/8.0.35-27.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.35-27.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.35-27.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.36-28.html","title":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)","text":"

    Get started with Quickstart Guide for Percona XtraDB Cluster.

    Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.36-28.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.36-28 (2024-03-04) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.36 and included in Percona XtraDB Cluster are the following:

    Find the complete list of bug fixes and changes in the MySQL 8.0.36 Release Notes.

    "},{"location":"release-notes/8.0.36-28.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.36-28.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.36-28.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html","title":"Percona XtraDB Cluster 8.0.18-9.3","text":"

    Percona XtraDB Cluster 8.0.18-9.3 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.18-9 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#known-issues","title":"Known Issues","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html","title":"Percona XtraDB Cluster 8.0.19-10","text":"

    Percona XtraDB Cluster 8.0.19-10 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.19-10 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#known-issues","title":"Known Issues","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html","title":"Percona XtraDB Cluster 8.0.20-11.2","text":"

    This release fixes the security vulnerability CVE-2020-15180

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html","title":"Percona XtraDB Cluster 8.0.20-11.3","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html","title":"Percona XtraDB Cluster 8.0.20-11","text":"

    Percona XtraDB Cluster 8.0.20-11 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.20-11 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html","title":"Percona XtraDB Cluster 8.0.21-12.1","text":"

    Percona XtraDB Cluster 8.0.21-12.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.21-12 for more details on these changes.

This release implements an inconsistency voting policy. In the best-case scenario, the node with the inconsistent data is aborted and the cluster continues to operate.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html","title":"Percona XtraDB Cluster 8.0.22-13.1","text":"

    Percona XtraDB Cluster 8.0.22-13.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.22-13 for more details on these changes.

    This release fixes security vulnerability CVE-2021-27928, a similar issue to CVE-2020-15180

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html","title":"Percona XtraDB Cluster 8.0.23-14.1","text":"

    Percona XtraDB Cluster 8.0.23-14.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.23-14 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html","title":"Percona XtraDB Cluster 8.0.25-15.1","text":"

    Percona XtraDB Cluster 8.0.25-15.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.25-15 for more details on these changes.

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#release-highlights","title":"Release Highlights","text":"

A Non-Blocking Operation (NBO) method for online schema changes in Percona XtraDB Cluster. This mode is similar to the Total Order Isolation (TOI) mode in that a data definition language (DDL) statement (for example, ALTER) is executed on all nodes in sync. The difference is that in the NBO mode, the DDL statement acquires a metadata lock that locks the table or schema only at a late stage of the operation, which is a more efficient locking strategy.

Note that the NBO mode is a Tech Preview feature. We do not recommend that you use this mode in a production environment. For more information, see Non-Blocking Operations (NBO) method for Online Schema Upgrades (OSU).

    The notable changes and bug fixes introduced by Oracle MySQL include the following:

    For more information, see the MySQL 8.0.24 Release Notes and the MySQL 8.0.25 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#new-features","title":"New Features","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html","title":"Percona XtraDB Cluster 8.0.26-16.1","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#release-highlights","title":"Release Highlights","text":"

    The following are a number of the notable fixes for MySQL 8.0.26, provided by Oracle, and included in this release:

    In an upgrade from an earlier version to 8.0.26, enable the rpl_semi_sync_source plugin and the rpl_semi_sync_replica plugin after the upgrade has been completed. Enabling these plugins before all of the nodes are upgraded may cause data inconsistency between the nodes.

For the source, the rpl_semi_sync_master plugin (semisync_master.so library) is the old version and the rpl_semi_sync_source plugin (semisync_source.so library) is the new version.

For the replica, the rpl_semi_sync_slave plugin (semisync_slave.so library) is the old version and the rpl_semi_sync_replica plugin (semisync_replica.so library) is the new version.
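If you do use semisynchronous replication alongside the cluster, a minimal way to enable the new plugins after all nodes are upgraded (plugin and library names as given above) is:

mysql> INSTALL PLUGIN rpl_semi_sync_source SONAME 'semisync_source.so';\nmysql> INSTALL PLUGIN rpl_semi_sync_replica SONAME 'semisync_replica.so';\n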

    For more information, see the MySQL 8.0.26 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html","title":"Percona XtraDB Cluster 8.0.27-18.1","text":"

    Date: April 11, 2022

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#release-highlights","title":"Release Highlights","text":"

    The following lists a number of the bug fixes for MySQL 8.0.27, provided by Oracle, and included in Percona Server for MySQL:

    Find the full list of bug fixes and changes in the MySQL 8.0.27 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html","title":"Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#release-highlights","title":"Release Highlights","text":"

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.28 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.28 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/release-notes_index.html","title":"Percona XtraDB Cluster 8.0 release notes index","text":""},{"location":"release-notes/release-notes_index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Percona XtraDB Cluster 8.0 Documentation","text":"

    This documentation is for the latest release: Percona XtraDB Cluster 8.0.36-28 (Release Notes).

    Percona XtraDB Cluster is a database clustering solution for MySQL. It ensures high availability, prevents downtime and data loss, and provides linear scalability for a growing environment.

    "},{"location":"index.html#features-of-percona-xtradb-cluster","title":"Features of Percona XtraDB Cluster","text":"Feature Details Synchronous replication Data is written to all nodes simultaneously, or not written at all in case of a failure even on a single node Multi-source replication Any node can trigger a data update. True parallel replication Multiple threads on replica performing replication on row level Automatic node provisioning You simply add a node and it automatically syncs. Data consistency No more unsynchronized nodes. PXC Strict Mode Avoids the use of tech preview features and unsupported features Configuration script for ProxySQL Percona XtraDB Cluster includes the proxysql-admin tool that automatically configures Percona XtraDB Cluster nodes using ProxySQL. Automatic configuration of SSL encryption Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic variable that enables automatic configuration of SSL encryption Optimized Performance Percona XtraDB Cluster performance is optimized to scale with a growing production workload

    Percona XtraDB Cluster 8.0 is fully compatible with MySQL Server Community Edition 8.0 and Percona Server for MySQL 8.0. The cluster has the following compatibilities:

    See also

    Overview of changes in the most recent PXC release

    "},{"location":"index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"add-node.html","title":"Add nodes to cluster","text":"

    New nodes that are properly configured are provisioned automatically. When you start a node with the address of at least one other running node in the wsrep_cluster_address variable, this node automatically joins and synchronizes with the cluster.

    Note

    Any existing data and configuration will be overwritten to match the data and configuration of the DONOR node. Do not join several nodes at the same time to avoid overhead due to large amounts of traffic when a new node joins.

    Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer and the wsrep_sst_method variable is always set to xtrabackup-v2.

    "},{"location":"add-node.html#start-the-second-node","title":"Start the second node","text":"

    Start the second node using the following command:

    [root@pxc2 ~]# systemctl start mysql\n

    After the server starts, it receives SST automatically.

    To check the status of the second node, run the following:

    mysql@pxc2> show status like 'wsrep%';\n
    Expected output
    +----------------------------------+--------------------------------------------------+\n| Variable_name                    | Value                                            |\n+----------------------------------+--------------------------------------------------+\n| wsrep_local_state_uuid           | a08247c1-5807-11ea-b285-e3a50c8efb41             |\n| ...                              | ...                                              |\n| wsrep_local_state                | 4                                                |\n| wsrep_local_state_comment        | Synced                                           |\n| ...                              |                                                  |\n| wsrep_cluster_size               | 2                                                |\n| wsrep_cluster_status             | Primary                                          |\n| wsrep_connected                  | ON                                               |\n| ...                              | ...                                              |\n| wsrep_provider_capabilities      | :MULTI_MASTER:CERTIFICATION: ...                 |\n| wsrep_provider_name              | Galera                                           |\n| wsrep_provider_vendor            | Codership Oy <info@codership.com>                |\n| wsrep_provider_version           | 4.3(r752664d)                                    |\n| wsrep_ready                      | ON                                               |\n| ...                              | ...                                              | \n+----------------------------------+--------------------------------------------------+\n75 rows in set (0.00 sec)\n

    The output of SHOW STATUS shows that the new node has been successfully added to the cluster. The cluster size is now 2 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.

If the state of the second node is Synced, as in the previous example, then the node has received a full SST, is synchronized with the cluster, and you can proceed to add the next node.

    Note

If the state of the node is Joiner, the SST hasn\u2019t finished. Do not add new nodes until all other nodes are in the Synced state.

    "},{"location":"add-node.html#starting-the-third-node","title":"Starting the Third Node","text":"

    To add the third node, start it as usual:

    [root@pxc3 ~]# systemctl start mysql\n

    To check the status of the third node, run the following:

    mysql@pxc3> show status like 'wsrep%';\n

    The output shows that the new node has been successfully added to the cluster. Cluster size is now 3 nodes, it is the primary component, and it is fully connected and ready to receive write-set replication.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ...                        | ...                                  |\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n| ...                        | ...                                  |\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n| ...                        | ...                                  |\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"add-node.html#next-steps","title":"Next steps","text":"

After you add all nodes to the cluster, you can verify replication by running queries and manipulating data on different nodes to see whether the changes are synchronized across the cluster.
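A minimal check, assuming a throwaway database named percona, could run a different statement on each node and then read the result back on the first one:

mysql@pxc1> CREATE DATABASE percona;\nmysql@pxc2> CREATE TABLE percona.example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\nmysql@pxc3> INSERT INTO percona.example VALUES (1, 'percona1');\nmysql@pxc1> SELECT * FROM percona.example;\n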

    "},{"location":"add-node.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"apparmor.html","title":"Enable AppArmor","text":"

    Percona XtraDB Cluster contains several AppArmor profiles. Multiple profiles allow for easier maintenance because the mysqld profile is decoupled from the SST script profile. This separation allows the introduction of other SST methods or scripts with their own profiles.

    The following profiles are available:

The mysqld profile allows the SST script to be executed in PUx mode through the /{usr/}bin/wsrep_sst_* PUx rule. If the script has its own profile, that profile is applied; otherwise, the SST script runs in unconfined mode. The system administrator can change the execution mode to PIx, which falls back to inherited mode when the SST script profile is absent.
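For illustration, the relevant line in the mysqld profile looks roughly like the rule below; switching PUx to PIx makes a script without a profile fall back to inherited confinement. The exact path glob may differ between releases:

  /{usr/}bin/wsrep_sst_* PUx,\n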

    "},{"location":"apparmor.html#profile-adjustments","title":"Profile adjustments","text":"

    The mysqld profile and the SST script profile can be adjusted, such as moving the data directory, in the same way as modifying the mysqld profile in Percona Server.

    "},{"location":"apparmor.html#work-with-pxc_encrypt_cluster_traffic","title":"Work with pxc_encrypt_cluster_traffic","text":"

By default, the pxc_encrypt_cluster_traffic variable is ON, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory because that location is overwritten during the SST process.
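For reference, the server typically points to certificates stored outside the data directory through SSL options in my.cnf; the file names below are placeholders:

[mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n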

    Set up the certificates describes the certificate setup.

    The following AppArmor profile rule grants access to certificates located in /etc/mysql/certs. You must be root or have sudo privileges.

    # Allow config access\n  /etc/mysql/** r,\n

This rule is present in both profiles (usr.sbin.mysqld and usr.bin.wsrep_sst_xtrabackup-v2). The rule allows the administrator to store the certificates anywhere inside the /etc/mysql/ directory. If the certificates are located outside of that directory, you must add to both profiles an additional rule that allows access to the certificates. The rule must contain the path to the certificate location, like the following:

    # Allow config access\n  /path/to/certificates/* r,\n
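After editing a profile, reload it so the change takes effect; assuming the profiles live under /etc/apparmor.d/, a typical invocation is:

$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld\n$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.wsrep_sst_xtrabackup-v2\n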

The server certificates must be accessible to the mysql user and readable only by that user.
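A short sketch of setting that ownership and permission, assuming the certificates are stored in /etc/mysql/certs:

$ sudo chown -R mysql:mysql /etc/mysql/certs\n$ sudo chmod 0600 /etc/mysql/certs/*.pem\n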

    "},{"location":"apparmor.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"apt.html","title":"Install Percona XtraDB Cluster on Debian or Ubuntu","text":"

    Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.

    The packages are available in the official Percona software repository and on the download page. It is recommended to install Percona XtraDB Cluster from the official repository using APT.

    We gather Telemetry data in the Percona packages and Docker images.

    "},{"location":"apt.html#prerequisites","title":"Prerequisites","text":"

    See also

    For more information, see Enabling AppArmor.

    "},{"location":"apt.html#install-from-repository","title":"Install from Repository","text":"
1. Update the system:

      sudo apt update\n
    2. Install the necessary packages:

      sudo apt install -y wget gnupg2 lsb-release curl\n
    3. Download the repository package

      wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n
    4. Install the package with dpkg:

      sudo dpkg -i percona-release_latest.generic_all.deb\n
    5. Refresh the local cache to update the package information:

      sudo apt update\n
    6. Enable the release repository for Percona XtraDB Cluster:

      sudo percona-release setup pxc80\n
    7. Install the cluster:

      sudo apt install -y percona-xtradb-cluster\n

    During the installation, you are requested to provide a password for the root user on the database node.

    Note

    If needed, you could also install the percona-xtradb-cluster-full meta-package, which includes the following additional packages:

    "},{"location":"apt.html#next-steps","title":"Next steps","text":"

    After you install Percona XtraDB Cluster and stop the mysql service, configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.
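On Debian and Ubuntu the service is started automatically after installation; a typical way to stop it before configuring the node (systemd assumed) is:

$ sudo systemctl stop mysql\n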

    "},{"location":"apt.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"bootstrap.html","title":"Bootstrap the first node","text":"

    After you configure all PXC nodes, initialize the cluster by bootstrapping the first node. The initial node must contain all the data that you want to be replicated to other nodes.

    Bootstrapping implies starting the first node without any known cluster addresses: if the wsrep_cluster_address variable is empty, Percona XtraDB Cluster assumes that this is the first node and initializes the cluster.

    Instead of changing the configuration, start the first node using the following command:

    [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n

When you start the node using the previous command, it runs in bootstrap mode with wsrep_cluster_address=gcomm://. This tells the node to initialize the cluster with the wsrep_cluster_conf_id variable set to 1. After you add other nodes to the cluster, you can restart this node as normal, and it will use the standard configuration again.
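For example, once the other nodes are up, the node normally runs with a full member list in wsrep_cluster_address; the addresses below are placeholders:

[mysqld]\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n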

    Note

A service started with mysql@bootstrap must be stopped using the same unit name, for example systemctl stop mysql@bootstrap.service. The plain systemctl stop mysql command does not stop an instance started with mysql@bootstrap.

    To make sure that the cluster has been initialized, run the following:

    mysql@pxc1> show status like 'wsrep%';\n

    The output shows that the cluster size is 1 node, it is the primary component, the node is in the Synced state, it is fully connected and ready for write-set replication.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n| ...                        | ...                                  |\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n| ...                        | ...                                  |\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n| ...                        | ...                                  |\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"bootstrap.html#next-steps","title":"Next steps","text":"

    After initializing the cluster, you can add other nodes.

    "},{"location":"bootstrap.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"certification.html","title":"Certification in Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster replicates actions executed on one node to all other nodes in the cluster, and makes it fast enough to appear as if it is synchronous (virtually synchronous).

    The following types of actions exist:

    Note

This manual page assumes the reader is familiar with TOI and the MySQL replication protocol.

    DML (INSERT, UPDATE, and DELETE) operations effectively change the state of the database, and all such operations are recorded in XtraDB by registering a unique object identifier (key) for each change (an update or a new addition).

    This ensures that there is quick and short meta information about the rows that this transaction has touched or modified. This information is passed on as part of the write-set for certification to all the nodes in the cluster while the transaction is in the commit phase.

    Changes made to database objects are bin-logged. This is similar to how MySQL does it for replication with its Source-Replica ecosystem, except that a packet of changes from a given transaction is created and named as a write-set.

    Once the client/user issues a COMMIT, Percona XtraDB Cluster will run a commit hook. Commit hooks ensure the following:

The protocol enforces that all nodes read packets from the channel in the same order. This way, even though a packet does not carry id information, its id is established implicitly from the locally maintained id value.

    "},{"location":"certification.html#common-situation","title":"Common situation","text":"

    The following example shows what happens in a common situation. act_id is incremented and assigned only for totally ordered actions, and only in primary state (skip messages while in state exchange).

    $ rcvd->id = ++group->act_id_;\n

    Note

This is an elegant way to solve the problem of id coordination in multi-source systems. Otherwise, a node would first have to obtain an id from a central system, or through a separately agreed protocol, and then use it for the packet, doubling the round-trip time.

    "},{"location":"certification.html#conflicts","title":"Conflicts","text":"

The following happens if two nodes get their packets ready at the same time:

    1. Both nodes will be allowed to put the packet on the channel. That means the channel will see packets from different nodes queued one behind another.

    2. The following example shows what happens if two nodes modify same set of rows. Nodes are in sync until this point:

      $ create -> insert (1,2,3,4)\n
      • Node 1: update i = i + 10;

      • Node 2: update i = i + 100;

Let\u2019s associate a transaction ID (trx-id) with the update transaction executed on Node 1 and on Node 2 in parallel. Although the real algorithm is more involved (with uuid + seqno), it is conceptually the same, so we are using trx_id.

      • Node 1: update action: trx-id=n1x

      • Node 2: update action: trx-id=n2x

      Both node packets are added to the channel, but the transactions are conflicting. The protocol says: FIRST WRITE WINS.

      So in this case, whoever is first to write to the channel will get certified. Let\u2019s say Node 2 is first to write the packet, and then Node 1 makes changes immediately after it.

      Note

Each node subscribes to all packets, including its own packet.

      • Node 2 will see its own packet and will process it. Then it will see the packet from Node 1, try to certify it, and fail.

      • Node 1 will see the packet from Node 2 and will process it.

      Note

InnoDB provides isolation, so Node 1 can process the packet from Node 2 independently of its own transaction changes.

      Then Node 1 will see its own packet, try to certify it, and fail.

      Note

      Even though the packet originated from Node 1, it will undergo certification to catch cases like these.
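From the client\u2019s point of view, the transaction that loses certification is rolled back and the conflicting statement (or COMMIT) returns a deadlock error. A sketch of what the session on the losing node might report (the table name t1 is only illustrative):

node-1 mysql> UPDATE t1 SET i = i + 10;\nERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction\n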

    "},{"location":"certification.html#resolve-certification-conflicts","title":"Resolve certification conflicts","text":"

The certification protocol can be described using the previous example. The central certification vector (CCV) is updated to reflect the reference transaction.

Node 2 then gets the packet from Node 1 for certification. The packet key is already present in the CCV, with the reference transaction set to n2x, whereas the write-set proposes setting it to n1x. This causes a conflict, which in turn causes the transaction from Node 1 to fail the certification test.

    Using the same case as explained above, Node 1 certification also rejects the packet from Node 1.

    This suggests that the node doesn\u2019t need to wait for certification to complete, but just needs to ensure that the packet is written to the channel. The applier transaction will always win and the local conflicting transaction will be rolled back.

    The following example shows what happens if one of the nodes has local changes that are not synced with the group:

    mysql> create (id primary key) -> insert (1), (2), (3), (4);\n
    Expected output
    node-1: wsrep_on=0; insert (5); wsrep_on=1\nnode-2: insert(5).\n

The insert(5) statement will generate a write-set that will then be replicated to Node 1. Node 1 will try to apply it but will fail with a duplicate-key error, because 5 already exists.

XtraDB will flag this as an error, which will eventually cause Node 1 to shut down.

    "},{"location":"certification.html#increment-gtid","title":"Increment GTID","text":"

    GTID is incremented only when the transaction passes certification, and is ready for commit. That way errant packets don\u2019t cause GTID to increment.

Also, do not confuse the group packet id with the GTID. Without errant packets, these two counters may appear to move together, but they are not related.
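To see both counters side by side on a node, you can compare the executed GTID set with the last committed group sequence number; a minimal sketch:

mysql> SELECT @@global.gtid_executed;\nmysql> SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';\n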

    "},{"location":"certification.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"compile.html","title":"Compile and install from Source Code","text":"

    If you want to compile Percona XtraDB Cluster, you can find the source code on GitHub. Before you begin, make sure that the following packages are installed:

| Package | apt | yum | |---------|-----|-----| | Git | git | git | | SCons | scons | scons | | GCC | gcc | gcc | | g++ | g++ | gcc-c++ | | OpenSSL | openssl | openssl | | Check | check | check | | CMake | cmake | cmake | | Bison | bison | bison | | Boost | libboost-all-dev | boost-devel | | Asio | libasio-dev | asio-devel | | Async I/O | libaio-dev | libaio-devel | | ncurses | libncurses5-dev | ncurses-devel | | Readline | libreadline-dev | readline-devel | | PAM | libpam-dev | pam-devel | | socat | socat | socat | | curl | libcurl-dev | libcurl-devel |

    You will likely have all or most of the packages already installed. If you are not sure, run one of the following commands to install any missing dependencies:
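For example, the dependencies can be installed in one pass; the following sketch is derived from the package table above, and the exact package names may differ between distribution releases:

# Debian/Ubuntu\n$ sudo apt install git scons gcc g++ openssl check cmake bison libboost-all-dev \\\n    libasio-dev libaio-dev libncurses5-dev libreadline-dev libpam-dev socat libcurl-dev\n\n# Red Hat/CentOS\n$ sudo yum install git scons gcc gcc-c++ openssl check cmake bison boost-devel \\\n    asio-devel libaio-devel ncurses-devel readline-devel pam-devel socat libcurl-devel\n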

    To compile Percona XtraDB Cluster from source code:

    1. Clone the Percona XtraDB Cluster repository:

      $ git clone https://github.com/percona/percona-xtradb-cluster.git\n

      Important

      Clone the latest repository or update it to the latest state. Old codebase may not be compatible with the build script.

    2. Check out the 8.0 branch and initialize submodules:

      $ cd percona-xtradb-cluster\n$ git checkout 8.0\n$ git submodule update --init --recursive\n
    3. Download the matching Percona XtraBackup 8.0 tarball (*.tar.gz) for your operating system from Percona Downloads.

The following example extracts the Percona XtraBackup 8.0.32-25 tar.gz file to the target directory ./pxc-build:

$ tar -xvf percona-xtrabackup-8.0.32-25-Linux-x86_64.glibc2.17.tar.gz -C ./pxc-build\n
4. Run the build script ./build-ps/build-binary.sh. By default, it attempts to build in the current directory. Specify the target output directory, such as ./pxc-build:

      $ mkdir ./pxc-build\n$ ./build-ps/build-binary.sh ./pxc-build\n

    When the compilation completes, pxc-build contains a tarball, such as Percona-XtraDB-Cluster-8.0.x86_64.tar.gz, that you can deploy on your system.

    Note

    The exact version and release numbers may differ.

    "},{"location":"compile.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-cluster-rhel.html","title":"Configure a cluster on Red Hat-based distributions","text":"

    This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Red Hat or CentOS 7 servers, using the packages from Percona repositories.

    "},{"location":"configure-cluster-rhel.html#prerequisites","title":"Prerequisites","text":"

    The procedure described in this tutorial requires the following:

    Different from previous versions

    The variable wsrep_sst_auth has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session. During SST, the user mysql.pxc.sst.user and the role mysql.pxc.sst.role are created on the donor node.

    "},{"location":"configure-cluster-rhel.html#step-1-installing-pxc","title":"Step 1. Installing PXC","text":"

    Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS.

    "},{"location":"configure-cluster-rhel.html#step-2-configuring-the-first-node","title":"Step 2. Configuring the first node","text":"

    Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.

    1. Make sure that the configuration file /etc/my.cnf on the first node (percona1) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended.\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 1 address\nwsrep_node_address=192.168.70.71\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n
    2. Start the first node with the following command:

      [root@percona1 ~] # systemctl start mysql@bootstrap.service\n

The previous command will start the cluster with the initial wsrep_cluster_address variable set to gcomm://. If the node or MySQL is restarted later, there will be no need to change the configuration file.

    3. After the first node has been started, cluster status can be checked with the following command:

      mysql> show status like 'wsrep%';\n

      This output shows that the cluster has been successfully bootstrapped.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n

      Copy the automatically generated temporary password for the superuser account:

      $ sudo grep 'temporary password' /var/log/mysqld.log\n

      Use this password to log in as root:

      $ mysql -u root -p\n

      Change the password for the superuser account and log out. For example:

      mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'r00tP@$$';\n
      Expected output
      Query OK, 0 rows affected (0.00 sec)\n
    "},{"location":"configure-cluster-rhel.html#step-3-configuring-the-second-node","title":"Step 3. Configuring the second node","text":"
    1. Make sure that the configuration file /etc/my.cnf on the second node (percona2) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node 2 address\nwsrep_node_address=192.168.70.72\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the second node with the following command:

      [root@percona2 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on both nodes. The following is an example of status from the second node (percona2):

      mysql> show status like 'wsrep%';\n

      The output shows that the new node has been successfully added to the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 2                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-rhel.html#step-4-configuring-the-third-node","title":"Step 4. Configuring the third node","text":"
    1. Make sure that the MySQL configuration file /etc/my.cnf on the third node (percona3) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB auto_increment locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.73\n\n# Cluster name\nwsrep_cluster_name=my_centos_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the third node with the following command:

      [root@percona3 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all three nodes. The following is an example of status from the third node (percona3):

      mysql> show status like 'wsrep%';\n

      The output confirms that the third node has joined the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-rhel.html#testing-replication","title":"Testing replication","text":"

To test replication, let\u2019s create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@percona2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@percona3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve all the rows from that table on the second node:

      mysql@percona2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n

      This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.

    "},{"location":"configure-cluster-rhel.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-cluster-ubuntu.html","title":"Configure a cluster on Debian or Ubuntu","text":"

    This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Ubuntu 14 LTS servers, using the packages from Percona repositories.

    "},{"location":"configure-cluster-ubuntu.html#prerequisites","title":"Prerequisites","text":"

The procedure described in this tutorial requires the following:

    "},{"location":"configure-cluster-ubuntu.html#step-1-install-pxc","title":"Step 1. Install PXC","text":"

    Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Debian or Ubuntu.

    Note

Debian/Ubuntu installation prompts for the root password. For this tutorial, set it to Passw0rd. After the packages have been installed, mysqld will start automatically. Stop mysqld on all three nodes using sudo systemctl stop mysql.

    "},{"location":"configure-cluster-ubuntu.html#step-2-configure-the-first-node","title":"Step 2. Configure the first node","text":"

    Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.

    1. Make sure that the configuration file /etc/mysql/my.cnf for the first node (pxc1) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains the IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #1 address\nwsrep_node_address=192.168.70.61\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n
    2. Start the first node with the following command:

      [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n

      This command will start the first node and bootstrap the cluster.

    3. After the first node has been started, cluster status can be checked with the following command:

      mysql> show status like 'wsrep%';\n

The following output shows the cluster status:

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 1                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n75 rows in set (0.00 sec)\n

      This output shows that the cluster has been successfully bootstrapped.

    To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:

    mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';\nmysql@pxc1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';\nmysql@pxc1> FLUSH PRIVILEGES;\n

    Note

    MySQL root account can also be used for performing SST, but it is more secure to use a different (non-root) user for this.

    "},{"location":"configure-cluster-ubuntu.html#step-3-configure-the-second-node","title":"Step 3. Configure the second node","text":"
    1. Make sure that the configuration file /etc/mysql/my.cnf on the second node (pxc2) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #2 address\nwsrep_node_address=192.168.70.62\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the second node with the following command:

      [root@pxc2 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of status from the second node (pxc2):

      mysql> show status like 'wsrep%';\n

      The following output shows that the new node has been successfully added to the cluster.

      Expected output
      +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 2                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-ubuntu.html#step-4-configure-the-third-node","title":"Step 4. Configure the third node","text":"
    1. Make sure that the MySQL configuration file /etc/mysql/my.cnf on the third node (pxc3) contains the following:

      [mysqld]\n\ndatadir=/var/lib/mysql\nuser=mysql\n\n# Path to Galera library\nwsrep_provider=/usr/lib/libgalera_smm.so\n\n# Cluster connection URL contains IPs of node#1, node#2 and node#3\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n\n# Using the MyISAM storage engine is not recommended\ndefault_storage_engine=InnoDB\n\n# This InnoDB autoincrement locking mode is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n\n# Node #3 address\nwsrep_node_address=192.168.70.63\n\n# Cluster name\nwsrep_cluster_name=my_ubuntu_cluster\n\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    2. Start the third node with the following command:

      [root@pxc3 ~]# systemctl start mysql\n
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of status from the third node (pxc3):

    mysql> show status like 'wsrep%';\n

    The following output confirms that the third node has joined the cluster.

    Expected output
    +----------------------------+--------------------------------------+\n| Variable_name              | Value                                |\n+----------------------------+--------------------------------------+\n| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |\n...\n| wsrep_local_state          | 4                                    |\n| wsrep_local_state_comment  | Synced                               |\n...\n| wsrep_cluster_size         | 3                                    |\n| wsrep_cluster_status       | Primary                              |\n| wsrep_connected            | ON                                   |\n...\n| wsrep_ready                | ON                                   |\n+----------------------------+--------------------------------------+\n40 rows in set (0.01 sec)\n
    "},{"location":"configure-cluster-ubuntu.html#test-replication","title":"Test replication","text":"

To test replication, let\u2019s create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@percona2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@percona3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve all the rows from that table on the second node:

      mysql@percona2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n

      This simple procedure should ensure that all nodes in the cluster are synchronized and working as intended.

    "},{"location":"configure-cluster-ubuntu.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"configure-nodes.html","title":"Configure nodes for write-set replication","text":"

    After installing Percona XtraDB Cluster on each node, you need to configure the cluster. In this section, we will demonstrate how to configure a three node cluster:

| Node | Host | IP | |------|------|----| | Node 1 | pxc1 | 192.168.70.61 | | Node 2 | pxc2 | 192.168.70.62 | | Node 3 | pxc3 | 192.168.70.63 |
1. Stop the Percona XtraDB Cluster server. The server is not started automatically after the installation completes, so this step is needed only if you have started the server manually.

      $ sudo service mysql stop\n
    2. Edit the configuration file of the first node to provide the cluster settings.

      If you use Debian or Ubuntu, edit /etc/mysql/mysql.conf.d/mysqld.cnf:

      wsrep_provider=/usr/lib/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n

      If you use Red Hat or CentOS, edit /etc/my.cnf. Note that on these systems you set the wsrep_provider option to a different value:

      wsrep_provider=/usr/lib64/galera4/libgalera_smm.so\nwsrep_cluster_name=pxc-cluster\nwsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63\n
    3. Configure node 1.

      wsrep_node_name=pxc1\nwsrep_node_address=192.168.70.61\npxc_strict_mode=ENFORCING\n
    4. Set up node 2 and node 3 in the same way: Stop the server and update the configuration file applicable to your system. All settings are the same except for wsrep_node_name and wsrep_node_address.

      For node 2

      wsrep_node_name=pxc2\nwsrep_node_address=192.168.70.62\n

      For node 3

      wsrep_node_name=pxc3\nwsrep_node_address=192.168.70.63\n
    5. Set up the traffic encryption settings. Each node of the cluster must use the same SSL certificates.

[mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n

    Important

    In Percona XtraDB Cluster 8.0, the Encrypting Replication Traffic is enabled by default (via the pxc-encrypt-cluster-traffic variable).

The replication traffic encryption cannot be enabled on a running cluster. If it was disabled before the cluster was bootstrapped, the cluster must be stopped. Then set up the encryption and bootstrap again (see Bootstrapping the First Node).

    See also

    More information about the security settings in Percona XtraDB Cluster * Security Basics * Encrypting PXC Traffic * SSL Automatic Configuration

    "},{"location":"configure-nodes.html#template-of-the-configuration-file","title":"Template of the configuration file","text":"

    Here is an example of a full configuration file installed on CentOS to /etc/my.cnf.

    # Template my.cnf for PXC\n# Edit to your requirements.\n[client]\nsocket=/var/lib/mysql/mysql.sock\n[mysqld]\nserver-id=1\ndatadir=/var/lib/mysql\nsocket=/var/lib/mysql/mysql.sock\nlog-error=/var/log/mysqld.log\npid-file=/var/run/mysqld/mysqld.pid\n# Binary log expiration period is 604800 seconds, which equals 7 days\nbinlog_expire_logs_seconds=604800\n######## wsrep ###############\n# Path to Galera library\nwsrep_provider=/usr/lib64/galera4/libgalera_smm.so\n# Cluster connection URL contains IPs of nodes\n#If no IP is found, this implies that a new cluster needs to be created,\n#in order to do that you need to bootstrap this node\nwsrep_cluster_address=gcomm://\n# In order for Galera to work correctly binlog format should be ROW\nbinlog_format=ROW\n# Slave thread to use\nwsrep_slave_threads=8\nwsrep_log_conflicts\n# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera\ninnodb_autoinc_lock_mode=2\n# Node IP address\n#wsrep_node_address=192.168.70.63\n# Cluster name\nwsrep_cluster_name=pxc-cluster\n#If wsrep_node_name is not specified,  then system hostname will be used\nwsrep_node_name=pxc-cluster-node-1\n#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER\npxc_strict_mode=ENFORCING\n# SST method\nwsrep_sst_method=xtrabackup-v2\n
    "},{"location":"configure-nodes.html#next-steps-bootstrap-the-first-node","title":"Next Steps: Bootstrap the first node","text":"

    After you configure all your nodes, initialize Percona XtraDB Cluster by bootstrapping the first node according to the procedure described in Bootstrapping the First Node.

    "},{"location":"configure-nodes.html#essential-configuration-variables","title":"Essential configuration variables","text":"

    wsrep_provider

    Specify the path to the Galera library. The location depends on the distribution:

    wsrep_cluster_name

    Specify the logical name for your cluster. It must be the same for all nodes in your cluster.

    wsrep_cluster_address

    Specify the IP addresses of nodes in your cluster. At least one is required for a node to join the cluster, but it is recommended to list addresses of all nodes. This way if the first node in the list is not available, the joining node can use other addresses.

    Note

    No addresses are required for the initial node in the cluster. However, it is recommended to specify them and properly bootstrap the first node. This will ensure that the node is able to rejoin the cluster if it goes down in the future.

    wsrep_node_name

    Specify the logical name for each individual node. If this variable is not specified, the host name will be used.

    wsrep_node_address

    Specify the IP address of this particular node.

    wsrep_sst_method

    By default, Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer. xtrabackup-v2 is the only supported option for this variable. This method requires a user for SST to be set up on the initial node.

    pxc_strict_mode

    PXC Strict Mode is enabled by default and set to ENFORCING, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.

    binlog_format

    Galera supports only row-level replication, so set binlog_format=ROW.

    default_storage_engine

    Galera fully supports only the InnoDB storage engine. It will not work correctly with MyISAM or any other non-transactional storage engines. Set this variable to default_storage_engine=InnoDB.

    innodb_autoinc_lock_mode

    Galera supports only interleaved (2) lock mode for InnoDB. Setting the traditional (0) or consecutive (1) lock mode can cause replication to fail due to unresolved deadlocks. Set this variable to innodb_autoinc_lock_mode=2.

    "},{"location":"configure-nodes.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"copyright-and-licensing-information.html","title":"Copyright and licensing information","text":""},{"location":"copyright-and-licensing-information.html#documentation-licensing","title":"Documentation licensing","text":"

    Percona XtraDB Cluster documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.

    "},{"location":"copyright-and-licensing-information.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"crash-recovery.html","title":"Crash recovery","text":"

Unlike standard MySQL replication, a PXC cluster acts as one logical entity that controls the status and consistency of each node as well as the status of the whole cluster. This allows the data integrity to be maintained more efficiently than with traditional asynchronous replication, without losing safe writes on multiple nodes at the same time.

    However, there are scenarios where the database service can stop with no node being able to serve requests.

    "},{"location":"crash-recovery.html#scenario-1-node-a-is-gracefully-stopped","title":"Scenario 1: Node A is gracefully stopped","text":"

In a three-node cluster (node A, node B, node C), one node (node A, for example) is gracefully stopped: for the purpose of maintenance, configuration change, etc.

    In this case, the other nodes receive a \u201cgood bye\u201d message from the stopped node and the cluster size is reduced; some properties like quorum calculation or auto increment are automatically changed. As soon as node A is started again, it joins the cluster based on its wsrep_cluster_address variable in my.cnf.

If the writeset cache (gcache.size) on node B and/or node C still has all the transactions executed while node A was down, joining is possible via IST. If IST is impossible due to missing transactions in the donor\u2019s gcache, the fallback decision is made by the donor and SST is started automatically.
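The IST window is bounded by the size of the writeset cache, which is configured through the gcache.size provider option. A minimal configuration sketch (the 2G value is only an example; size it to cover the expected downtime):

[mysqld]\nwsrep_provider_options=\"gcache.size=2G\"\n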

    "},{"location":"crash-recovery.html#scenario-2-two-nodes-are-gracefully-stopped","title":"Scenario 2: Two nodes are gracefully stopped","text":"

Similar to Scenario 1: Node A is gracefully stopped, the cluster size is reduced, in this case to 1. Even so, the single remaining node C forms the primary component and is able to serve client requests. To get the nodes back into the cluster, you just need to start them.

However, when a new node joins the cluster, node C will be switched to the \u201cDonor/Desynced\u201d state as it has to provide the state transfer to at least the first joining node. It is still possible to read from and write to it during that process, but it may be much slower, depending on how much data needs to be sent during the state transfer. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So, it is best to avoid the situation where only one node is up.

If you restart node A and then node B, you may want to make sure node B does not use node A as the state transfer donor: node A may not have all the needed writesets in its gcache. Specify node C as the donor in your configuration file and start the mysql service:

    $ systemctl start mysql\n
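A minimal sketch of pinning the donor in node B\u2019s configuration file, assuming node C\u2019s wsrep_node_name is pxc3 (the wsrep_sst_donor option accepts a node name):

[mysqld]\nwsrep_sst_donor=pxc3\n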

    See also

    Galera Documentation: wsrep_sst_donor option

    "},{"location":"crash-recovery.html#scenario-3-all-three-nodes-are-gracefully-stopped","title":"Scenario 3: All three nodes are gracefully stopped","text":"

    The cluster is completely stopped and the problem is to initialize it again. It is important that a PXC node writes its last executed position to the grastate.dat file.
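You can inspect this position on each node before deciding where to bootstrap; a minimal check, assuming the default data directory:

$ grep seqno /var/lib/mysql/grastate.dat\n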

By comparing the seqno in this file, you can see which node is the most advanced one (most likely the last stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform the full SST to join the cluster initialized from the less advanced one. As a result, some transactions will be lost. To bootstrap the first node, invoke the startup script like this:

    $ systemctl start mysql@bootstrap.service\n

    Note

    Even though you bootstrap from the most advanced node, the other nodes have a lower sequence number. They will still have to join via the full SST because the Galera Cache is not retained on restart.

    For this reason, it is recommended to stop writes to the cluster before its full shutdown, so that all nodes can stop at the same position. See also pc.recovery.

    "},{"location":"crash-recovery.html#scenario-4-one-node-disappears-from-the-cluster","title":"Scenario 4: One node disappears from the cluster","text":"

    This is the case when one node becomes unavailable due to power outage, hardware failure, kernel panic, mysqld crash, kill -9 on mysqld pid, etc.

The two remaining nodes notice that the connection to node A is down and start trying to re-connect to it. After several timeouts, node A is removed from the cluster. The quorum is saved (2 out of 3 nodes are up), so no service disruption happens. After it is restarted, node A joins automatically (as described in Scenario 1: Node A is gracefully stopped).

    "},{"location":"crash-recovery.html#scenario-5-two-nodes-disappear-from-the-cluster","title":"Scenario 5: Two nodes disappear from the cluster","text":"

Two nodes are not available and the remaining node (node C) is not able to form the quorum alone. The cluster has to switch to a non-primary mode, where MySQL refuses to serve any SQL queries. In this state, the mysqld process on node C is still running and can be connected to, but any statement related to data fails with an error:

    > SELECT * FROM test.sbtest1;\n
    The error message
    ERROR 1047 (08S01): WSREP has not yet prepared node for application use\n

    Reads are possible until node C decides that it cannot access node A and node B. New writes are forbidden.

    As soon as the other nodes become available, the cluster is formed again automatically. If node B and node C were just network-severed from node A, but they can still reach each other, they will keep functioning as they still form the quorum.

    If node A and node B crashed, you need to enable the primary component on node C manually, before you can bring up node A and node B. The command to do this is:

    > SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

This approach works only if the other nodes are down when you run the command. Otherwise, you end up with two clusters having different data.

    See also

    Adding Nodes to Cluster

    "},{"location":"crash-recovery.html#scenario-6-all-nodes-went-down-without-a-proper-shutdown-procedure","title":"Scenario 6: All nodes went down without a proper shutdown procedure","text":"

    This scenario is possible in case of a datacenter power failure or when hitting a MySQL or Galera bug. Also, it may happen as a result of data consistency being compromised where the cluster detects that each node has different data. The grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

    $ cat /var/lib/mysql/grastate.dat\n# GALERA saved state\nversion: 2.1\nuuid: 220dcdcb-1629-11e4-add3-aec059ad3734\nseqno: -1\nsafe_to_bootstrap: 0\n

In this case, you cannot be sure that all nodes are consistent with each other. We cannot use the safe_to_bootstrap variable to determine the node that has the last transaction committed because it is set to 0 for each node. An attempt to bootstrap from such a node will fail unless you start mysqld with the --wsrep-recover parameter:

    $ mysqld --wsrep-recover\n

    Search the output for the line that reports the recovered position after the node UUID (1122 in this case):

    Expected output
    ...\n... [Note] WSREP: Recovered position: 220dcdcb-1629-11e4-add3-aec059ad3734:1122\n...\n

The node where the recovered position is marked by the greatest number is the best bootstrap candidate. In its grastate.dat file, set the safe_to_bootstrap variable to 1. Then, bootstrap from this node.
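A minimal sketch of these two steps, assuming the default data directory and the systemd units used earlier in this guide:

$ sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat\n$ systemctl start mysql@bootstrap.service\n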

    Note

After a shutdown, you can bootstrap from the node which is marked as safe in the grastate.dat file.

    ...\nsafe_to_bootstrap: 1\n...\n

    See also

    Galera Documentation Introducing the Safe-To-Bootstrap feature in Galera Cluster

    In recent Galera versions, the option pc.recovery (enabled by default) saves the cluster state into a file named gvwstate.dat on each member node. As the name of this option suggests (pc \u2013 primary component), it saves only a cluster being in the PRIMARY state. An example content of the file may look like this:

    cat /var/lib/mysql/gvwstate.dat\nmy_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9\n#vwbeg\nview_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3\nbootstrap: 0\nmember: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0\nmember: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0\nmember: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0\n#vwend\n

We can see a three-node cluster with all members up. Thanks to this feature, the nodes will try to restore the primary component once all the members start to see each other. This makes the PXC cluster recover automatically from being powered down without any manual intervention. The recovery progress is also reported in the server logs.

    "},{"location":"crash-recovery.html#scenario-7-the-cluster-loses-its-primary-state-due-to-split-brain","title":"Scenario 7: The cluster loses its primary state due to split brain","text":"

For the purpose of this example, let\u2019s assume we have a cluster that consists of an even number of nodes: six, for example. Three of them are in one location while the other three are in another location, and they lose network connectivity. It is best practice to avoid such a topology: if you cannot have an odd number of real nodes, you can use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes. But when the split brain happens anyway, none of the separated groups can maintain the quorum: all nodes stop serving requests, and both parts of the cluster continuously try to re-connect.

    If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as described in Scenario 5: Two nodes disappear from the cluster

    > SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

    After this, you are able to work on the manually restored part of the cluster, and the other half should be able to automatically re-join using IST as soon as the network link is restored.

    Warning

If you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging away from each other. Restoring a network link in this case will not make them re-join until the nodes are restarted and the members specified in the configuration file are connected again.

The Galera replication model truly cares about data consistency: once an inconsistency is detected, a node that cannot execute a row change statement due to a data difference performs an emergency shutdown, and the only way to bring the node back into the cluster is via a full SST.

    Based on material from Percona Database Performance Blog

    This article is based on the blog post Galera replication - how to recover a PXC cluster by Przemys\u0142aw Malkowski: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

    "},{"location":"crash-recovery.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"data-at-rest-encryption.html","title":"Data at Rest Encryption","text":""},{"location":"data-at-rest-encryption.html#introduction","title":"Introduction","text":"

Data at rest encryption refers to encrypting data stored on a disk on a server. If an unauthorized user accesses the data files from the file system, encryption ensures the user cannot read the file contents. Percona Server allows you to enable, disable, and apply encryption to the following objects:

Data in transit is defined as data that is transmitted to another node or to a client. Encrypted data in transit uses an SSL connection.

    Percona XtraDB Cluster 8.0 supports all data at rest generally-available encryption features available from Percona Server for MySQL 8.0.

    "},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_file-plugin","title":"Configure PXC to use keyring_file plugin","text":""},{"location":"data-at-rest-encryption.html#configuration","title":"Configuration","text":"

Percona XtraDB Cluster inherits the Percona Server for MySQL behavior to configure the keyring_file plugin. The following example illustrates using the plugin. Review Use the keyring component or keyring plugin for the latest information on the keyring component and plugin.

    Note

    The keyring_file plugin should not be used for regulatory compliance.

    Install the plugin and add the following options in the configuration file:

    [mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=<PATH>/keyring\n

    The SHOW PLUGINS statement checks if the plugin has been successfully loaded.
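For example, you can filter the plugin list to confirm that the keyring plugin is ACTIVE; a minimal check:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';\n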

    Note

    PXC recommends the same configuration on all cluster nodes, and all nodes should have the keyring configured. A mismatch in the keyring configuration does not allow the JOINER node to join the cluster.

    If the user has a bootstrapped node with keyring enabled, then upcoming cluster nodes inherit the keyring (the encrypted key) from the DONOR node.

    "},{"location":"data-at-rest-encryption.html#usage","title":"Usage","text":"

    XtraBackup re-encrypts the data using a transition-key and the JOINER node re-encrypts it using a newly generated master-key.

    Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible, as in higher version JOINER can join from lower version DONOR, but not vice-versa.

    Percona XtraDB Cluster does not allow the combination of nodes with encryption and nodes without encryption to maintain data consistency. For example, the user creates node-1 with encryption (keyring) enabled and node-2 with encryption (keyring) disabled. If the user attempts to create a table with encryption on node-1, the creation fails on node-2, causing data inconsistency. A node fails to start if it fails to load the keyring plugin.

    Note

If the user does not specify the keyring parameters, the node does not know that it must load the keyring. The JOINER node may start, but it eventually shuts down when a DML-level inconsistency with an encrypted tablespace is detected.

    If a node does not have an encrypted tablespace, the keyring is not generated, and the keyring file is empty. Creating an encrypted table on the node generates the keyring.

Key rotation is an operation that is local to the node; you can rotate the key as needed. The ALTER INSTANCE ROTATE INNODB MASTER KEY statement is not replicated across the cluster.
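For illustration, creating an encrypted table and then rotating the master key locally might look like the following (the table name t1 is only an example):

mysql> CREATE TABLE t1 (id INT PRIMARY KEY) ENCRYPTION='Y';\nmysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;\n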

    The JOINER node generates its keyring.

    "},{"location":"data-at-rest-encryption.html#compatibility","title":"Compatibility","text":"

    Keyring (or, more generally, the Percona XtraDB Cluster SST process) is backward compatible. A higher version JOINER can join from lower version DONOR, but not vice-versa.

    "},{"location":"data-at-rest-encryption.html#configure-pxc-to-use-keyring_vault-plugin","title":"Configure PXC to use keyring_vault plugin","text":""},{"location":"data-at-rest-encryption.html#keyring_vault","title":"keyring_vault","text":"

    The keyring_vault plugin allows storing the master-key in vault-server (vs. local file as in case of keyring_file).

    Warning

The rsync tool does not support keyring_vault. Any rsync-based SST on a joiner is aborted if keyring_vault is configured.

    "},{"location":"data-at-rest-encryption.html#configuration_1","title":"Configuration","text":"

    Configuration options are the same as upstream. The my.cnf configuration file should contain the following options:

    [mysqld]\nearly-plugin-load=\"keyring_vault=keyring_vault.so\"\nkeyring_vault_config=\"<PATH>/keyring_vault_n1.conf\"\n

    Also, keyring_vault_n1.conf file should contain the following:

    vault_url = http://127.0.0.1:8200\nsecret_mount_point = secret1\ntoken = e0345eb4-35dd-3ddd-3b1e-e42bb9f2525d\nvault_ca = /data/keyring_vault_confs/vault_ca.crt\n

    The detailed description of these options can be found in the upstream documentation.

    Vault-server is an external server, so make sure the PXC node can reach the server.

    Note

    Percona XtraDB Cluster recommends using the same keyring_plugin type on all cluster nodes. Mixing the keyring plugin types is recommended only while transitioning from keyring_file -> keyring_vault or vice-versa.

Nodes do not all need to refer to the same vault server; whatever vault server is used must be accessible from the respective node. Nodes also do not all need to use the same mount point.
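Before starting the node, you can verify from the node itself that the vault server is reachable; a minimal check, assuming the vault_url shown in the example configuration above:

$ curl -s http://127.0.0.1:8200/v1/sys/health\n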

If the node is not able to reach or connect to the vault server, an error is reported during server boot, and the node refuses to start:

    The warning message
    2018-05-29T03:54:33.859613Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:54:33.977145Z 0 [ERROR] Plugin keyring_vault reported:\n'CURL returned this error code: 7 with error message : Failed to connect\nto 127.0.0.1 port 8200: Connection refused'\n

If some nodes of the cluster are unable to connect to the vault server, this affects only those specific nodes: for example, if node-1 can connect and node-2 cannot, only node-2 refuses to start. Also, if the server has a pre-existing encrypted object and fails to connect to the vault server on reboot, the object is not accessible.

If the vault server is accessible but the authentication credentials are incorrect, the consequences are the same, and the corresponding error looks like the following:

    The warning message
    2018-05-29T03:58:54.461911Z 0 [Warning] Plugin keyring_vault reported:\n'There is no vault_ca specified in keyring_vault's configuration file.\nPlease make sure that Vault's CA certificate is trusted by the machine\nfrom which you intend to connect to Vault.'\n2018-05-29T03:58:54.577477Z 0 [ERROR] Plugin keyring_vault reported:\n'Could not retrieve list of keys from Vault. Vault has returned the\nfollowing error(s): [\"permission denied\"]'\n

In case of an accessible vault-server with the wrong mount point, there is no error during server boot, but operations that require the keyring fail later, for example when an encrypted table is created:

    mysql> CREATE TABLE t1 (c1 INT, PRIMARY KEY pk(c1)) ENCRYPTION='Y';\n
    Expected output
    ERROR 3185 (HY000): Can't find master key from keyring, please check keyring\nplugin is loaded.\n\n... [ERROR] Plugin keyring_vault reported: 'Could not write key to Vault. ...\n... [ERROR] Plugin keyring_vault reported: 'Could not flush keys to keyring'\n
    "},{"location":"data-at-rest-encryption.html#mix-keyring-plugin-types","title":"Mix keyring plugin types","text":"

    With XtraBackup introducing transition-key logic, it is now possible to mix and match keyring plugins. For example, the user has node-1 configured to use the keyring_file plugin and node-2 configured to use keyring_vault.

    Note

    Percona recommends the same configuration for all the nodes of the cluster. A mix and match in keyring plugin types is recommended only during the transition from one keying type to another.

    "},{"location":"data-at-rest-encryption.html#temporary-file-encryption","title":"Temporary file encryption","text":""},{"location":"data-at-rest-encryption.html#migrate-keys-between-keyring-keystores","title":"Migrate keys between keyring keystores","text":"

    Percona XtraDB Cluster supports key migration between keystores. The migration can be performed offline or online.

    "},{"location":"data-at-rest-encryption.html#offline-migration","title":"Offline migration","text":"

    In offline migration, the node to migrate is shut down, and the migration server takes care of migrating keys for the said server to a new keystore.

    For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file. To migrate the n2 node to use keyring_vault, use the following procedure:

    1. Shut down the n2 node.

    2. Start the Migration Server (mysqld with a special option).

    3. The Migration Server copies the keys from the n2 keyring file and adds them to the vault server.

    4. Start the n2 node with the vault parameter, and the keys are available.

Here is what the migration server output should look like:

    Expected output
    /dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node2/keyring \\\n--keyring-migration-destination=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/vault/keyring_vault.cnf &\n\n... [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use\n    --explicit_defaults_for_timestamp server option (see documentation for more details).\n... [Note] --secure-file-priv is set to NULL. Operations related to importing and\n    exporting data are disabled\n... [Warning] WSREP: Node is not a cluster node. Disabling pxc_strict_mode\n... [Note] /dev/shm/pxc80/bin/mysqld (mysqld 8.0-debug) starting as process 5710 ...\n... [Note] Keyring migration successful.\n

    On a successful migration, the destination keystore receives additional migrated keys (pre-existing keys in the destination keystore are not touched or removed). The source keystore retains the keys as the migration performs a copy operation and not a move operation.

    If the migration fails, the destination keystore is unchanged.

    "},{"location":"data-at-rest-encryption.html#online-migration","title":"Online migration","text":"

    In online migration, the node to migrate is kept running, and the migration server takes care of migrating keys for the said server to a new keystore by connecting to the node.

    For example, a cluster has three Percona XtraDB Cluster nodes, n1, n2, and n3. The nodes use the keyring_file. Migrate the n3 node to use keyring_vault using the following procedure:

    1. Start the Migration Server (mysqld with a special option).

    2. The Migration Server copies the keys from the n3 keyring file and adds them to the vault server.

    3. Restart the n3 node with the vault parameter, and the keys are available.

    /dev/shm/pxc80/bin/mysqld --defaults-file=/dev/shm/pxc80/copy_mig.cnf \\\n--keyring-migration-source=keyring_vault.so \\\n--keyring_vault_config=/dev/shm/pxc80/keyring_vault3.cnf \\\n--keyring-migration-destination=keyring_file.so \\\n--keyring_file_data=/dev/shm/pxc80/node3/keyring \\\n--keyring-migration-host=localhost \\\n--keyring-migration-user=root \\\n--keyring-migration-port=16300 \\\n--keyring-migration-password='' &\n

    On a successful migration, the destination keystore receives the additional migrated keys. Any pre-existing keys in the destination keystore are unchanged. The source keystore retains the keys as the migration performs a copy operation and not a move operation.

    If the migration fails, the destination keystore is not changed.

    "},{"location":"data-at-rest-encryption.html#migration-server-options","title":"Migration server options","text":"

    Prerequisite for migration:

    Make sure to pass required keyring options and other configuration parameters for the two keyring plugins. For example, if keyring_file is one of the plugins, you must explicitly configure the keyring_file_data system variable in the my.cnf file.
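
    For example, a minimal sketch of the keyring-related lines in my.cnf, reusing the keyring path from the offline migration example above:

    [mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=/dev/shm/pxc80/node2/keyring\n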

    Other non-keyring options may be required as well. One way to specify these options is by using --defaults-file to name an option file that contains the required options.

    [mysqld]\nbasedir=/dev/shm/pxc80\ndatadir=/dev/shm/pxc80/copy_mig\nlog-error=/dev/shm/pxc80/logs/copy_mig.err\nsocket=/tmp/copy_mig.sock\nport=16400\n

    See also

    Encrypt traffic documentation

    Percona Server for MySQL Documentation: Data-at-Rest Encryption https://www.percona.com/doc/percona-server/8.0/security/data-at-rest-encryption.html#data-at-rest-encryption

    "},{"location":"data-at-rest-encryption.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"docker.html","title":"Running Percona XtraDB Cluster in a Docker Container","text":"

    Docker images of Percona XtraDB Cluster are hosted publicly on Docker Hub at https://hub.docker.com/r/percona/percona-xtradb-cluster/.

    For more information about using Docker, see the Docker Docs. Make sure that you are using the latest version of Docker. The ones provided via apt and yum may be outdated and cause errors.
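
    For example, to check which Docker version is installed on the host:

    $ docker --version\n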

    We gather Telemetry data in the Percona packages and Docker images.

    Note

    By default, Docker pulls the image from Docker Hub if the image is not available locally.

    The image contains only the most essential binaries for Percona XtraDB Cluster to run. Some utilities included in a Percona Server for MySQL or MySQL installation might be missing from the Percona XtraDB Cluster Docker image.

    The following procedure describes how to set up a simple 3-node cluster for evaluation and testing purposes. Do not use these instructions in a production environment because the MySQL certificates generated in this procedure are self-signed. For a production environment, you should generate and store the certificates to be used by Docker.

    In this procedure, all of the nodes run Percona XtraDB Cluster 8.0 in separate containers on one host:

    1. Create a ~/pxc-docker-test/config directory.

    2. Create a custom.cnf file with the following contents, and place the file in the new directory:

      [mysqld]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n\n[client]\nssl-ca = /cert/ca.pem\nssl-cert = /cert/client-cert.pem\nssl-key = /cert/client-key.pem\n\n[sst]\nencrypt = 4\nssl-ca = /cert/ca.pem\nssl-cert = /cert/server-cert.pem\nssl-key = /cert/server-key.pem\n
    3. Create a cert directory and generate self-signed SSL certificates on the host node:

      $ mkdir -m 777 -p ~/pxc-docker-test/cert\ndocker run --name pxc-cert --rm -v ~/pxc-docker-test/cert:/cert\npercona/percona-xtradb-cluster:8.0 mysql_ssl_rsa_setup -d /cert\n
    4. Create a Docker network:

      $ docker network create pxc-network\n
    5. Bootstrap the cluster (create the first node):

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  --name=pxc-node1 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n
    6. Join the second node:

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  -e CLUSTER_JOIN=pxc-node1 \\\n  --name=pxc-node2 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n
    7. Join the third node:

      $ docker run -d \\\n  -e MYSQL_ROOT_PASSWORD=test1234# \\\n  -e CLUSTER_NAME=pxc-cluster1 \\\n  -e CLUSTER_JOIN=pxc-node1 \\\n  --name=pxc-node3 \\\n  --net=pxc-network \\\n  -v ~/pxc-docker-test/cert:/cert \\\n  -v ~/pxc-docker-test/config:/etc/percona-xtradb-cluster.conf.d \\\n  percona/percona-xtradb-cluster:8.0\n

    To verify the cluster is available, do the following:

    1. Access the MySQL client. For example, on the first node:

      $ sudo docker exec -it pxc-node1 /usr/bin/mysql -uroot -ptest1234#\n
      Expected output
      mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 12\n...\nYou are enforcing ssl connection via unix socket. Please consider\nswitching ssl off as it does not make connection via unix socket\nany more secure\n\nmysql>\n
    2. View the wsrep status variables:

      mysql> show status like 'wsrep%';\n
      Expected output
      +------------------------------+-------------------------------------------------+\n| Variable_name                | Value                                           |\n+------------------------------+-------------------------------------------------+\n| wsrep_local_state_uuid       | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |\n...\n| wsrep_local_state_comment    | Synced                                          |\n...\n| wsrep_incoming_addresses     | 172.18.0.2:3306,172.18.0.3:3306,172.18.0.4:3306 |\n...\n| wsrep_cluster_conf_id        | 3                                               |\n| wsrep_cluster_size           | 3                                               |\n| wsrep_cluster_state_uuid     | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |\n| wsrep_cluster_status         | Primary                                         |\n| wsrep_connected              | ON                                              |\n...\n| wsrep_ready                  | ON                                              |\n+------------------------------+-------------------------------------------------+\n59 rows in set (0.02 sec)\n
    "},{"location":"docker.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"encrypt-traffic.html","title":"Encrypt PXC traffic","text":"

    There are two kinds of traffic in Percona XtraDB Cluster:

    1. Client-server traffic (between client applications and cluster nodes),

    2. Replication traffic, which includes SST, IST, write-set replication, and various service messages.

    Percona XtraDB Cluster supports encryption for all types of traffic. Replication traffic encryption can be configured either automatically or manually.

    "},{"location":"encrypt-traffic.html#encrypt-client-server-communication","title":"Encrypt client-server communication","text":"

    Percona XtraDB Cluster uses the underlying MySQL encryption mechanism to secure communication between client applications and cluster nodes.

    MySQL generates default key and certificate files and places them in the data directory. You can override auto-generated files with manually created ones, as described in the section Generate keys and certificates manually.

    The auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes.

    Specify the following settings in the my.cnf configuration file for each node:

    [mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n\n[client]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/client-cert.pem\nssl-key=/etc/mysql/certs/client-key.pem\n

    After it is restarted, the node uses these files to encrypt communication with clients. MySQL clients require only the second part of the configuration to communicate with cluster nodes.

    MySQL generates the default key and certificate files and places them in the data directory. You can either use them or generate new certificates. To generate new certificates, refer to the Generate keys and certificates manually section.

    "},{"location":"encrypt-traffic.html#encrypt-replication-traffic","title":"Encrypt replication traffic","text":"

    Replication traffic refers to the inter-node traffic, which includes SST traffic, IST traffic, and write-set replication traffic.

    The traffic of each type is transferred via a different channel, so it is important to configure secure channels for all three to completely secure the replication traffic.

    Percona XtraDB Cluster supports a single configuration option which helps to secure the complete replication traffic, and is often referred to as SSL automatic configuration. You can also configure the security of each channel by specifying independent parameters.

    "},{"location":"encrypt-traffic.html#ssl-automatic-configuration","title":"SSL automatic configuration","text":"

    The automatic configuration of the SSL encryption needs a key and certificate files. MySQL generates a default key and certificate files and places them in the data directory.

    Important

    It is important that your cluster use the same SSL certificates on all nodes.

    "},{"location":"encrypt-traffic.html#enable-pxc-encrypt-cluster-traffic","title":"Enable pxc-encrypt-cluster-traffic","text":"

    Percona XtraDB Cluster includes the pxc-encrypt-cluster-traffic variable that enables automatic configuration of SSL encryption, thereby encrypting SST, IST, and replication traffic.

    By default, pxc-encrypt-cluster-traffic is enabled, so replication uses a secured channel. This variable is not dynamic, so it cannot be changed at runtime.

    When enabled, pxc-encrypt-cluster-traffic applies the following settings: encrypt, ssl_key, ssl-ca, ssl-cert.

    Setting pxc-encrypt-cluster-traffic=ON has the effect of applying the following settings in the my.cnf configuration file:

    [mysqld]\nwsrep_provider_options=\"socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem\"\n\n[sst]\nencrypt=4\nssl-key=server-key.pem\nssl-ca=ca.pem\nssl-cert=server-cert.pem\n

    For wsrep_provider_options, only the mentioned options are affected (socket.ssl_key, socket.ssl_cert, and socket.ssl_ca); the remaining options are not modified.

    Important

    Disabling pxc-encrypt-cluster-traffic

    The default value of pxc-encrypt-cluster-traffic helps improve the security of your system.

    When pxc-encrypt-cluster-traffic is not enabled, anyone with access to your network can connect to any PXC node, either as a client or as another node joining the cluster. This potentially lets them query your data or get a complete copy of it.

    If you must disable pxc-encrypt-cluster-traffic, stop the cluster, set pxc-encrypt-cluster-traffic=OFF in the [mysqld] section of the configuration file on each node, and then restart the cluster.
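
    For example, a minimal sketch of the change in each node\u2019s configuration file:

    [mysqld]\npxc-encrypt-cluster-traffic=OFF\n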

    The automatic configuration of the SSL encryption needs key and certificate files. MySQL generates default key and certificate files and places them in the data directory. These auto-generated files are suitable for automatic SSL configuration, but you should use the same key and certificate files on all nodes. You can also override the auto-generated files with manually created ones, as covered in Generate keys and certificates manually.

    The necessary key and certificate files are first searched for in the ssl-ca, ssl-cert, and ssl-key options under [mysqld]. If these options are not set, the data directory is searched for the ca.pem, server-cert.pem, and server-key.pem files.

    Note

    The [sst] section is not searched.

    If all three files are found, they are used to configure encryption. If any of the files is missing, a fatal error is generated.

    "},{"location":"encrypt-traffic.html#ssl-manual-configuration","title":"SSL manual configuration","text":"

    If you want to enable encryption for a specific channel only, or to use different certificates or some other mix-and-match, you can opt for manual configuration. It gives end users more flexibility.

    To enable encryption manually, specify the location of the required key and certificate files in the Percona XtraDB Cluster configuration. If you do not have the necessary files, see Generate keys and certificates manually.

    Note

    Encryption settings are not dynamic. To enable encryption on a running cluster, you need to restart the entire cluster.

    There are three aspects of Percona XtraDB Cluster operation, where you can enable encryption:

    "},{"location":"encrypt-traffic.html#encrypt-sst-traffic","title":"Encrypt SST traffic","text":"

    This refers to full data transfer that usually occurs when a new node (JOINER) joins the cluster and receives data from an existing node (DONOR).

    For more information, see State snapshot transfer.

    Note

    If the keyring_file plugin is used, SST encryption is mandatory: when copying encrypted data via SST, the keyring must be sent over with the files for decryption. In this case, set the following options in my.cnf on all nodes:

    early-plugin-load=keyring_file.so\nkeyring-file-data=/path/to/keyring/file\n

    The cluster will not work if keyring configuration across nodes is different.

    The only available SST method is xtrabackup-v2 which uses Percona XtraBackup.

    "},{"location":"encrypt-traffic.html#xtrabackup","title":"xtrabackup","text":"

    This is the only available SST method (the wsrep_sst_method is always set to xtrabackup-v2), which uses Percona XtraBackup to perform non-blocking transfer of files. For more information, see Percona XtraBackup SST Configuration.

    The encryption mode for this method is selected using the encrypt option.

    To enable encryption for SST using XtraBackup, specify the location of the keys and certificate files in each node\u2019s configuration under [sst]:

    [sst]\nencrypt=4\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n

    Note

    SSL clients require DH parameters to be at least 1024 bits, due to the logjam vulnerability. However, versions of socat earlier than 1.7.3 use 512-bit parameters. If a dhparams.pem file of the required length is not found in the data directory during SST, it is generated with 2048 bits, which can take several minutes. To avoid this delay, create the dhparams.pem file manually and place it in the data directory before joining the node to the cluster:

    $ openssl dhparam -out /path/to/datadir/dhparams.pem 2048\n

    For more information, see this blog post.

    "},{"location":"encrypt-traffic.html#encrypt-replicationist-traffic","title":"Encrypt replication/IST traffic","text":"

    Replication traffic refers to the following:

    All this traffic is transferred via the same underlying communication channel (gcomm). Securing this channel will ensure that IST traffic, write-set replication, and service messages are encrypted. (For IST, a separate channel is configured using the same configuration parameters, so the two are described together.)

    To enable encryption for all these processes, define the paths to the key, certificate and certificate authority files using the following wsrep provider options:

    To set these options, use the wsrep_provider_options variable in the configuration file:

    $ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/ca.pem;socket.ssl_cert=/etc/mysql/certs/server-cert.pem;socket.ssl_key=/etc/mysql/certs/server-key.pem\"\n

    Note

    You must use the same key and certificate files on all nodes, preferably those used for Encrypt client-server communication.

    Check upgrade-certificate section on how to upgrade existing certificates.

    "},{"location":"encrypt-traffic.html#generate-keys-and-certificates-manually","title":"Generate keys and certificates manually","text":"

    As mentioned above, MySQL generates default key and certificate files and places them in the data directory. If you want to override these certificates, the following new sets of files can be generated:

    These files should be generated using OpenSSL.

    Note

    The Common Name value used for the server and client keys and certificates must differ from that value used for the CA certificate.

    Generate CA key and certificateGenerate server key and certificateGenerate client key and certificate

    The Certificate Authority is used to verify the signature on certificates.

    1. Generate the CA key file:

      $ openssl genrsa 2048 > ca-key.pem\n
    2. Generate the CA certificate file:

      $ openssl req -new -x509 -nodes -days 3600\n    -key ca-key.pem -out ca.pem\n
    1. Generate the server key file:

      $ openssl req -newkey rsa:2048 -days 3600 \\\n    -nodes -keyout server-key.pem -out server-req.pem\n
    2. Remove the passphrase:

      $ openssl rsa -in server-key.pem -out server-key.pem\n
    3. Generate the server certificate file:

      $ openssl x509 -req -in server-req.pem -days 3600 \\\n    -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n    -out server-cert.pem\n
    1. Generate the client key file:

      $ openssl req -newkey rsa:2048 -days 3600 \\\n    -nodes -keyout client-key.pem -out client-req.pem\n
    2. Remove the passphrase:

      $ openssl rsa -in client-key.pem -out client-key.pem\n
    3. Generate the client certificate file:

      $ openssl x509 -req -in client-req.pem -days 3600 \\\n   -CA ca.pem -CAkey ca-key.pem -set_serial 01 \\\n   -out client-cert.pem\n
    "},{"location":"encrypt-traffic.html#verify-certificates","title":"Verify certificates","text":"

    To verify that the server and client certificates are correctly signed by the CA certificate, run the following command:

    $ openssl verify -CAfile ca.pem server-cert.pem client-cert.pem\n

    If the verification is successful, you should see the following output:

    server-cert.pem: OK\nclient-cert.pem: OK\n
    "},{"location":"encrypt-traffic.html#failed-validation-caused-by-matching-cn","title":"Failed validation caused by matching CN","text":"

    Sometimes, an SSL configuration may fail if the certificate and the CA files contain the same Common Name (CN).

    To check if this is the case, run the openssl command as follows and verify that the CN field differs for the Subject and Issuer lines.

    $ openssl x509 -in server-cert.pem -text -noout\n

    Incorrect values

    Certificate:\nData:\nVersion: 1 (0x0)\nSerial Number: 1 (0x1)\nSignature Algorithm: sha256WithRSAEncryption\nIssuer: CN=www.percona.com, O=Database Performance., C=US\n...\nSubject: CN=www.percona.com, O=Database Performance., C=AU\n...\n

    To obtain a more compact output, run openssl with the -subject and -issuer parameters:

    $ openssl x509 -in server-cert.pem -subject -issuer -noout\n
    Expected output
    subject= /CN=www.percona.com/O=Database Performance./C=AU\nissuer= /CN=www.percona.com/O=Database Performance./C=US\n
    "},{"location":"encrypt-traffic.html#deploy-keys-and-certificates","title":"Deploy keys and certificates","text":"

    Use a secure method (for example, scp or sftp) to send the key and certificate files to each node. Place them under the /etc/mysql/certs/ directory or a similar location where you can find them later.
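
    For example, a sketch using scp; the node address is taken from the examples in this guide, while the user and exact paths are assumptions to adjust for your environment:

    $ scp ca.pem server-cert.pem server-key.pem client-cert.pem client-key.pem \\\n    user@192.168.70.62:/etc/mysql/certs/\n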

    Note

    Make sure that this directory is protected with proper permissions. Most likely, you only want to give read permissions to the user running mysqld.
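
    For example, a sketch that assumes mysqld runs as the mysql user; adjust the user and path to your system:

    $ chown -R mysql:mysql /etc/mysql/certs\n$ chmod 700 /etc/mysql/certs\n$ chmod 600 /etc/mysql/certs/*.pem\n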

    The following files are required:

    This file is used to verify signatures.

    These files are used to secure database server activity and write-set replication traffic.

    These files are required only if the node should act as a MySQL client. For example, if you are planning to perform SST using mysqldump.

    Note

    Upgrade certificates subsection covers the details on upgrading certificates, if necessary.

    "},{"location":"encrypt-traffic.html#upgrade-certificates","title":"Upgrade certificates","text":"

    The following procedure shows how to upgrade certificates used for securing replication traffic when there are two nodes in the cluster.

    1. Restart the first node with the socket.ssl_ca option set to a combination of the old and new certificates in a single file.

      For example, you can merge contents of old-ca.pem and new-ca.pem into upgrade-ca.pem as follows:

      $ cat old-ca.pem > upgrade-ca.pem && \\\ncat new-ca.pem >> upgrade-ca.pem\n

      Set the wsrep_provider_options variable as follows:

      $ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/upgrade-ca.pem;socket.ssl_cert=/etc/mysql/certs/old-cert.pem;socket.ssl_key=/etc/mysql/certs/old-key.pem\"\n
    2. Restart the second node with the socket.ssl_ca, socket.ssl_cert, and socket.ssl_key options set to the corresponding new certificate files.

      $ wsrep_provider_options=\"socket.ssl=yes;socket.ssl_ca=/etc/mysql/certs/new-ca.pem;socket.ssl_cert=/etc/mysql/certs/new-cert.pem;socket.ssl_key=/etc/mysql/certs/new-key.pem\"\n
    3. Restart the first node with the new certificate files, as in the previous step.

    4. You can remove the old certificate files.

    "},{"location":"encrypt-traffic.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"failover.html","title":"Cluster failover","text":"

    Cluster membership is determined simply by which nodes are connected to the rest of the cluster; there is no configuration setting explicitly defining the list of all possible cluster nodes. Therefore, every time a node joins the cluster, the total size of the cluster is increased and when a node leaves (gracefully) the size is decreased.

    The size of the cluster is used to determine the required votes to achieve quorum. A quorum vote is done when a node or nodes are suspected to no longer be part of the cluster (they do not respond). This no-response timeout is the evs.suspect_timeout setting in wsrep_provider_options (default 5 sec). When a node goes down ungracefully, write operations are blocked on the cluster for slightly longer than that timeout.
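
    For example, a sketch of raising this timeout in my.cnf; the 15-second value is only an illustration:

    [mysqld]\nwsrep_provider_options=\"evs.suspect_timeout=PT15S\"\n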

    Once a node (or nodes) is determined to be disconnected, the remaining nodes cast a quorum vote. If the majority of nodes from before the disconnect are still connected, that partition remains up. In the case of a network partition, some nodes will be alive and active on each side of the network disconnect. In this case, only the partition with quorum will continue. The partition(s) without quorum will change to the non-primary state.

    As a consequence, it\u2019s not possible to have safe automatic failover in a 2-node cluster, because the failure of one node will cause the remaining node to become non-primary. Moreover, any cluster with an even number of nodes (say two nodes in two different switches) has some possibility of a split brain situation, when neither partition is able to retain quorum if the connection between them is lost, and so they both become non-primary.

    Therefore, for automatic failover, the rule of 3s is recommended. It applies at various levels of your infrastructure, depending on how far the cluster is spread out to avoid single points of failure. For example:

    These rules will prevent split brain situations and ensure automatic failover works correctly.

    "},{"location":"failover.html#use-an-arbitrator","title":"Use an arbitrator","text":"

    If it is too expensive to add a third node, switch, network, or datacenter, you should use an arbitrator. An arbitrator is a voting member of the cluster that can receive and relay replication, but it does not persist any data, and runs its own daemon instead of mysqld. Placing even a single arbitrator in a 3rd location can add split brain protection to a cluster that is spread across only two nodes/locations.

    "},{"location":"failover.html#recover-a-non-primary-cluster","title":"Recover a non-primary cluster","text":"

    It is important to note that the rule of 3s applies only to automatic failover. In the event of a 2-node cluster (or in the event of some other outage that leaves a minority of nodes active), the failure of one node will cause the other to become non-primary and refuse operations. However, you can recover the node from non-primary state using the following command:

    SET GLOBAL wsrep_provider_options='pc.bootstrap=true';\n

    This will tell the node (and all nodes still connected to its partition) that it can become a primary cluster. However, this is only safe to do when you are sure there is no other partition operating in primary as well, or else Percona XtraDB Cluster will allow those two partitions to diverge (and you will end up with two databases that are impossible to re-merge automatically).

    For example, assume there are two data centers, where one is primary and one is for disaster recovery, with an even number of nodes in each. When an extra arbitrator node is run only in the primary data center, the following high availability features will be available:

    "},{"location":"failover.html#other-reading","title":"Other reading","text":""},{"location":"failover.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"faq.html","title":"Frequently asked questions","text":""},{"location":"faq.html#how-do-i-report-bugs","title":"How do I report bugs?","text":"

    All bugs can be reported on JIRA. Please submit error.log files from all the nodes.

    "},{"location":"faq.html#how-do-i-solve-locking-issues-like-auto-increment","title":"How do I solve locking issues like auto-increment?","text":"

    For auto-increment, Percona XtraDB Cluster changes auto_increment_offset for each new node. In a single-node workload, locking is handled in the same way as InnoDB. In case of write load on several nodes, Percona XtraDB Cluster uses optimistic locking, and the application may receive a lock error in response to a COMMIT query.
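
    For example, you can inspect the per-node auto-increment settings that the cluster manages:

    mysql> SHOW VARIABLES LIKE 'auto_increment%';\n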

    "},{"location":"faq.html#what-if-a-node-crashes-and-innodb-recovery-rolls-back-some-transactions","title":"What if a node crashes and InnoDB recovery rolls back some transactions?","text":"

    When a node crashes, after restarting, it will copy the whole dataset from another\u00a0node (if there were changes to data since the crash).

    "},{"location":"faq.html#how-can-i-check-the-galera-node-health","title":"How can I check the Galera node health?","text":"

    To check the health of a Galera node, use the following query:

    mysql> SELECT 1 FROM dual;\n

    The following results of the previous query are possible:

    You can also check a node\u2019s health with the clustercheck script. First set up the clustercheck user:

    mysql> CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD\n'*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';\n

    You can then check a node\u2019s health by running the clustercheck script:

    $ /usr/bin/clustercheck clustercheck password 0\n

    If the node is running, you should get the following status:

    HTTP/1.1 200 OK\nContent-Type: text/plain\nConnection: close\nContent-Length: 40\n\nPercona XtraDB Cluster Node is synced.\n

    If the node is not synced or is offline, the status looks like this:

    HTTP/1.1 503 Service Unavailable\nContent-Type: text/plain\nConnection: close\nContent-Length: 44\n\nPercona XtraDB Cluster Node is not synced.\n

    Note

    The clustercheck script has the following syntax:

    <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>

    Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local

    Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local

    "},{"location":"faq.html#how-does-percona-xtradb-cluster-handle-big-transactions","title":"How does Percona XtraDB Cluster handle big transactions?","text":"

    Percona XtraDB Cluster populates the write set in memory before replication, and this sets the limit for the size of transactions that make sense. There are wsrep variables for the maximum row count and the maximum write-set size to make sure that the server does not run out of memory.
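
    For example, to inspect the current limits on a node:

    mysql> SHOW VARIABLES LIKE 'wsrep_max_ws%';\n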

    "},{"location":"faq.html#is-it-possible-to-have-different-table-structures-on-the-nodes","title":"Is it possible to have different table structures on the nodes?","text":"

    For example, if there are four nodes, with four tables: sessions_a, sessions_b, sessions_c, and sessions_d, and you want each table in a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.

    "},{"location":"faq.html#what-if-a-node-fails-or-there-is-a-network-issue-between-nodes","title":"What if a node fails or there is a network issue between nodes?","text":"

    The quorum mechanism in\u00a0Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.

    The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the sum weight of the nodes in a new component strictly exceeds half that of the preceding Primary Component, minus the nodes which left gracefully.

    The mechanism is described in detail in Galera documentation.

    "},{"location":"faq.html#how-would-the-quorum-mechanism-handle-split-brain","title":"How would the quorum mechanism handle split brain?","text":"

    The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve a split brain. The minimal recommendation is to have 3 nodes. However, it is possible to allow a node to handle traffic with the following option:

    wsrep_provider_options=\"pc.ignore_sb = yes\"\n
    "},{"location":"faq.html#why-a-node-stops-accepting-commands-if-the-other-one-fails-in-a-2-node-setup","title":"Why a node stops accepting commands if the other one fails in a 2-node setup?","text":"

    This is expected behavior to prevent split brain. For more information, see previous question or Galera documentation.

    "},{"location":"faq.html#is-it-possible-to-set-up-a-cluster-without-state-transfer","title":"Is it possible to set up a cluster without state transfer?","text":"

    It is possible in two ways:

    1. By default, Galera reads the starting position from a text file <datadir>/grastate.dat. Make this file identical on all nodes, and there will be no state transfer after starting a node.

    2. Use the wsrep_start_position variable to start the nodes with the same UUID:seqno value.
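
    For example, to see the starting position currently recorded on a node (assuming the default data directory):

    $ cat /var/lib/mysql/grastate.dat\n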

    "},{"location":"faq.html#what-tcp-ports-are-used-by-percona-xtradb-cluster","title":"What TCP ports are used by Percona XtraDB Cluster?","text":"

    You may need to open up to four ports if you are using a firewall:

    1. Regular MySQL port (default is 3306).

    2. Port for group communication (default is 4567). It can be changed using the following option:

      wsrep_provider_options =\"gmcast.listen_addr=tcp://0.0.0.0:4010; \"\n
    3. Port for State Snaphot Transfer (default is 4444). It can be changed using the following option:

      wsrep_sst_receive_address=10.11.12.205:5555\n
    4. Port for Incremental State Transfer (default is the port for group communication + 1, that is, 4568). It can be changed using the following option:

      wsrep_provider_options = \"ist.recv_addr=10.11.12.206:7777; \"\n
    "},{"location":"faq.html#is-there-async-mode-or-only-sync-commits-are-supported","title":"Is there \u201casync\u201d mode or only \u201csync\u201d commits are supported?","text":"

    Percona XtraDB Cluster does not support \u201casync\u201d mode; all commits are synchronous on all nodes. To be precise, the commits are \u201cvirtually\u201d synchronous, which means that what must complete on all nodes is the certification of the transaction, not the physical commit. Certification means a guarantee that the transaction does not conflict with other transactions on the corresponding node.

    "},{"location":"faq.html#does-it-work-with-regular-mysql-replication","title":"Does it work with regular MySQL replication?","text":"

    Yes. On the node you are going to use as the source, enable the log-bin and log-slave-updates options.
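
    For example, a minimal sketch of the relevant lines in the source node\u2019s my.cnf:

    [mysqld]\nlog-bin=mysql-bin\nlog-slave-updates\n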

    "},{"location":"faq.html#why-the-init-script-etcinitdmysql-does-not-start","title":"Why the init script (/etc/init.d/mysql) does not start?","text":"

    Try to disable SELinux with the following command:

    $ echo 0 > /selinux/enforce\n
    "},{"location":"faq.html#what-does-nc-invalid-option-d-in-the-ssterr-log-file-mean","title":"What does \u201cnc: invalid option \u2013 \u2018d\u2019\u201d in the sst.err log file mean?","text":"

    This error is specific to Debian and Ubuntu. Percona XtraDB Cluster uses the netcat-openbsd package. This dependency has been fixed. Future releases of Percona XtraDB Cluster will be compatible with any netcat (see bug PXC-941).

    "},{"location":"faq.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"garbd-howto.html","title":"Set up Galera arbitrator","text":"

    The size of a cluster increases when a node joins the cluster and decreases when a node leaves. A cluster reacts to replication problems with inconsistency voting. The size of the cluster determines the required votes to achieve a quorum. If a node no longer responds and is disconnected from the cluster, the remaining nodes vote. The majority of the nodes that vote are considered to be in the cluster.

    The arbitrator is important if you have an even number of nodes remaining in the cluster. The arbitrator keeps the number of nodes as an odd number, which avoids the split-brain situation.

    A Galera Arbitrator is a lightweight member of a Percona XtraDB Cluster. This member can vote but does not do any replication and is not included in flow control calculations. The Galera Arbitrator is a separate daemon called garbd. You can start this daemon separately from the cluster and run it either as a service or from the shell. You cannot configure this daemon using the my.cnf file.

    Note

    For more information on how to set up a cluster, see the Configuring Percona XtraDB Cluster on Ubuntu or Configuring Percona XtraDB Cluster on CentOS manuals.

    "},{"location":"garbd-howto.html#installation","title":"Installation","text":"

    Galera Arbitrator does not need a dedicated server and can be installed on a machine running other applications. The server must have good network connectivity.

    Galera Arbitrator can be installed from Percona\u2019s repository on Debian/Ubuntu distributions with the following command:

    root@ubuntu:~# apt install percona-xtradb-cluster-garbd\n

    Galera Arbitrator can be installed from Percona\u2019s repository on RedHat or derivative distributions with the following command:

    [root@centos ~]# yum install percona-xtradb-cluster-garbd\n
    "},{"location":"garbd-howto.html#start-garbd-and-configuration","title":"Start garbd and configuration","text":"

    Note

    On Percona XtraDB Cluster 8.0, SSL is enabled by default. To run the Galera Arbitrator, you must copy the SSL certificates and configure garbd to use the certificates.

    It is necessary to specify the cipher. In this example, it is AES128-SHA256. If you do not specify the cipher, an error occurs with a \u201cTerminate called after throwing an instance of \u2018gnu::NotSet\u2019\u201d message.

    For more information, see socket.ssl_cipher

    When starting from the shell, you can set the parameters from the command line or edit the configuration file. This is an example of starting from the command line:

    $ garbd --group=my_ubuntu_cluster \\\n--address=\"gcomm://192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\" \\\n--option=\"socket.ssl=YES; socket.ssl_key=/etc/ssl/mysql/server-key.pem; \\\nsocket.ssl_cert=/etc/ssl/mysql/server-cert.pem; \\\nsocket.ssl_ca=/etc/ssl/mysql/ca.pem; \\\nsocket.ssl_cipher=AES128-SHA256\"\n

    To avoid entering the options each time you start garbd, edit the options in the configuration file. To configure Galera Arbitrator on Ubuntu/Debian, edit the /etc/default/garb file. On RedHat or derivative distributions, the configuration can be found in /etc/sysconfig/garb file.

    The configuration file should look like this after the installation and before you have added your parameters:

    # Copyright (C) 2013-2015 Codership Oy\n# This config file is to be sourced by garb service script.\n\n# REMOVE THIS AFTER CONFIGURATION\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\n# GALERA_NODES=\"\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\n# GALERA_GROUP=\"\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"\"\n

    Add the parameter information about the cluster. For this document, we use the cluster information from Configuring Percona XtraDB Cluster on Ubuntu.

    Note

    Please note that you need to remove the # REMOVE THIS AFTER CONFIGURATION line before you can start the service.

    # This config file is to be sourced by garb service script.\n\n# A comma-separated list of node addresses (address[:port]) in the cluster\nGALERA_NODES=\"192.168.70.61:4567, 192.168.70.62:4567, 192.168.70.63:4567\"\n\n# Galera cluster name, should be the same as on the rest of the nodes.\nGALERA_GROUP=\"my_ubuntu_cluster\"\n\n# Optional Galera internal options string (e.g. SSL settings)\n# see http://galeracluster.com/documentation-webpages/galeraparameters.html\n# GALERA_OPTIONS=\"socket.ssl_cert=/etc/ssl/mysql/server-key.pem;socket./etc/ssl/mysql/server-key.pem\"\n\n# Log file for garbd. Optional, by default logs to syslog\n# Deprecated for CentOS7, use journalctl to query the log for garbd\n# LOG_FILE=\"/var/log/garbd.log\"\n

    You can now start the Galera Arbitrator daemon (garbd) by running:

    On Debian or UbuntuOn Red Hat Enterprise Linux or CentOS
    root@server:~# service garbd start\n
    Expected output
    [ ok ] Starting /usr/bin/garbd: :.\n

    Note

    On systems that run systemd as the default system and service manager, use systemctl instead of service to invoke the command. Currently, both are supported.

    root@server:~# systemctl start garb\n
    root@server:~# service garb start\n
    Expected output
    [ ok ] Starting /usr/bin/garbd: :.\n

    Additionally, you can check the arbitrator status by running:

    On Debian or UbuntuOn Red Hat Enterprise Linux or CentOS
    root@server:~# service garbd status\n
    Expected output
    [ ok ] garb is running.\n
    root@server:~# service garb status\n
    Expected output
    [ ok ] garb is running.\n
    "},{"location":"garbd-howto.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"gcache-record-set-cache-difference.html","title":"Understand GCache and Record-Set cache","text":"

    In Percona XtraDB Cluster, there is a concept of GCache and Record-Set cache (which can also be called transaction write-set cache). The use of these two caches is often confusing if you are running long transactions, because both of them result in the creation of disk-level files. This section describes their main differences.

    "},{"location":"gcache-record-set-cache-difference.html#record-set-cache","title":"Record-Set cache","text":"

    When you run a long-running transaction on any particular node, it will try to append a key for each row that it tries to modify (the key is a unique identifier for the row {db,table,pk.columns}). This information is cached in out-write-set, which is then sent to the group for certification.

    Keys are cached in HeapStore (which has page-size=64K and total-size=4MB). If the transaction data-size outgrows this limit, then the storage is switched from Heap to Page (which has page-size=64MB and total-limit=free-space-on-disk).

    All these limits are non-configurable, but having a memory-page size greater than 4MB per transaction can cause things to stall due to memory pressure, so this limit is reasonable. This is another limitation to address when Galera supports large transactions.

    The same long-running transaction will also generate binlog data that also appends to out-write-set on commit (HeapStore->FileStore). This data can be significant, as it is a binlog image of the rows inserted/updated/deleted by the transaction. The wsrep_max_ws_size variable controls the size of this part of the write-set. The threshold doesn\u2019t consider the size allocated for caching keys and the header.

    If FileStore is used, it creates a file on the disk (with names like xxxx_keys and xxxx_data) to store the cache data. These files are kept until the transaction is committed, so their lifetime is linked to that of the transaction.

    When the node is done with the transaction and is about to commit, it will generate the final write-set using the two files (if the data size grew enough to use FileStore) plus the HEADER, and will publish it for certification to the cluster.

    The native node executing the transaction will also act as a subscription node, and will receive its own write-set through the cluster publish mechanism. This time, the native node will try to cache the write-set in its GCache. How much data GCache retains is controlled by the GCache configuration.

    "},{"location":"gcache-record-set-cache-difference.html#gcache","title":"GCache","text":"

    GCache holds the write-set published on the cluster for replication. The lifetime of write-set in GCache is not transaction-linked.

    When a JOINER node needs an IST, it will be serviced through this GCache (if possible).

    GCache will also create the files to disk. You can read more about it here.

    At any given point in time, the native node has two copies of the write-set: one in GCache and another in Record-Set Cache.

    For example, let\u2019s say you INSERT/UPDATE 2 million rows in a table with the following schema.

    (int, char(100), char(100)) with pk (int, char(100))\n

    It will create write-set key/data files in the background similar to the following:

    -rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000000\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000001\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_data.000002\n-rw------- 1 xxx xxx 67108864 Apr 11 12:26 0x00000707_keys.000000\n
    "},{"location":"gcache-record-set-cache-difference.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"gcache-write-set-cache-encryption.html","title":"GCache encryption and Write-Set cache encryption","text":"

    These features are tech preview. Before using these features in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    "},{"location":"gcache-write-set-cache-encryption.html#gcache-and-write-set-cache-encryption","title":"GCache and Write-Set cache encryption","text":"

    Enabling this feature encrypts the Galera GCache and Write-Set cache files with a File Key.

    GCache has a RingBuffer on-disk file to manage write-sets. The keyring only stores the Master Key which is used to encrypt the File Key used by the RingBuffer file. The encrypted File Key is stored in the RingBuffer\u2019s preamble. The RingBuffer file of GCache is non-volatile, which means this file survives a restart. The File Key is not stored for GCache off-pages and Write-Set cache files.

    See also

    For more information, see Understanding GCache and Record-set Cache, and the Percona Database Performance Blog: All you need to know about GCache

    Sample preamble key-value pairs
    Version: 2\nGID: 3afaa71d-6665-11ed-98de-2aba4aabc65e\nsynced: 0\nenc_version: 1\nenc_encrypted: 1\nenc_mk_id: 3\nenc_mk_const_id: 3ad045a2-6665-11ed-a49d-cb7b9d88753f\nenc_mk_uuid: 3ad04c8e-6665-11ed-a947-c7e346da147f\nenc_fk_id: S4hRiibUje4v5GSQ7a+uuS6NBBX9+230nsPHeAXH43k=\nenc_crc: 279433530\n
    "},{"location":"gcache-write-set-cache-encryption.html#key-descriptions","title":"Key descriptions","text":"

    The following table describes the encryption keys defined in the preamble. All other keys in the preamble are not related to encryption.

    Key Description enc_version The encryption version enc_encrypted If the GCache is encrypted or not enc_mk_id A part of the Master Key ID. Rotating the Master Key increments the sequence number. enc_mk_const_id A part of the Master Key ID, a constant universally unique identifier (UUID). This option remains constant for the duration of the galera.gcache file and simplifies matching the Master Key inside the keyring to the instance that generated the keys. Deleting the galera.gcache changes the value of this key. enc_mk_uuid This UUID is generated with the first Master Key, or when Galera detects that the preamble is inconsistent, which causes a full GCache reset and requires a new Master Key. enc_fk_id The File Key ID encrypted with the Master Key. enc_crc The cyclic redundancy check (CRC) calculated from all encryption-related keys."},{"location":"gcache-write-set-cache-encryption.html#controlling-encryption","title":"Controlling encryption","text":"

    Encryption is controlled using the wsrep_provider_options.
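
    For example, a sketch of enabling encryption for both caches in my.cnf:

    [mysqld]\nwsrep_provider_options=\"gcache.encryption=on;allocator.disk_pages_encryption=on\"\n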

    Variable name Default value Allowed values gcache.encryption off on/off gcache.encryption_cache_page_size 32KB 2-512 gcache.encryption_cache_size 16MB 2 - 512 allocator.disk_pages_encryption off on/off allocator.encryption_cache_page_size 32KB allocator.encryption_cache_size 16MB"},{"location":"gcache-write-set-cache-encryption.html#rotate-the-gcache-master-key","title":"Rotate the GCache Master Key","text":"

    GCache and Write-Set cache encryption uses either a keyring plugin or a keyring component. This plugin or component must be loaded.

    Store the keyring file outside the data directory when using a keyring plugin or a keyring component.
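
    For example, a sketch of loading the keyring_file plugin with the keyring stored outside the data directory; the path is an assumption:

    [mysqld]\nearly-plugin-load=keyring_file.so\nkeyring_file_data=/var/lib/mysql-keyring/keyring\n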

    mysql> ALTER INSTANCE ROTATE GCACHE MASTER KEY;\n
    "},{"location":"gcache-write-set-cache-encryption.html#variable-descriptions","title":"Variable descriptions","text":""},{"location":"gcache-write-set-cache-encryption.html#gcache-encryption","title":"GCache encryption","text":"

    The following sections describe the variables related to GCache encryption. All variables are read-only.

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption","title":"gcache.encryption","text":"

    Enable or disable GCache cache encryption.

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_page_size","title":"gcache.encryption_cache_page_size","text":"

    The size of the GCache encryption page. The value must be a multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.
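
    For example, to check the CPU page size on Linux:

    $ getconf PAGE_SIZE\n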

    "},{"location":"gcache-write-set-cache-encryption.html#gcacheencryption_cache_size","title":"gcache.encryption_cache_size","text":"

    Every encrypted file has an encryption.cache, which consists of pages. Use gcache.encryption_cache_size to configure the encryption.cache size.

    Configure the page size in the cache with gcache.encryption_cache_page_size.

    The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x gcache.encryption_cache_page_size.

    The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.

    "},{"location":"gcache-write-set-cache-encryption.html#write-set-cache-encryption","title":"Write-Set cache encryption","text":"

    The following sections describe the variables related to Write-Set cache encryption. All variables are read-only.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatordisk_pages_encryption","title":"allocator.disk_pages_encryption","text":"

    Enable or disable the Write-Set cache encryption.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_page_size","title":"allocator.encryption_cache_page_size","text":"

    The size of the encryption cache for Write-Set pages. The value must be a multiple of the CPU page size (typically 4kB). If the value is not, the server reports an error and stops.

    "},{"location":"gcache-write-set-cache-encryption.html#allocatorencryption_cache_size","title":"allocator.encryption_cache_size","text":"

    Every Write-Set encrypted file has an encryption.cache, which consists of pages. Use allocator.encryption_cache_size to configure the size of the encryption.cache.

    Configure the page size in the cache with allocator.encryption_cache_page_size.

    The maximum size for the encryption.cache is 512 pages. This value is a hint. If the value is larger than the maximum, the value is rounded down to 512 x allocator.encryption_cache_page_size.

    The minimum size for the encryption.cache is 2 pages. If the value is smaller, the value is rounded up.

    "},{"location":"gcache-write-set-cache-encryption.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"get-started-cluster.html","title":"Get started with Percona XtraDB Cluster","text":"

    This guide describes the procedure for setting up Percona XtraDB Cluster.

    Examples provided in this guide assume there are three Percona XtraDB Cluster nodes, as a common choice for trying out and testing:

    Node Host IP Node 1 pxc1 192.168.70.61 Node 2 pxc2 192.168.70.62 Node 3 pxc3 192.168.70.63

    Note

    Avoid creating a cluster with two or any even number of nodes, because this can lead to split brain.

    The following procedure provides an overview with links to details for every step:

    It is recommended to install from official Percona repositories:

    This includes path to the Galera library, location of other nodes, etc.

    This must be the node with your main database, which will be used as the data source for the cluster.

    Data on new nodes joining the cluster is overwritten in order to synchronize it with the cluster.

    Although cluster initialization and node provisioning is performed automatically, it is a good idea to ensure that changes on one node actually replicate to other nodes.

    To complete the deployment of the cluster, a high-availability proxy is required. We recommend installing ProxySQL on client nodes for efficient workload management across the cluster without any changes to the applications that generate queries.

    "},{"location":"get-started-cluster.html#percona-monitoring-and-management","title":"Percona Monitoring and Management","text":"

    Percona Monitoring and Management is the best choice for managing and monitoring Percona XtraDB Cluster performance. It provides visibility for the cluster and enables efficient troubleshooting.

    "},{"location":"get-started-cluster.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#frm","title":".frm","text":"

    For each table, the server will create a file with the .frm extension containing the table definition (for all storage engines).

    "},{"location":"glossary.html#acid","title":"ACID","text":"

    An acronym for Atomicity, Consistency, Isolation, Durability.

    "},{"location":"glossary.html#asynchronous-replication","title":"Asynchronous replication","text":"

    Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.

    "},{"location":"glossary.html#atomicity","title":"Atomicity","text":"

    This property guarantees that all updates of a transaction occur in the database or no updates occur. This guarantee also applies with a server exit. If a transaction fails, the entire operation rolls back.

    "},{"location":"glossary.html#cluster-replication","title":"Cluster replication","text":"

    Normal replication path for cluster members. Can be encrypted (not by default) and unicast or multicast (unicast by default). Runs on TCP port 4567 by default.

    "},{"location":"glossary.html#consistency","title":"Consistency","text":"

    This property guarantees that each transaction that modifies the database takes it from one consistent state to another. Consistency is implied with Isolation.

    "},{"location":"glossary.html#datadir","title":"datadir","text":"

    The directory in which the database server stores its databases. Most Linux distributions use /var/lib/mysql by default.

    "},{"location":"glossary.html#donor-node","title":"donor node","text":"

    The node elected to provide a state transfer (SST or IST).

    "},{"location":"glossary.html#durability","title":"Durability","text":"

    Once a transaction is committed, it will remain so and is resistant to a server exit.

    "},{"location":"glossary.html#foreign-key","title":"Foreign Key","text":"

    A referential constraint between two tables. Example: A purchase order in the purchase_orders table must have been made by a customer that exists in the customers table.

    "},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"

    A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.

    "},{"location":"glossary.html#gtid","title":"GTID","text":"

    Global Transaction ID, in Percona XtraDB Cluster it consists of UUID and an ordinal sequence number which denotes the position of the change in the sequence.

    "},{"location":"glossary.html#haproxy","title":"HAProxy","text":"

    HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today\u2019s hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the net.

    "},{"location":"glossary.html#ibdata","title":"ibdata","text":"

    Default prefix for tablespace files, e.g., ibdata1 is a 10MB auto-extendable file that MySQL creates for the shared tablespace by default.

    "},{"location":"glossary.html#isolation","title":"Isolation","text":"

    The Isolation guarantee means that no transaction can interfere with another. When transactions access data in a session, they also lock that data to prevent operations on that data by other transactions.

    "},{"location":"glossary.html#ist","title":"IST","text":"

    Incremental State Transfer. Instead of transferring a whole state snapshot, a joining node catches up with the group by receiving only the missing writesets, but only if those writesets are still in the donor\u2019s writeset cache.
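
    To check the oldest sequence number still held in a node\u2019s writeset cache, and therefore still serviceable by IST, you can query a status variable such as:

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';\n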

    "},{"location":"glossary.html#innodb","title":"InnoDB","text":"

    A storage engine for MySQL and derivatives (Percona Server, MariaDB), originally written by Innobase Oy, which was since acquired by Oracle. It provides an ACID-compliant storage engine with foreign key support. InnoDB is the default storage engine on all platforms.

    "},{"location":"glossary.html#jenkins","title":"Jenkins","text":"

    Jenkins is a continuous integration system that we use to help ensure the continued quality of the software we produce. It helps us achieve the aims of: * no failed tests in trunk on any platform * aid developers in ensuring merge requests build and test on all platforms * no known performance regressions (without a damn good explanation)

    "},{"location":"glossary.html#joiner-node","title":"joiner node","text":"

    The node joining the cluster, usually a state transfer target.

    "},{"location":"glossary.html#lsn","title":"LSN","text":"

    Log Sequence Number. A term used in relation to the InnoDB or XtraDB storage engines. There are System-level LSNs and Page-level LSNs. The System LSN represents the most recent LSN value assigned to page changes. Each InnoDB page contains a Page LSN, which is the maximum LSN for that page for changes that reside on the disk. This LSN is updated when the page is flushed to disk.
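
    For example, the current system LSN is reported in the LOG section of the InnoDB status output:

    mysql> SHOW ENGINE INNODB STATUS\\G\n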

    "},{"location":"glossary.html#mariadb","title":"MariaDB","text":"

    A fork of MySQL that is maintained primarily by Monty Program AB. It aims to add features and fix bugs while maintaining 100% backwards compatibility with MySQL.

    "},{"location":"glossary.html#mycnf","title":"my.cnf","text":"

    This file refers to the database server\u2019s main configuration file. Most Linux distributions place it as /etc/mysql/my.cnf or /etc/my.cnf, but the location and name depend on the particular installation. Note that this is not the only way of configuring the server; some systems do not have this file at all and rely on command-line options to start the server and on its default values.

    "},{"location":"glossary.html#myisam","title":"MyISAM","text":"

    A MySQL Storage Engine that was the default until MySQL 5.5. It doesn\u2019t fully support transactions but in some scenarios may be faster than InnoDB. Each table is stored on disk in three files: .frm, .MYD, and .MYI.

    "},{"location":"glossary.html#mysql","title":"MySQL","text":"

    An open source database that has spawned several distributions and forks. MySQL AB was the primary maintainer and distributor until bought by Sun Microsystems, which was then acquired by Oracle. As Oracle owns the MySQL trademark, the term MySQL is often used for the Oracle distribution of MySQL as distinct from the drop-in replacements such as MariaDB and Percona Server.

    "},{"location":"glossary.html#mysqlpxcinternalsession","title":"mysql.pxc.internal.session","text":"

    This user is used by the SST process to run the SQL commands needed for SST, such as creating the mysql.pxc.sst.user and assigning it the role mysql.pxc.sst.role.

    "},{"location":"glossary.html#mysqlpxcsstrole","title":"mysql.pxc.sst.role","text":"

    This role has all the privileges needed to run xtrabackup to create a backup on the donor node.

    "},{"location":"glossary.html#mysqlpxcsstuser","title":"mysql.pxc.sst.user","text":"

    This user (set up on the donor node) is assigned the mysql.pxc.sst.role and runs XtraBackup to make backups. The password for this user is randomly generated for each SST.

    "},{"location":"glossary.html#node","title":"node","text":"

    A cluster node is a single MySQL instance that is part of the cluster.

    "},{"location":"glossary.html#numa","title":"NUMA","text":"

    Non-Uniform Memory Access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The whole system may still operate as one unit, and all memory is basically accessible from everywhere, but at a potentially higher latency and lower performance.

    "},{"location":"glossary.html#percona-server-for-mysql","title":"Percona Server for MySQL","text":"

    Percona\u2019s branch of MySQL with performance and management improvements.

    "},{"location":"glossary.html#percona-xtradb-cluster","title":"Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster (PXC) is a high availability solution for MySQL.

    "},{"location":"glossary.html#primary-cluster","title":"primary cluster","text":"

    A cluster with quorum. A non-primary cluster does not allow any operations and returns Unknown command errors to any client attempting to read from or write to the database.

    "},{"location":"glossary.html#quorum","title":"quorum","text":"

    A majority (> 50%) of nodes.\u00a0In the event of a network partition, only the cluster partition that retains a quorum (if any) will remain Primary by default.
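
    An illustrative way to verify on any node whether it belongs to a component with quorum:

    mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_cluster_size');\n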

    "},{"location":"glossary.html#split-brain","title":"split brain","text":"

    Split brain occurs when two parts of a computer cluster are disconnected, each part believing that the other is no longer running. This problem can lead to data inconsistency.

    "},{"location":"glossary.html#sst","title":"SST","text":"

    State Snapshot Transfer is the full copy of data from one node to another. It is used when a new node joins the cluster and has to transfer data from an existing node. Percona XtraDB Cluster uses the xtrabackup program for this purpose. xtrabackup does not require a READ LOCK for the entire syncing process - only for syncing the MySQL system tables and writing the information about the binlog, Galera, and replica positions (the same as a regular Percona XtraBackup backup).

    The SST method is configured with the wsrep_sst_method variable.
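
    For example, to confirm the configured method on a node:

    mysql> SHOW VARIABLES LIKE 'wsrep_sst_method';\n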

    In PXC 8.0, the mysql_upgrade command is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.

    "},{"location":"glossary.html#storage-engine","title":"Storage Engine","text":"

    A Storage Engine is a piece of software that implements the details of data storage and retrieval for a database system. This term is primarily used within the MySQL ecosystem due to it being the first widely used relational database to have an abstraction layer around storage. It is analogous to a Virtual File System layer in an Operating System. A VFS layer allows an operating system to read and write multiple file systems (for example, FAT, NTFS, XFS, ext3) and a Storage Engine layer allows a database server to access tables stored in different engines (e.g. MyISAM, InnoDB).

    "},{"location":"glossary.html#tech-preview","title":"Tech preview","text":"

    A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not covered by the support SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability (GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.

    "},{"location":"glossary.html#uuid","title":"UUID","text":"

    Universally Unique IDentifier, which uniquely identifies the state and the sequence of changes a node undergoes. The 128-bit UUID is a classic DCE UUID Version 1 (based on the current time and MAC address). Although in theory this UUID could be generated from the real MAC address, in Galera it is always (without exception) based on generated pseudo-random addresses (the \u201clocally administered\u201d bit in the node address within the UUID structure is always set to 1).

    "},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"haproxy-config.html","title":"HAProxy configuration file","text":""},{"location":"haproxy-config.html#example-of-haproxy-v1-configuration-file","title":"Example of HAProxy v1 configuration file","text":"HAProxy v1 configuration file
    global\n        log 127.0.0.1   local0\n        log 127.0.0.1   local1 notice\n        maxconn 4096\n        uid 99\n        gid 99\n        daemon\n        #debug\n        #quiet\n\ndefaults\n        log     global\n        mode    http\n        option  tcplog\n        option  dontlognull\n        retries 3\n        redispatch\n        maxconn 2000\n        contimeout      5000\n        clitimeout      50000\n        srvtimeout      50000\n        timeout connect 160000\n        timeout client 240000\n        timeout server 240000\n\nlisten mysql-cluster 0.0.0.0:3306\n    mode tcp\n    balance roundrobin\n    option mysql-check user root\n\n    server db01 10.4.29.100:3306 check\n    server db02 10.4.29.99:3306 check\n    server db03 10.4.29.98:3306 check\n

    Options set in the configuration file

    "},{"location":"haproxy-config.html#differences-between-version-1-configuration-file-and-version-2-configuration-file","title":"Differences between version 1 configuration file and version 2 configuration file","text":""},{"location":"haproxy-config.html#version-declaration","title":"Version Declaration:","text":"

    v1: The configuration file typically omits an explicit version declaration.

    v2: You must explicitly declare the version using the version keyword followed by the specific version number (e.g., version = 2.0).

    "},{"location":"haproxy-config.html#global-parameters","title":"Global Parameters:","text":"

    v1 and v2: Both versions utilize a global section to define global parameters, but certain parameters might have different names or functionalities across versions. Refer to the official documentation for specific changes.

    "},{"location":"haproxy-config.html#configuration-blocks","title":"Configuration Blocks:","text":"

    v1 and v2: Both versions use a similar indentation-based structure to define configuration blocks like frontend and backend. However, v2 introduces new blocks and keywords not present in v1 (e.g., process, http-errors).

    "},{"location":"haproxy-config.html#directives","title":"Directives:","text":"

    v1 and v2: While many directives remain consistent, some might have renamed keywords, altered syntax, or entirely new functionalities in v2. Consult the official documentation for a comprehensive comparison of directives and their usage between versions.

    "},{"location":"haproxy-config.html#comments","title":"Comments:","text":"

    v1 and v2: Both versions support comments using the # symbol. However, v2 introduces multi-line comments using /* \u2026 */ syntax, which v1 does not support.

    "},{"location":"haproxy-config.html#version-2-configuration-file","title":"Version 2 configuration file","text":"

    This simplified example is for load balancing. HAProxy offers numerous features for advanced configurations and fine-tuning.

    This example demonstrates a basic HAProxy v2 configuration file for load-balancing HTTP traffic across two backend servers.

    "},{"location":"haproxy-config.html#global-section","title":"Global Section","text":"

    The following settings are defined in the Global section:

    In the defaults block, we set the operating mode to TCP and show the option tcpka setting (commented out in this example; enable it if needed)

    global\n    maxconn 4000           # Maximum concurrent connections (adjust as needed)\n    user haproxy          # User to run HAProxy process\n    group haproxy          # Group to run HAProxy process\n    stats socket /var/run/haproxy.sock mode 666 level admin\n\ndefaults\n    mode tcp             # Set operating mode to TCP\n    #option tcpka\n
    "},{"location":"haproxy-config.html#frontend-section","title":"Frontend Section","text":"

    The following settings are defined in this section:

    frontend gr-prod-rw\n    bind 0.0.0.0:3307     \n    mode tcp\n    option contstats\n    option dontlognull\n    option clitcpka\n    default_backend gr-prod-rw\n

    You should add the following options:

    option Description contstats Provides continuous updates to the statistics of your connections. This option ensures that your traffic counters are updated in real-time, rather than only after a connection closes, giving you a more accurate and immediate view of your traffic patterns. dontlognull Does not log requests that don\u2019t transfer any data, like health check pings. clitcpka Configures TCP keepalive settings for client connections. This option allows the operating system to detect and terminate inactive connections, even if HAProxy isn\u2019t actively checking them."},{"location":"haproxy-config.html#backend-section","title":"Backend Section","text":"

    In this section, you specify the backend servers that will handle requests forwarded by the frontend. List each server with their respective IP addresses, ports, and weights.

    You set up a health check with check inter 10000. This option means that HAProxy performs a health check on each server every 10,000 milliseconds or 10 seconds. If a server fails a health check, it is temporarily removed from the pool until it passes subsequent checks, ensuring smooth and reliable client service. This proactive monitoring is crucial for maintaining an efficient and uninterrupted backend service.

    Set the number of health-check retries that mark the service as down or up. For example, you set the rise parameter to 1, which means the server only needs to pass one health check before it is considered healthy again. The fall parameter is set to 2, requiring two consecutive failed health checks before the server is marked as unhealthy.

    The weight 50 backup setting is crucial for load balancing; this setting determines that this server only receives traffic if the primary servers are down. The weight of 50 indicates the relative amount of traffic the server will handle compared to other servers in the backup role. This method ensures the server can handle a significant load even in backup mode, but not as much as a primary server.

    The following example lists these options. Replace the server details (IP addresses, ports) with your backend server information. Adjust weights and other options according to your specific needs and server capabilities.

    backend servers\n    server server1 10.0.68.39:3307 check inter 10000 rise 1 fall 2 weight 50\n    server server2 10.0.68.74:3307 check inter 10000 rise 1 fall 2 weight 50 backup\n    server server3 10.0.68.20:3307 check inter 10000 rise 1 fall 2 weight 1 backup\n

    More information about how to configure HAProxy

    "},{"location":"haproxy-config.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"haproxy.html","title":"Load balancing with HAProxy","text":"

    The free and open source software, HAProxy, provides a high-availability load balancer and reverse proxy for TCP and HTTP-based applications. HAProxy can distribute requests across multiple servers, ensuring optimal performance and security.

    Here are the benefits of using HAProxy:

    "},{"location":"haproxy.html#create-a-user","title":"Create a user","text":"

    Access the server as a user with administrative privileges, either root or use sudo.

    Create a dedicated user account for HAProxy to interact with your MySQL instance. A dedicated account enhances security.

    In the example CREATE USER command, replace the haproxy_user, haproxy_server_ip, and strong_password placeholders with your own values.

    Execute the following command:

    mysql> CREATE USER 'haproxy_user'@'haproxy_server_ip' IDENTIFIED BY 'strong_password';\n

    Grant the minimal set of privileges necessary for HAProxy to perform its health checks and monitoring.

    Execute the following:

    GRANT SELECT ON `mysql`.* TO 'haproxy_user'@'haproxy_server_ip';\nFLUSH PRIVILEGES;\n
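
    To confirm that the privileges took effect, you can optionally check the user\u2019s grants (an illustrative verification step; use your own placeholder values):

    mysql> SHOW GRANTS FOR 'haproxy_user'@'haproxy_server_ip';\n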
    "},{"location":"haproxy.html#important-considerations","title":"Important Considerations","text":"

    If your MySQL servers are part of a replication cluster, create the user and grant privileges on each node to ensure consistency.

    For enhanced security, consider restricting the haproxy_user to specific databases or tables to monitor rather than granting permissions to the entire mysql database schema.

    "},{"location":"haproxy.html#install","title":"Install","text":"

    Add the HAProxy Enterprise repository to your system by following the instructions for your operating system.

    Install HAProxy on the node you intend to use for load balancing. You can install it using the package manager.

    On a Debian-derived distributionOn a Red Hat-derived distribution
    $ sudo apt update\n$ sudo apt install haproxy\n
    $ sudo yum update\n$ sudo yum install haproxy\n

    To start HAProxy, use the haproxy command. You may pass any number of configuration parameters on the command line. To use a configuration file, add the -f option.

    $ # Passing one configuration file\n$ sudo haproxy -f haproxy-1.cfg\n\n$ # Passing multiple configuration files\n$ sudo haproxy -f haproxy-1.cfg haproxy-2.cfg\n\n$ # Passing a directory\n$ sudo haproxy -f conf-dir\n

    You can pass the name of an existing configuration file or a directory. HAProxy includes all files with the .cfg extension in the supplied directory. Another way to pass multiple files is to use -f multiple times.

    For more information, see HAProxy Management Guide

    For information, see HAProxy configuration file

    Important

    In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password. HAProxy does not support this authentication plugin. Create a mysql user using the mysql_native_password authentication plugin.

    mysql> CREATE USER 'haproxy_user'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\n

    See also

    MySQL Documentation: CREATE USER statement

    "},{"location":"haproxy.html#uninstall","title":"Uninstall","text":"

    To uninstall haproxy version 2 from a Linux system, follow the latest instructions.

    "},{"location":"haproxy.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"high-availability.html","title":"High availability","text":"

    In a basic setup with three nodes, if you take any of the nodes down, Percona XtraDB Cluster continues to function. At any point, you can shut down any node to perform maintenance or configuration changes.

    Even in unplanned situations (like a node crashing or if it becomes unavailable over the network), you can run queries on working nodes. If a node is down and the data has changed, there are two methods that the node may use when it joins the cluster again:

    Method What happens Description SST The joiner node receives a full copy of the database state from the donor node. You initiate a State Snapshot Transfer (SST) when adding a new node to a Galera cluster or when a node has fallen too far out of sync. IST Only incremental changes are copied from one node to another. This operation can be used when a node is down for a short period."},{"location":"high-availability.html#sst","title":"SST","text":"

    The primary benefit of SST is that it ensures data consistency across the cluster by providing a complete snapshot of the database at a point in time. However, SST can be resource-intensive and time-consuming if the operation transfers significant data. The donor node is locked during this transfer, impacting cluster performance.

    You initiate a state snapshot transfer (SST) when a node joins a cluster without the complete data set. This process involves transferring a full data copy from one node to another, ensuring that the joining node has an exact replica of the cluster\u2019s current state. Technically, SST is performed by halting the donor node\u2019s database operations momentarily to create a consistent snapshot of its data. The snapshot is then transferred over the network to the joining node, which applies it to its database system.

    Even without locking your cluster in a read-only state, SST may be intrusive and disrupt the regular operation of your services. IST avoids disruption. A node fetches only the changes that happened while that node was unavailable. IST uses a caching mechanism on nodes.

    "},{"location":"high-availability.html#ist","title":"IST","text":"

    Incremental State Transfer (IST) is a method that allows a node to request only the missing transactions from another node in the cluster. This process is beneficial because it reduces the amount of data that must be transferred, leading to faster recovery times for nodes that are out of sync. Additionally, IST minimizes the network bandwidth required for state transfer, which is particularly advantageous in environments with limited resources.

    However, there are drawbacks to consider. Reliance on another node\u2019s state means that an SST operation is necessary if no node in the cluster has the required information.

    When a node joins the cluster with a state slightly behind the current cluster state, IST does not require the joining node to copy the entire database state. Technically, IST transfers only the missing write-sets that the joining node needs to catch up with the cluster. The donor node, the node with the most recent state, sends the write-sets to the joining node through a dedicated channel. The joining node then applies these write-sets to its database state incrementally until it synchronizes with the cluster\u2019s current state. The donor node can experience a performance impact during an IST operation, typically less severe than during SST.
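
    The amount of writeset history a donor can serve over IST depends on its gcache settings; you can inspect them as part of the provider options (an illustrative check):

    mysql> SHOW VARIABLES LIKE 'wsrep_provider_options';\n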

    "},{"location":"high-availability.html#monitor-the-node-state","title":"Monitor the node state","text":"

    The wsrep_local_state_comment variable returns the current state of a Galera node in the cluster, providing information about the node\u2019s role and status. The value can vary depending on the specific state of the Galera node, such as the following:

    You can monitor the current state of a node using the following command:

    mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';\n

    If the node is in Synced (6) state, that node is part of the cluster and can handle the traffic.

    "},{"location":"high-availability.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"install-index.html","title":"Install Percona XtraDB Cluster","text":"

    Install Percona XtraDB Cluster on all hosts that you are planning to use as cluster nodes and ensure that you have root access to the MySQL server on each one.

    We gather Telemetry data in the Percona packages and Docker images.

    "},{"location":"install-index.html#ports-required","title":"Ports required","text":"

    Open specific ports for the Percona XtraDB Cluster to function correctly.

    "},{"location":"install-index.html#recommendations","title":"Recommendations","text":"

    We recommend installing Percona XtraDB Cluster from official Percona software repositories using the corresponding package manager for your system:

    Important

    After installing Percona XtraDB Cluster, the mysql service is stopped but enabled so that it may start the next time you restart the system. The service starts if the grastate.dat file exists and the value of seqno is not -1.

    See also

    More information about the Galera state information is in Index of files created by PXC: grastate.dat

    "},{"location":"install-index.html#installation-alternatives","title":"Installation alternatives","text":"

    Percona also provides a generic tarball with all required files and binaries for manual installation:

    If you want to build Percona XtraDB Cluster from source, see Compiling and Installing from Source Code.

    If you want to run Percona XtraDB Cluster using Docker, see Running Percona XtraDB Cluster in a Docker Container.

    "},{"location":"install-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"intro.html","title":"About Percona XtraDB Cluster","text":"

    Percona XtraDB Cluster is a fully open-source high-availability solution for MySQL. It integrates Percona Server for MySQL and Percona XtraBackup with the Galera library to enable synchronous multi-source replication.

    A cluster consists of nodes, where each node contains the same set of data synchronized across nodes. The recommended configuration is to have at least 3 nodes, but you can have 2 nodes as well. Each node is a regular MySQL Server instance (for example, Percona Server). You can convert an existing MySQL Server instance to a node and run the cluster using this node as a base. You can also detach any node from the cluster and use it as a regular MySQL Server instance.

    "},{"location":"intro.html#benefits","title":"Benefits","text":""},{"location":"intro.html#drawbacks","title":"Drawbacks","text":""},{"location":"intro.html#components","title":"Components","text":"

    Percona XtraDB Cluster (https://www.percona.com/software/mysql-database/percona-xtradb-cluster) is based on Percona Server for MySQL running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup.

    "},{"location":"intro.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"limitation.html","title":"Percona XtraDB Cluster limitations","text":"

    The following limitations apply to Percona XtraDB Cluster:

    As of version 8.0.21, an INPLACE ALTER TABLE query takes an internal shared lock on the table during the execution of the query. Due to this change, the LOCK=NONE clause is no longer allowed for INPLACE ALTER TABLE queries.
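
    As an illustration with a hypothetical table t1, explicitly requesting LOCK=NONE together with the INPLACE algorithm is rejected on these versions, while the same statement without LOCK=NONE proceeds with the internal shared lock:

    mysql> ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INPLACE, LOCK=NONE;\n-- rejected on 8.0.21 and later because LOCK=NONE is no longer allowed\nmysql> ALTER TABLE t1 ADD COLUMN c2 INT, ALGORITHM=INPLACE;\n-- accepted; the statement takes an internal shared lock during execution\n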

    This change addresses a deadlock, which could cause a cluster node to hang in the following scenario:

    Do not use one or more dot characters (.) when defining the values for the following variables:

    MySQL and XtraBackup handle the value in different ways, and this difference causes unpredictable behavior.

    "},{"location":"limitation.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"load-balance-proxysql.html","title":"Load balance with ProxySQL","text":"

    ProxySQL is a high-performance SQL proxy. ProxySQL runs as a daemon watched by a monitoring process. The process monitors the daemon and restarts it in case of a crash to minimize downtime.

    The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.

    The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.

    See also

    ProxySQL Documentation

    ProxySQL v2 natively supports Percona XtraDB Cluster. With this version, proxysql-admin tool does not require any custom scripts to keep track of Percona XtraDB Cluster status.

    Important

    In version 8.0, Percona XtraDB Cluster does not support ProxySQL v1.

    "},{"location":"load-balance-proxysql.html#manual-configuration","title":"Manual configuration","text":"

    This section describes how to configure ProxySQL with three Percona XtraDB Cluster nodes.

    Node Host Name IP address Node 1 pxc1 192.168.70.71 Node 2 pxc2 192.168.70.72 Node 3 pxc3 192.168.70.73 Node 4 proxysql 192.168.70.74

    ProxySQL can be configured either using the /etc/proxysql.cnf file or through the admin interface. The admin interface is recommended because this interface can dynamically change the configuration without restarting the proxy.

    To connect to the ProxySQL admin interface, you need a mysql client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally. For this tutorial, install the Percona XtraDB Cluster client and ProxySQL on Node 4:

    Changes in the installation procedure

    In Percona XtraDB Cluster 8.0, ProxySQL is not installed automatically as a dependency of the percona-xtradb-cluster-client-8.0 package. You should install the proxysql package separately.

    Note

    ProxySQL has multiple versions in the version 2 series.

    root@proxysql:~# apt install percona-xtradb-cluster-client\nroot@proxysql:~# apt install proxysql2\n
    $ sudo yum install Percona-XtraDB-Cluster-client-80\n$ sudo yum install proxysql2\n

    To connect to the admin interface, use the credentials, host name and port specified in the global variables.

    Warning

    Do not use default credentials in production!

    The following example shows how to connect to the ProxySQL admin interface with default credentials:

    root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql@proxysql>\n

    To see the ProxySQL databases and tables use the following commands:

    mysql@proxysql> SHOW DATABASES;\n

    The following output shows the ProxySQL databases:

    Expected output
    +-----+---------+-------------------------------+\n| seq | name    | file                          |\n+-----+---------+-------------------------------+\n| 0   | main    |                               |\n| 2   | disk    | /var/lib/proxysql/proxysql.db |\n| 3   | stats   |                               |\n| 4   | monitor |                               |\n+-----+---------+-------------------------------+\n4 rows in set (0.00 sec)\n
    mysql@proxysql> SHOW TABLES;\n

    The following output shows the ProxySQL tables:

    Expected output
    +--------------------------------------+\n| tables                               |\n+--------------------------------------+\n| global_variables                     |\n| mysql_collations                     |\n| mysql_query_rules                    |\n| mysql_replication_hostgroups         |\n| mysql_servers                        |\n| mysql_users                          |\n| runtime_global_variables             |\n| runtime_mysql_query_rules            |\n| runtime_mysql_replication_hostgroups |\n| runtime_mysql_servers                |\n| runtime_scheduler                    |\n| scheduler                            |\n+--------------------------------------+\n12 rows in set (0.00 sec)\n

    For more information about admin databases and tables, see Admin Tables

    Note

    The ProxySQL configuration can reside in the following areas:

    When you change a parameter, you change it in MEMORY area. This ability is by design and lets you test the changes before pushing the change to production (RUNTIME), or save the change to disk.
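
    For example, after editing server definitions in the MEMORY area, a typical pattern (used later in this tutorial) is to activate them at runtime and then persist them to disk:

    mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL SERVERS TO DISK;\n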

    "},{"location":"load-balance-proxysql.html#add-cluster-nodes-to-proxysql","title":"Add cluster nodes to ProxySQL","text":"

    To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.

    Note

    ProxySQL uses the concept of hostgroups to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. There are many ways you can configure hostgroups (for example, source and replicas, read and write load, and so on), and every node can be a member of multiple hostgroups.

    This example adds three Percona XtraDB Cluster nodes to the default hostgroup (0), which receives both write and read traffic:

    mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.71',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.72',3306);\nmysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.73',3306);\n

    To see the nodes:

    mysql@proxysql> SELECT * FROM mysql_servers;\n

    The following output shows the list of nodes:

    Expected output
    +--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n| 0            | 192.168.70.71 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n| 0            | 192.168.70.72 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n| 0            | 192.168.70.73 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |\n+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+\n3 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#create-proxysql-monitoring-user","title":"Create ProxySQL monitoring user","text":"

    To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE privilege on any node in the cluster and configure the user in ProxySQL.

    The following example shows how to add a monitoring user on Node 2:

    mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t';\nmysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%';\n

    The following example shows how to configure this user on the ProxySQL node:

    mysql@proxysql> UPDATE global_variables SET variable_value='proxysql'\n              WHERE variable_name='mysql-monitor_username';\nmysql@proxysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\n              WHERE variable_name='mysql-monitor_password';\n

    To load this configuration at runtime, issue a LOAD command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue a SAVE command.

    mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql@proxysql> SAVE MYSQL VARIABLES TO DISK;\n

    To ensure that monitoring is enabled, check the monitoring logs:

    mysql@proxysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+----------------------+---------------+\n| hostname      | port | time_start_us    | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627                 | NULL          |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447                 | NULL          |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
    mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+-------------------+------------+\n| hostname      | port | time_start_us    | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948               | NULL       |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803               | NULL       |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711               | NULL       |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783               | NULL       |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631               | NULL       |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542               | NULL       |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n

    The previous examples show that ProxySQL is able to connect and ping the nodes you have added.

    To enable monitoring of these nodes, load them at runtime:

    mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;\n
    "},{"location":"load-balance-proxysql.html#create-proxysql-client-user","title":"Create ProxySQL client user","text":"

    ProxySQL must have users that can access backend nodes to manage connections.

    To add a user, insert credentials into mysql_users table:

    mysql@proxysql> INSERT INTO mysql_users (username,password) VALUES ('sbuser','sbpass');\n
    Expected output
    Query OK, 1 row affected (0.00 sec)\n

    Note

    ProxySQL currently doesn\u2019t encrypt passwords.

    Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):

    mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;\nmysql@proxysql> SAVE MYSQL USERS TO DISK;\n

    To confirm that the user has been set up correctly, you can try to log in through ProxySQL with these credentials:

    root@proxysql:~# mysql -u sbuser -psbpass -h 127.0.0.1 -P 6033\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n

    To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

    mysql@pxc3> CREATE USER 'sbuser'@'192.168.70.74' IDENTIFIED BY 'sbpass';\n
    Expected output
    Query OK, 0 rows affected (0.01 sec)\n
    mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.74';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#test-cluster-with-sysbench","title":"Test cluster with sysbench","text":"

    You can install sysbench from Percona software repositories:

    root@proxysql:~# apt install sysbench\n
    root@proxysql:~# yum install sysbench\n

    Note

    sysbench requires ProxySQL client user credentials that you created in Creating ProxySQL Client User.

    1. Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes:

      mysql@pxc1> CREATE DATABASE sbtest;\n
    2. Populate the table with data for the benchmark on the ProxySQL node:

      root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nprepare\n
    3. Run the benchmark on the ProxySQL node:

      root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \\\n--num-requests=0 --max-time=20 \\\n--test=/usr/share/doc/sysbench/tests/db/oltp.lua \\\n--mysql-user='sbuser' --mysql-password='sbpass' \\\n--oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \\\nrun\n

    ProxySQL stores collected data in the stats schema:

    mysql@proxysql> SHOW TABLES FROM stats;\n
    Expected output
    +--------------------------------+\n| tables                         |\n+--------------------------------+\n| stats_mysql_query_rules        |\n| stats_mysql_commands_counters  |\n| stats_mysql_processlist        |\n| stats_mysql_connection_pool    |\n| stats_mysql_query_digest       |\n| stats_mysql_query_digest_reset |\n| stats_mysql_global             |\n+--------------------------------+\n

    For example, to see the number of commands that run on the cluster:

    mysql@proxysql> SELECT * FROM stats_mysql_commands_counters;\n
    Expected output
    +---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| Command                   | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n| ALTER_TABLE               | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| ANALYZE_TABLE             | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| BEGIN                     | 2212625       | 3686      | 55        | 2162      | 899     | 569     | 1        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| CHANGE_REPLICATION_SOURCE | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| COMMIT                    | 21522591      | 3628      | 0         | 0         | 0       | 1765    | 1590     | 272      | 1         | 0         | 0      | 0      | 0       | 0        |\n| CREATE_DATABASE           | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| CREATE_INDEX              | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| DELETE                    | 2904130       | 3670      | 35        | 1546      | 1346    | 723     | 19       | 1        | 0         | 0         | 0      | 0      | 0       | 0        |\n| DESCRIBE                  | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| INSERT                    | 19531649      | 3660      | 39        | 1588      | 1292    | 723     | 12       | 2        | 0         | 1         | 0      | 1      | 2       | 0        |\n...\n| SELECT                    | 35049794      | 51605     | 501       | 26180     | 16606   | 8241    | 70       | 3        | 4         | 0         | 0      | 0      | 0       | 0        |\n| SELECT_FOR_UPDATE         | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n...\n| UPDATE                    | 6402302       | 7367      | 75        | 2503      | 3020    | 1743    | 23       | 3        | 0         | 0         | 0      | 0      | 0       | 0        |\n| USE                       | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |\n| SHOW                      | 19691         | 2         | 0         | 0         | 0       | 0       | 1        | 1        | 0         | 0         | 0      | 0      | 0       | 0        |\n| UNKNOWN                   | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         
| 0      | 0      | 0       | 0        |\n+---------------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+\n45 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#automatic-failover","title":"Automatic failover","text":"

    ProxySQL will automatically detect if a node is not available or not synced with the cluster.

    You can check the status of all available nodes by running:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n

    The following output shows the status of all available nodes:

    Expected output
    +--------------+---------------+------+--------+\n| hostgroup_id | hostname      | port | status |\n+--------------+---------------+------+--------+\n| 0            | 192.168.70.71 | 3306 | ONLINE |\n| 0            | 192.168.70.72 | 3306 | ONLINE |\n| 0            | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n

    To test problem detection and fail-over mechanism, shut down Node 3:

    root@pxc3:~# service mysql stop\n

    ProxySQL will detect that the node is down and update its status to OFFLINE_SOFT:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------------+\n| hostgroup_id | hostname      | port | status       |\n+--------------+---------------+------+--------------+\n| 0            | 192.168.70.71 | 3306 | ONLINE       |\n| 0            | 192.168.70.72 | 3306 | ONLINE       |\n| 0            | 192.168.70.73 | 3306 | OFFLINE_SOFT |\n+--------------+---------------+------+--------------+\n3 rows in set (0.00 sec)\n

    Now start Node 3 again:

    root@pxc3:~# service mysql start\n

    ProxySQL detects the change and marks the node as ONLINE:

    mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM runtime_mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------+\n| hostgroup_id | hostname      | port | status |\n+--------------+---------------+------+--------+\n| 0            | 192.168.70.71 | 3306 | ONLINE |\n| 0            | 192.168.70.72 | 3306 | ONLINE |\n| 0            | 192.168.70.73 | 3306 | ONLINE |\n+--------------+---------------+------+--------+\n3 rows in set (0.00 sec)\n
    "},{"location":"load-balance-proxysql.html#assisted-maintenance-mode","title":"Assisted maintenance mode","text":"

    Usually, to take a node down for maintenance, you need to identify that node, update its status in ProxySQL to OFFLINE_SOFT, wait for ProxySQL to divert traffic from this node, and then initiate the shutdown or perform maintenance tasks. Percona XtraDB Cluster includes a special maintenance mode for nodes that enables you to take a node down without adjusting ProxySQL manually.

    Initiating pxc_maint_mode=MAINTENANCE does not disconnect existing connections. You must terminate these connections by either running your application code or forcing a re-connection. With a re-connection, the new connections are re-routed around the PXC node in MAINTENANCE mode.
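
    For example, to place a node into maintenance before stopping it, and to confirm the setting (an illustrative sequence):

    mysql> SET GLOBAL pxc_maint_mode='MAINTENANCE';\nmysql> SHOW VARIABLES LIKE 'pxc_maint_mode';\n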

    Assisted maintenance mode is controlled via the pxc_maint_mode variable, which is monitored by ProxySQL and can be set to one of the following values:

    Related sections

    Setting up a testing environment with ProxySQL

    "},{"location":"load-balance-proxysql.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"monitoring.html","title":"Monitor the cluster","text":"

    Each node can have a different view of the cluster. There is no centralized node to monitor. To track down the source of issues, you have to monitor each node independently.

    Values of many variables depend on the node from which you are querying. For example, a writeset is reported as sent by the originating node and as received by all other nodes.

    Having data from all nodes can help you understand where flow control messages are coming from, which node sends excessively large transactions, and so on.

    "},{"location":"monitoring.html#manual-monitoring","title":"Manual monitoring","text":"

    Manual cluster monitoring can be performed using myq-tools.

    "},{"location":"monitoring.html#alerting","title":"Alerting","text":"

    Besides standard MySQL alerting, you should use at least the following triggers specific to Percona XtraDB Cluster (a combined status check is shown after this list):

    wsrep_cluster_status != Primary

    wsrep_connected != ON

    wsrep_ready != ON
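
    An illustrative query that returns these three status variables in one pass:

    mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_connected','wsrep_ready');\n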

    For additional alerting, consider the following:

    "},{"location":"monitoring.html#metrics","title":"Metrics","text":"

    Cluster metrics collection for long-term graphing should be done at least for the following:

    wsrep_local_recv_queue and wsrep_local_send_queue

    wsrep_flow_control_sent and wsrep_flow_control_recv

    wsrep_replicated and wsrep_received

    wsrep_replicated_bytes and wsrep_received_bytes

    wsrep_local_cert_failures and wsrep_local_bf_aborts

    "},{"location":"monitoring.html#use-percona-monitoring-and-management","title":"Use Percona Monitoring and Management","text":"

    Percona Monitoring and Management includes two dashboards to monitor PXC:

    1. PXC/Galera Cluster Overview:

    2. PXC/Galera Graphs:

      These dashboards are available from the menu:

    Please refer to the official documentation for details on Percona Monitoring and Management installation and setup.

    "},{"location":"monitoring.html#other-reading","title":"Other reading","text":""},{"location":"monitoring.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"nbo.html","title":"Non-Blocking Operations (NBO) method for Online Scheme Upgrades (OSU)","text":"

    An Online Schema Upgrade can be a daily issue in an environment with accelerated development and deployment. The task becomes more difficult as the data grows. An ALTER TABLE statement is a multi-step operation and must run until it is complete. Aborting the statement may be more expensive than letting it complete.

    The Non-Blocking Operations (NBO) method is similar to the TOI method (see Online Schema Upgrade for more information on the available types of online schema upgrades). Every replica processes the DDL statement at the same point in the cluster transaction stream, and other transactions cannot commit during the operation. The NBO method provides a more efficient locking strategy and avoids the TOI issue of long-running DDL statements blocking cluster updates.

    In the NBO method, the supported DDL statement acquires a metadata lock on the table or schema at a late stage of the operation. The lock_wait_timeout system variable defines the timeout, measured in seconds, to acquire metadata locks. The default value, 31536000 (one year), could cause effectively infinite waits and should not be used with the NBO method.
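
    Before running a DDL statement with the NBO method, set a finite timeout in the session (the value below is only illustrative):

    mysql> SET SESSION lock_wait_timeout=90;\n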

    Attempting a State Snapshot Transfer (SST) fails during the NBO operation.

    To dynamically set the NBO mode in the client, run the following statement:

    SET SESSION wsrep_OSU_method='NBO';\n
    "},{"location":"nbo.html#supported-ddl-statements","title":"Supported DDL statements","text":"

    The NBO method supports the following DDL statements:

    "},{"location":"nbo.html#limitations","title":"Limitations","text":"

    The NBO method does not support the following:

    See the Percona XtraDB Cluster 8.0.25-15.1 Release notes for the latest information.

    "},{"location":"nbo.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"online-schema-upgrade.html","title":"Online schema upgrade","text":"

    Database schemas must change as applications change. For a cluster, the schema upgrade must occur while the system is online. A synchronous cluster requires that all active nodes have the same data. Schema updates are performed using Data Definition Language (DDL) statements, such as ALTER TABLE <table_name> DROP COLUMN <column_name>.

    The DDL statements are non-transactional, so these statements use up-front locking to avoid the chance of deadlocks and cannot be rolled back. We recommend that you test your schema changes, especially if you must run an ALTER statement on large tables. Verify the backups before updating the schemas in the production environment. A failure in a schema change can cause your cluster to drop nodes and lose data.

    Percona XtraDB Cluster supports the following methods for making online schema changes:

    Method Name Reason for use Description TOI or Total Order Isolation Consistency is important. Other transactions are blocked while the cluster processes the DDL statements. This is the default method for the wsrep-OSU-method variable. The isolation of the DDL statement guarantees consistency. The DDL replication uses a Statement format. Each node processes the replicated DDL statement at same position in the replication stream. All other writes must wait until the DDL statement is executed. While a DDL statement is running, any long-running transactions in progress and using the same resource receive a deadlock error at commit and are rolled back. The pt-online-schema-change in the Percona Toolkit can alter the table without using locks. There are limitations: only InnoDB tables can be altered, and the wsrep_OSU_method must be TOI. RSU or Rolling Schema Upgrade This method guarantees high availability during the schema upgrades. The node desynchronizes with the cluster and disables flow control during the execution of the DDL statement. The rest of the cluster is not affected. After the statement execution, the node applies delayed events and synchronizes with the cluster. Although the cluster is active, during the process some nodes have the newer schema and some nodes have the older schema. The RSU method is a manual operation. For this method, the gcache must be large enough to store the data for the duration of the DDL change. NBO or Non-Blocking Operation This method is used when consistency is important and uses a more efficient locking strategy. This method is similar to TOI. DDL operations acquire an exclusive metadata lock on the table or schema at a late stage of the operation when updating the table or schema definition. Attempting a State Snapshot Transfer (SST) fails during the NBO operation. This mode uses a more efficient locking strategy and avoids the TOI issue of long-running DDL statements blocking other updates in the cluster."},{"location":"online-schema-upgrade.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"performance-schema-instrumentation.html","title":"Perfomance Schema instrumentation","text":"

To improve monitoring, Percona XtraDB Cluster implements an infrastructure that exposes Galera instruments (mutexes, condition variables, files, and threads) as part of PERFORMANCE_SCHEMA.

Although mutexes and condition variables from wsrep were already part of PERFORMANCE_SCHEMA, threads were not.

Mutexes, condition variables, threads, and files from the Galera library were also not part of PERFORMANCE_SCHEMA.

    You can see the complete list of available instruments by running:

    mysql> SELECT * FROM performance_schema.setup_instruments WHERE name LIKE '%galera%' OR name LIKE '%wsrep%';\n
    Expected output
    +----------------------------------------------------------+---------+-------+\n| NAME                                                     | ENABLED | TIMED |\n+----------------------------------------------------------+---------+-------+\n| wait/synch/mutex/sql/LOCK_wsrep_ready                    | NO      | NO    |\n| wait/synch/mutex/sql/LOCK_wsrep_sst                      | NO      | NO    |\n| wait/synch/mutex/sql/LOCK_wsrep_sst_init                 | NO      | NO    |\n...\n| stage/wsrep/wsrep: in rollback thread                    | NO      | NO    |\n| stage/wsrep/wsrep: aborter idle                          | NO      | NO    |\n| stage/wsrep/wsrep: aborter active                        | NO      | NO    |\n+----------------------------------------------------------+---------+-------+\n73 rows in set (0.00 sec)\n

    Some of the most important are:

This feature exposes all the important mutexes and condition variables related to locks, threads, and files.

Besides exposing files, the instrumentation also tracks read and write statistics (such as bytes read and written) for each file. These statistics are not exposed for Galera files because Galera uses mmap.

Some threads are short-lived and are created only when needed, especially for SST/IST purposes. These threads are also tracked, but they appear in the PERFORMANCE_SCHEMA tables only if and when they are created.

Stage information from Galera-specific functions, which the server updates to track the state of a running thread, is also visible in PERFORMANCE_SCHEMA.

    "},{"location":"performance-schema-instrumentation.html#what-is-not-exposed","title":"What is not exposed ?","text":"

Galera uses custom data structures in some cases (such as STL structures). Mutexes that protect these structures, which are not part of the mainline Galera logic or do not affect the big picture, are not tracked. The same applies to threads that are specific to the gcomm library.

Galera maintains a process vector inside each monitor for its internal graph creation. This process vector is 65K in size, and there are two such vectors per monitor. That is 128K * 3 = 384K condition variables. These are not tracked to avoid exhausting PERFORMANCE_SCHEMA limits and sidelining the crucial information.

    "},{"location":"performance-schema-instrumentation.html#use-pxc_cluster_view","title":"Use pxc_cluster_view","text":"

The pxc_cluster_view table provides a unified view of the cluster. The table is in the performance_schema database.

    DESCRIBE pxc_cluster_view;\n

    This table has the following definition:

    Expected output
    +-------------+--------------+------+-----+---------+-------+\n| Field       | Type         | Null | Key | Default | Extra |\n+-------------+--------------+------+-----+---------+-------+\n| HOST_NAME   | char(64)     | NO   |     | NULL    |       |\n| UUID        | char(36)     | NO   |     | NULL    |       |\n| STATUS      | char(64)     | NO   |     | NULL    |       |\n| LOCAL_INDEX | int unsigned | NO   |     | NULL    |       |\n| SEGMENT     | int unsigned | NO   |     | NULL    |       |\n+-------------+--------------+------+-----+---------+-------+\n5 rows in set (0.00 sec)\n

    To view the table, run the following query:

    SELECT * FROM pxc_cluster_view;\n
    Expected output
    +-----------+--------------------------------------+--------+-------------+---------+\n| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |\n+-----------+--------------------------------------+--------+-------------+---------+\n| node1     | 22b9d47e-c215-11eb-81f7-7ed65a9d253b | SYNCED |           0 |       0 |\n| node3     | 29c51cf5-c216-11eb-9101-1ba3a28e377a | SYNCED |           1 |       0 |\n| node2     | 982cdb03-c215-11eb-9865-0ae076a59c5c | SYNCED |           2 |       0 |\n+-----------+--------------------------------------+--------+-------------+---------+\n3 rows in set (0.00 sec)\n
    "},{"location":"performance-schema-instrumentation.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"proxysql-v2.html","title":"ProxySQL admin utilities","text":"

    The ProxySQL and ProxySQL admin utilities documentation provides information on installing and running ProxySQL 1.x.x or ProxySQL 2.x.x with the following ProxySQL admin utilities:

    "},{"location":"proxysql-v2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"quickstart-overview.html","title":"Quickstart Guide for Percona XtraDB Cluster","text":"

Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve the performance and scalability of the database environments that support their critical business applications in the most demanding public, private, and hybrid cloud environments.

    "},{"location":"quickstart-overview.html#install-percona-xtradb-cluster","title":"Install Percona XtraDB Cluster","text":"

    You can install Percona XtraDB Cluster using different methods.

    "},{"location":"quickstart-overview.html#for-superior-and-optimized-performance","title":"For superior and optimized performance","text":"

    Percona Server for MySQL (PS) is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior and optimized performance, greater scalability and availability, enhanced backups, increased visibility, and instrumentation. Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads.

    Install Percona Server for MySQL.

    "},{"location":"quickstart-overview.html#for-backups-and-restores","title":"For backups and restores","text":"

    Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL\u00ae that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.

    Install Percona XtraBackup

    "},{"location":"quickstart-overview.html#for-monitoring-and-management","title":"For Monitoring and Management","text":"

Percona Monitoring and Management (PMM) monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.

    Install PMM and connect your MySQL instances to it.

    "},{"location":"quickstart-overview.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"restarting-nodes.html","title":"Restart the cluster nodes","text":"

To restart a cluster node, shut down MySQL and restart it. The node should leave the cluster (and the total vote count for quorum should decrement).

When it rejoins, the node should synchronize using IST. If the set of changes needed for IST is not found in the gcache file on any other node in the entire cluster, then SST is performed instead. Therefore, restarting cluster nodes for rolling configuration changes or software upgrades is rather simple from the cluster\u2019s perspective.

    Note

    If you restart a node with an invalid configuration change that prevents MySQL from loading, Galera will drop the node\u2019s state and force an SST for that node.

    Note

If MySQL fails for any reason, it does not remove its PID file, which by design is deleted only on a clean shutdown. The server will not restart while an existing PID file is present. Therefore, if MySQL fails for any reason and the failure is confirmed by the relevant records in the log, remove the PID file manually before restarting the server.
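
For example, assuming the default data directory /var/lib/mysql and a hypothetical host name pxc-node1, you can remove the stale PID file and restart the node as follows:

$ ls /var/lib/mysql/*.pid\n$ sudo rm /var/lib/mysql/pxc-node1.pid\n$ sudo systemctl start mysql\n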

    "},{"location":"restarting-nodes.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"secure-network.html","title":"Secure the network","text":"

    By default, anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. This could potentially let them query your data or get a complete copy of it.

    In general, it is a good idea to disable all remote connections to Percona XtraDB Cluster nodes. If you require clients or nodes from outside of your network to connect, you can set up a VPN (virtual private network) for this purpose.

    "},{"location":"secure-network.html#firewall-configuration","title":"Firewall configuration","text":"

    A firewall can let you filter Percona XtraDB Cluster traffic based on the clients and nodes that you trust.

    By default, Percona XtraDB Cluster nodes use the following ports:

    Ideally you want to make sure that these ports on each node are accessed only from trusted IP addresses. You can implement packet filtering using iptables, firewalld, pf, or any other firewall of your choice.

    "},{"location":"secure-network.html#use-iptables","title":"Use iptables","text":"

    To restrict access to Percona XtraDB Cluster ports using iptables, you need to append new rules to the INPUT chain on the filter table. In the following example, the trusted range of IP addresses is 192.168.0.1/24. It is assumed that only Percona XtraDB Cluster nodes and clients will connect from these IPs. To enable packet filtering, run the commands as root on each Percona XtraDB Cluster node.

    # iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 3306 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4444 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4567 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol tcp --match tcp --dport 4568 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n# iptables --append INPUT --in-interface eth0 \\\n   --protocol udp --match udp --dport 4567 \\\n   --source 192.168.0.1/24 --jump ACCEPT\n

    Note

    The last one opens port 4567 for multicast replication over UDP.

If the trusted IPs are not in sequence, you will need to run these commands for each address on each node. In this case, you can consider opening all ports between trusted hosts. This is a little less secure, but it reduces the number of commands. For example, if you have three Percona XtraDB Cluster nodes, you can run the following commands on each one:

    # iptables --append INPUT --protocol tcp \\\n    --source 64.57.102.34 --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n    --source 193.166.3.20  --jump ACCEPT\n# iptables --append INPUT --protocol tcp \\\n    --source 193.125.4.10  --jump ACCEPT\n

    Running the previous commands will allow TCP connections from the IP addresses of the other Percona XtraDB Cluster nodes.

    Note

    The changes that you make in iptables are not persistent unless you save the packet filtering state:

# service iptables save\n

    For distributions that use systemd, you need to save the current packet filtering rules to the path where iptables reads from when it starts. This path can vary by distribution, but it is usually in the /etc directory. For example:

    Use iptables-save to update the file:

    # iptables-save > /etc/sysconfig/iptables\n
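
If you use firewalld instead of iptables, a similar result can be achieved with rich rules. The following is a minimal sketch that assumes the trusted subnet 192.168.0.0/24 (the network form of the range used above) and the default ports:

# firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.0.0/24\" port port=\"3306\" protocol=\"tcp\" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.0.0/24\" port port=\"4444\" protocol=\"tcp\" accept'\n# firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.0.0/24\" port port=\"4567-4568\" protocol=\"tcp\" accept'\n# firewall-cmd --reload\n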
    "},{"location":"secure-network.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"security-index.html","title":"Security basics","text":"

    By default, Percona XtraDB Cluster does not provide any protection for stored data. There are several considerations to take into account for securing Percona XtraDB Cluster:

    Anyone with access to your network can connect to any Percona XtraDB Cluster node either as a client or as another node joining the cluster. You should consider restricting access using VPN and filter traffic on ports used by Percona XtraDB Cluster.

    Unencrypted traffic can potentially be viewed by anyone monitoring your network. In Percona XtraDB Cluster 8.0 traffic encryption is enabled by default.

    Percona XtraDB Cluster supports tablespace encryption to provide at-rest encryption for physical tablespace data files.

    For more information, see the following blog post:

  • MySQL Data at Rest Encryption: https://www.percona.com/blog/2016/04/08/mysql-data-at-rest-encryption/
    "},{"location":"security-index.html#security-modules","title":"Security modules","text":"

    Most modern distributions include special security modules that control access to resources for users and applications. By default, these modules will most likely constrain communication between Percona XtraDB Cluster nodes.

The easiest solution is to disable or remove such programs; however, this is not recommended for production environments. Instead, you should create the necessary security policies for Percona XtraDB Cluster.

    "},{"location":"security-index.html#selinux","title":"SELinux","text":"

SELinux is usually enabled by default in Red Hat Enterprise Linux and derivatives (including CentOS). SELinux helps protect the user\u2019s home directory data and provides the following:

    To help with troubleshooting, during installation and configuration, you can set the mode to permissive:

    $ setenforce 0\n

    Note

    This action changes the mode only at runtime.

    See also

For more information, see Enable SELinux

    "},{"location":"security-index.html#apparmor","title":"AppArmor","text":"

AppArmor is included in Debian and Ubuntu. Percona XtraDB Cluster contains several AppArmor profiles, which allow for easier maintenance. To help with troubleshooting during the installation and configuration, you can set the mode to complain for mysqld.

    See also

    For more information, see Enabling AppArmor

    "},{"location":"security-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"selinux.html","title":"Enable SELinux","text":"

SELinux helps protect the user\u2019s home directory data. SELinux provides the following:

    For more information, see Percona Server and SELinux

Red Hat and CentOS distribute a policy module that extends the SELinux policy for mysqld. We provide the following:

    "},{"location":"selinux.html#modify-policies","title":"Modify policies","text":"

    Modifications described in Percona Server and SELinux can also be applied for Percona XtraDB Cluster.

    To adjust PXC-specific configurations, especially SST/IST ports, use the following procedures as root:

    To enable port 14567 instead of the default port 4567:

    Find the tag associated with the 4567 port:

    $ semanage port -l | grep 4567\ntram_port_t tcp 4567\n

    Run a command to find which rules grant mysqld access to the port:

    $ sesearch -A -s mysqld_t -t tram_port_t -c tcp_socket\nFound 5 semantic av rules:\n    allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n    allow mysqld_t tram_port_t : tcp_socket { name_bind name_connect } ;\n    allow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\n    allow mysqld_t port_type : tcp_socket name_connect ;\n    allow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n

You could tag port 14567 with the tram_port_t tag, but this tag may cause issues because port 14567 is not a TRAM port. Use the general mysqld_port_t tag to add ports. For example, the following command adds port 14567 to the policy module with the mysqld_port_t tag.

    $ semanage port -a -t mysqld_port_t -p tcp 14567\n

    You can verify the addition with the following command:

    $ semanage port -l | grep 14567\nmysqld_port_t                  tcp      4568, 14567, 1186, 3306, 63132-63164\n

    To see the tag associated with the 4444 port, run the following command:

    $ semanage port -l | grep 4444\nkerberos_port_t                tcp      88, 750, 4444\nkerberos_port_t                udp      88, 750, 4444\n

    To find the rules associated with kerberos_port_t, run the following:

    $ sesearch -A -s mysqld_t -t kerberos_port_t -c tcp_socket\nFound 9 semantic av rules:\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t rpc_port_type : tcp_socket name_bind ;\nallow mysqld_t port_type : tcp_socket { recv_msg send_msg } ;\nallow mysqld_t port_type : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket name_connect ;\nallow nsswitch_domain kerberos_port_t : tcp_socket { recv_msg send_msg } ;\nallow nsswitch_domain reserved_port_type : tcp_socket name_connect ;\nallow mysqld_t reserved_port_type : tcp_socket name_connect ;\nallow nsswitch_domain port_type : tcp_socket { recv_msg send_msg } ;\n

If you need to add port 14444, use the same method that was used to add port 14567.
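
For example, assuming you choose port 14444, add it with the mysqld_port_t tag in the same way:

$ semanage port -a -t mysqld_port_t -p tcp 14444\n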

    If you must use a port that is already tagged, you can use either of the following ways:

    "},{"location":"selinux.html#work-with-pxc_encrypt_cluster_traffic","title":"Work with pxc_encrypt_cluster_traffic","text":"

By default, the pxc_encrypt_cluster_traffic variable is ON, which means that all cluster traffic is protected with certificates. However, these certificates cannot be located in the data directory because that location is overwritten during the SST process.

    Review How to set up the certificates. When SELinux is enabled, mysqld must have access to these certificates. The following items must be checked or considered:

    $ restorecon -v /etc/mysql/certs/*\n
    "},{"location":"selinux.html#enable-enforcing-mode-for-pxc","title":"Enable enforcing mode for PXC","text":"

By default, the mysqld process runs in permissive mode even if SELinux runs in enforcing mode:

    $ semodule -l | grep permissive\npermissive_mysqld_t\npermissivedomains\n

    After ensuring that the system journal does not list any issues, the administrator can remove the permissive mode for mysqld_t:

    $ semanage permissive -d mysqld_t\n

    See also

    MariaDB 10.2 Galera Cluster with SELinux-enabled on CentOS 7

    "},{"location":"selinux.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"set-up-3nodes-ec2.html","title":"How to set up a three-node cluster in EC2 environment","text":"

    This manual assumes you are running three EC2 instances with Red Hat Enterprise Linux 7 64-bit.

    "},{"location":"set-up-3nodes-ec2.html#recommendations-on-launching-ec2-instances","title":"Recommendations on launching EC2 instances","text":"
1. Select instance types that support Enhanced Networking functionality. Good network performance is critical for the synchronous replication used in Percona XtraDB Cluster.

    2. When adding instance storage volumes, choose the ones with good I/O performance:

      • instances with NVMe are preferred

  • GP2 SSD volumes are preferred to GP3 SSD volume types due to I/O latency

  • oversized GP2 SSD volumes are preferred to IO1 volume types due to cost

3. Attach Elastic network interfaces with static IPs or assign Elastic IP addresses to your instances. This way, IP addresses are preserved on instances in case of a reboot or restart. This is required because each Percona XtraDB Cluster member includes the wsrep_cluster_address option in its configuration, which points to the other cluster members.

    4. Launch instances in different availability zones to avoid cluster downtime in case one of the zones experiences power loss or network connectivity issues.

      See also

      Amazon EC2 Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

    To set up Percona XtraDB Cluster:

    1. Remove Percona XtraDB Cluster and Percona Server for MySQL packages for older versions:

      • Percona XtraDB Cluster 5.6, 5.7

      • Percona Server for MySQL 5.6, 5.7

    2. Install Percona XtraDB Cluster as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS.

    3. Create data directories:

  $ mkdir -p /mnt/data\n$ mysqld --initialize-insecure --datadir=/mnt/data --user=mysql\n
    4. Stop the firewall service:

      $ service iptables stop\n

      Note

  Alternatively, you can keep the firewall running, but open ports 3306, 4444, 4567, 4568. For example, to open port 4567 for connections from the 192.168.0.1/24 range:

      $ iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT\n
    5. Create /etc/my.cnf files:

      Contents of the configuration file on the first node:

      [mysqld]\ndatadir=/mnt/data\nuser=mysql\n\nbinlog_format=ROW\n\nwsrep_provider=/usr/lib64/libgalera_smm.so\nwsrep_cluster_address=gcomm://10.93.46.58,10.93.46.59,10.93.46.60\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node1\n\ninnodb_autoinc_lock_mode=2\n

      For the second and third nodes change the following lines:

      wsrep_node_name=node2\n\nwsrep_node_name=node3\n
    6. Start and bootstrap Percona XtraDB Cluster on the first node:

      [root@pxc1 ~]# systemctl start mysql@bootstrap.service\n
      Expected output
      2014-01-30 11:52:35 23280 [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n
    7. Start the second and third nodes:

      $ sudo systemctl start mysql\n
      Expected output
      ... [Note] WSREP: Flow-control interval: [28, 28]\n... [Note] WSREP: Restored state OPEN -> JOINED (2)\n... [Note] WSREP: Member 2 (percona1) synced with group.\n... [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n... [Note] WSREP: New cluster view: global state: 4827a206-876b-11e3-911c-3e6a77d54953:2, view# 7: Primary, number of nodes: 3, my index: 2, protocol version 2\n... [Note] WSREP: SST complete, seqno: 2\n... [Note] Plugin 'FEDERATED' is disabled.\n... [Note] InnoDB: The InnoDB memory heap is disabled\n... [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins\n... [Note] InnoDB: Compressed tables use zlib 1.2.3\n... [Note] InnoDB: Using Linux native AIO\n... [Note] InnoDB: Not using CPU crc32 instructions\n... [Note] InnoDB: Initializing buffer pool, size = 128.0M\n... [Note] InnoDB: Completed initialization of buffer pool\n... [Note] InnoDB: Highest supported file format is Barracuda.\n... [Note] InnoDB: 128 rollback segment(s) are active.\n... [Note] InnoDB: Waiting for purge to start\n... [Note] InnoDB:  Percona XtraDB (http://www.percona.com) ... started; log sequence number 1626341\n... [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.\n... [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.\n... [Note] Server hostname (bind-address): '*'; port: 3306\n... [Note] IPv6 is available.\n... [Note]   - '::' resolves to '::';\n... [Note] Server socket created on IP: '::'.\n... [Note] Event Scheduler: Loaded 0 events\n... [Note] /usr/sbin/mysqld: ready for connections.\nVersion: '...'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release ..., Revision ..., wsrep_version\n... [Note] WSREP: inited wsrep sidno 1\n... [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.\n... [Note] WSREP: REPL Protocols: 5 (3, 1)\n... [Note] WSREP: Assign initial position for certification: 2, protocol version: 3\n... [Note] WSREP: Service thread queue flushed.\n... [Note] WSREP: Synchronized with group, ready for connections\n

      When all nodes are in SYNCED state, your cluster is ready.

8. You can connect to MySQL on any node and create a database:

      $ mysql -uroot\n> CREATE DATABASE hello_tom;\n
      The new database will be propagated to all nodes.
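
  To verify the propagation, you can check for the database on another node; for example, on the second or third node:

  $ mysql -uroot -e \"SHOW DATABASES LIKE 'hello_tom';\"\n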

    "},{"location":"set-up-3nodes-ec2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"singlebox.html","title":"How to set up a three-node cluster on a single box","text":"

    This tutorial describes how to set up a 3-node cluster on a single physical box.

    For the purposes of this tutorial, assume the following:

    To set up the cluster:

    1. Create three MySQL configuration files for the corresponding nodes:

      • /etc/my.4000.cnf
      [mysqld]\nport = 4000\nsocket=/tmp/mysql.4000.sock\ndatadir=/data/bench/d1\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:5030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:4020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:4030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node4000\ninnodb_autoinc_lock_mode=2\n
      • /etc/my.5000.cnf
      [mysqld]\nport = 5000\nsocket=/tmp/mysql.5000.sock\ndatadir=/data/bench/d2\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:6030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:5020\nwsrep_node_incoming_address=192.168.2.21\n\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:5030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node5000\ninnodb_autoinc_lock_mode=2\n
      • /etc/my.6000.cnf
      [mysqld]\nport = 6000\nsocket=/tmp/mysql.6000.sock\ndatadir=/data/bench/d3\nbasedir=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64\nuser=mysql\nlog_error=error.log\nbinlog_format=ROW\nwsrep_cluster_address='gcomm://192.168.2.21:4030,192.168.2.21:5030'\nwsrep_provider=/usr/local/Percona-XtraDB-Cluster-8.0.x86_64/lib/libgalera_smm.so\nwsrep_sst_receive_address=192.168.2.21:6020\nwsrep_node_incoming_address=192.168.2.21\nwsrep_cluster_name=trimethylxanthine\nwsrep_provider_options = \"gmcast.listen_addr=tcp://192.168.2.21:6030;\"\nwsrep_sst_method=xtrabackup-v2\nwsrep_node_name=node6000\ninnodb_autoinc_lock_mode=2\n
    2. Create three data directories for the nodes:

      • /data/bench/d1

      • /data/bench/d2

      • /data/bench/d3

    3. Start the first node using the following command (from the Percona XtraDB Cluster install directory):

      $ bin/mysqld_safe --defaults-file=/etc/my.4000.cnf --wsrep-new-cluster\n

      If the node starts correctly, you should see the following output:

      Expected output
      111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)\n111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 1\n

      To check the ports, run the following command:

      $ netstat -anp | grep mysqld\ntcp        0      0 192.168.2.21:4030           0.0.0.0:*                   LISTEN      21895/mysqld\ntcp        0      0 0.0.0.0:4000                0.0.0.0:*                   LISTEN      21895/mysqld\n
    4. Start the second and third nodes:

      $ bin/mysqld_safe --defaults-file=/etc/my.5000.cnf\n$ bin/mysqld_safe --defaults-file=/etc/my.6000.cnf\n

  If the nodes start and join the cluster successfully, you should see the following output:

      111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)\n111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)\n111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections\n

      To check the cluster size, run the following command:

      $ mysql -h127.0.0.1 -P6000 -e \"show global status like 'wsrep_cluster_size';\"\n
      Expected output
      +--------------------+-------+\n| Variable_name      | Value |\n+--------------------+-------+\n| wsrep_cluster_size | 3     |\n+--------------------+-------+\n

      After that you can connect to any node and perform queries, which will be automatically synchronized with other nodes. For example, to create a database on the second node, you can run the following command:

      $ mysql -h127.0.0.1 -P5000 -e \"CREATE DATABASE hello_peter\"\n
    "},{"location":"singlebox.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"state-snapshot-transfer.html","title":"State snapshot transfer","text":"

    State Snapshot Transfer (SST) is a full data copy from one node (donor) to the joining node (joiner). It\u2019s used when a new node joins the cluster. In order to be synchronized with the cluster, the new node has to receive data from a node that is already part of the cluster.

Percona XtraDB Cluster performs SST using Percona XtraBackup.

XtraBackup SST uses backup locks, which means that, unlike with earlier SST methods, the Galera provider is not paused at all. The SST method is configured using the wsrep_sst_method variable.

    Note

    If the gcs.sync_donor variable is set to Yes (default is No), the whole cluster will get blocked if the donor is blocked by SST.

    "},{"location":"state-snapshot-transfer.html#choose-the-sst-donor","title":"Choose the SST Donor","text":"

    If there are no nodes available that can safely perform incremental state transfer (IST), the cluster defaults to SST.

    If there are nodes available that can perform IST, the cluster prefers a local node over remote nodes to serve as the donor.

    If there are no local nodes available that can perform IST, the cluster chooses a remote node to serve as the donor.

    If there are several local and remote nodes that can perform IST, the cluster chooses the node with the highest seqno to serve as the donor.
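
If you prefer to choose the donor yourself instead of relying on these rules, you can list one or more preferred donor nodes in the wsrep_sst_donor variable. A minimal my.cnf sketch with placeholder node names:

[mysqld]\nwsrep_sst_donor=node2,node3\n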

    "},{"location":"state-snapshot-transfer.html#use-percona-xtrabackup","title":"Use Percona Xtrabackup","text":"

    The default SST method is xtrabackup-v2 which uses Percona XtraBackup. This is the least blocking method that leverages backup locks. XtraBackup is run locally on the donor node.

The datadir needs to be specified in the server configuration file my.cnf; otherwise, the transfer process will fail.

    Detailed information on this method is provided in Percona XtraBackup SST Configuration documentation.

    "},{"location":"state-snapshot-transfer.html#sst-for-tables-with-tablespaces-that-are-not-in-the-data-directory","title":"SST for tables with tablespaces that are not in the data directory","text":"

    For example:

    CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';\n
    "},{"location":"state-snapshot-transfer.html#sst-using-percona-xtrabackup","title":"SST using Percona XtraBackup","text":"

    XtraBackup will restore the table to the same location on the joiner node. If the target directory does not exist, it will be created. If the target file already exists, an error will be returned, because XtraBackup cannot clear tablespaces not in the data directory.

    "},{"location":"state-snapshot-transfer.html#other-reading","title":"Other reading","text":""},{"location":"state-snapshot-transfer.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"strict-mode.html","title":"Percona XtraDB Cluster strict mode","text":"

    The Percona XtraDB Cluster (PXC) Strict Mode is designed to avoid the use of tech preview features and unsupported features in PXC. This mode performs a number of validations at startup and during runtime.

    Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:

By default, PXC Strict Mode is set to ENFORCING. The exception is when the node is acting as a standalone server or is bootstrapping; in that case, PXC Strict Mode defaults to DISABLED.

It is recommended to keep PXC Strict Mode set to ENFORCING: in this case, whenever Percona XtraDB Cluster encounters a tech preview feature or an unsupported operation, the server denies it. This forces you to re-evaluate your Percona XtraDB Cluster configuration without risking the consistency of your data.

If you are planning to set PXC Strict Mode to anything other than ENFORCING, you should be aware of the limitations and the effect this may have on data integrity. For more information, see Validations.

    To set the mode, use the pxc_strict_mode variable in the configuration file or the --pxc-strict-mode option during mysqld startup.

    Note

    It is better to start the server with the necessary mode (the default ENFORCING is highly recommended). However, you can dynamically change it during runtime. For example, to set PXC Strict Mode to PERMISSIVE, run the following command:

    mysql> SET GLOBAL pxc_strict_mode=PERMISSIVE;\n

    Note

To further ensure data consistency, it is important to have all nodes in the cluster running with the same configuration, including the value of the pxc_strict_mode variable.
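
To check the current value on a node, you can run:

mysql> SHOW VARIABLES LIKE 'pxc_strict_mode';\n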

    "},{"location":"strict-mode.html#validations","title":"Validations","text":"

    PXC Strict Mode validations are designed to ensure optimal operation for common cluster setups that do not require tech preview features and do not rely on operations not supported by Percona XtraDB Cluster.

    Warning

    If an unsupported operation is performed on a node with pxc_strict_mode set to DISABLED or PERMISSIVE, it will not be validated on nodes where it is replicated to, even if the destination node has pxc_strict_mode set to ENFORCING.

    This section describes the purpose and consequences of each validation.

    "},{"location":"strict-mode.html#group-replication","title":"Group replication","text":"

    Group replication is a feature of MySQL that provides distributed state machine replication with strong coordination between servers. It is implemented as a plugin which, if activated, may conflict with PXC. Group replication cannot be activated to run alongside PXC. However, you can migrate to PXC from the environment that uses group replication.

    For the strict mode to work correctly, make sure that the group replication plugin is not active. In fact, if pxc_strict_mode is set to ENFORCING or MASTER, the server will stop with an error:

    Error message with pxc_strict_mode set to ENFORCING or MASTER

    The error message
    Group replication cannot be used with PXC in strict mode.\n

    If pxc_strict_mode is set to DISABLED you can use group replication at your own risk. Setting pxc_strict_mode to PERMISSIVE will result in a warning.

    Warning message with pxc_strict_mode set to PERMISSIVE

    Warning message
    Using group replication with PXC is only supported for migration. Please\nmake sure that group replication is turned off once all data is migrated to PXC.\n
    "},{"location":"strict-mode.html#storage-engine","title":"Storage engine","text":"

    Percona XtraDB Cluster currently supports replication only for tables that use a transactional storage engine (XtraDB or InnoDB). To ensure data consistency, the following statements should not be allowed for tables that use a non-transactional storage engine (MyISAM, MEMORY, CSV, and others):

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on an unsupported table.

    ENFORCING or MASTER

    At startup, no validation is performed.

    At runtime, any undesirable operation performed on an unsupported table is denied and an error is logged.

    Note

    Unsupported tables can be converted to use a supported storage engine.
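
For example, a hypothetical MyISAM table t1 can be converted to InnoDB with the following statement:

mysql> ALTER TABLE t1 ENGINE=InnoDB;\n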

    "},{"location":"strict-mode.html#myisam-replication","title":"MyISAM replication","text":"

Percona XtraDB Cluster provides support for replication of tables that use the MyISAM storage engine. The use of the MyISAM storage engine in a cluster is not recommended; if you use it, you do so at your own risk. Due to the non-transactional nature of MyISAM, the storage engine is not fully supported in Percona XtraDB Cluster.

    MyISAM replication is controlled using the wsrep_replicate_myisam variable, which is set to OFF by default. Due to its unreliability, MyISAM replication should not be enabled if you want to ensure data consistency.

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, you can set wsrep_replicate_myisam to any value.

    PERMISSIVE

    At startup, if wsrep_replicate_myisam is set to ON, a warning is logged and startup continues.

    At runtime, it is permitted to change wsrep_replicate_myisam to any value, but if you set it to ON, a warning is logged.

    ENFORCING or MASTER

    At startup, if wsrep_replicate_myisam is set to ON, an error is logged and startup is aborted.

    At runtime, any attempt to change wsrep_replicate_myisam to ON fails and an error is logged.

    Note

    The wsrep_replicate_myisam variable controls replication for MyISAM tables, and this validation only checks whether it is allowed. Undesirable operations for MyISAM tables are restricted using the Storage engine validation.

    "},{"location":"strict-mode.html#binary-log-format","title":"Binary log format","text":"

    Percona XtraDB Cluster supports only the default row-based binary logging format. In 8.0, setting the binlog_format variable to anything but ROW at startup or runtime is not allowed regardless of the value of the pxc_strict_mode variable.

    "},{"location":"strict-mode.html#tables-without-primary-keys","title":"Tables without primary keys","text":"

Percona XtraDB Cluster cannot properly propagate certain write operations to tables that do not have primary keys defined. Undesirable operations include data manipulation statements that write to such tables (especially DELETE).

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed on a table without an explicit primary key defined.

    ENFORCING or MASTER

    At startup, no validation is performed.

    At runtime, any undesirable operation performed on a table without an explicit primary key is denied and an error is logged.
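
To make such a table safe to replicate, add an explicit primary key. For example, for a hypothetical table t1 that has a unique id column:

mysql> ALTER TABLE t1 ADD PRIMARY KEY (id);\n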

    "},{"location":"strict-mode.html#log-output","title":"Log output","text":"

Percona XtraDB Cluster does not support tables in the mysql system database as the destination for log output. By default, log entries are written to a file. This validation checks the value of the log_output variable.

    Depending on the selected mode, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, you can set log_output to any value.

    PERMISSIVE

    At startup, if log_output is set only to TABLE, a warning is logged and startup continues.

    At runtime, it is permitted to change log_output to any value, but if you set it only to TABLE, a warning is logged.

    ENFORCING or MASTER

    At startup, if log_output is set only to TABLE, an error is logged and startup is aborted.

    At runtime, any attempt to change log_output only to TABLE fails and an error is logged.

    "},{"location":"strict-mode.html#explicit-table-locking","title":"Explicit table locking","text":"

Percona XtraDB Cluster provides only tech-preview-level support for explicit table locking operations. The following undesirable operations lead to explicit table locking and are covered by this validation:

    Depending on the selected mode, the following happens:

    DISABLED or MASTER

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when an undesirable operation is performed.

    ENFORCING

    At startup, no validation is performed.

    At runtime, any undesirable operation is denied and an error is logged.

    "},{"location":"strict-mode.html#auto-increment-lock-mode","title":"Auto-increment lock mode","text":"

    The lock mode for generating auto-increment values must be interleaved to ensure that each node generates a unique (but non-sequential) identifier.

    This validation checks the value of the innodb_autoinc_lock_mode variable. By default, the variable is set to 1 (consecutive lock mode), but it should be set to 2 (interleaved lock mode).
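
Because the variable cannot be set dynamically, it is configured in my.cnf; for example:

[mysqld]\ninnodb_autoinc_lock_mode=2\n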

    Depending on the strict mode selected, the following happens:

    DISABLED

    At startup, no validation is performed.

    PERMISSIVE

    At startup, if innodb_autoinc_lock_mode is not set to 2, a warning is logged and startup continues.

    ENFORCING or MASTER

    At startup, if innodb_autoinc_lock_mode is not set to 2, an error is logged and startup is aborted.

    Note

    This validation is not performed during runtime, because the innodb_autoinc_lock_mode variable cannot be set dynamically.

    "},{"location":"strict-mode.html#combine-schema-and-data-changes-in-a-single-statement","title":"Combine schema and data changes in a single statement","text":"

With strict mode set to ENFORCING, Percona XtraDB Cluster does not support CREATE TABLE \u2026 AS SELECT (CTAS) statements, because they combine both schema and data changes. Note that the tables in the SELECT clause must be present on all replication nodes.

With strict mode set to PERMISSIVE or DISABLED, CREATE TABLE \u2026 AS SELECT (CTAS) statements are replicated using the TOI method to ensure consistency.

    In Percona XtraDB Cluster 5.7, CREATE TABLE \u2026 AS SELECT (CTAS) statements were replicated using DML write-sets when strict mode was set to PERMISSIVE or DISABLED.

    Important

MyISAM tables are created and loaded even if wsrep_replicate_myisam equals 1. Percona XtraDB Cluster does not recommend using the MyISAM storage engine. The support for MyISAM may be removed in a future release.

    See also

    MySQL Bug System: XID inconsistency on master-slave with CTAS https://bugs.mysql.com/bug.php?id=93948

    Depending on the strict mode selected, the following happens:

    Mode Behavior DISABLED At startup, no validation is performed. At runtime, all operations are permitted. PERMISSIVE At startup, no validation is performed. At runtime, all operations are permitted, but a warning is logged when a CREATE TABLE \u2026 AS SELECT (CTAS) operation is performed. ENFORCING At startup, no validation is performed. At runtime, any CTAS operation is denied and an error is logged.

    Important

Although CREATE TABLE \u2026 AS SELECT (CTAS) operations for temporary tables are permitted even in STRICT mode, temporary tables should not be used as source tables in CREATE TABLE \u2026 AS SELECT (CTAS) operations because temporary tables are not present on all nodes.

If node-1 has a temporary table and a non-temporary table with the same name, CREATE TABLE \u2026 AS SELECT (CTAS) on node-1 will use the temporary table, and CREATE TABLE \u2026 AS SELECT (CTAS) on node-2 will use the non-temporary table, resulting in a data-level inconsistency.
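
With strict mode set to ENFORCING, a common workaround is to split the CTAS statement into a DDL statement and a separate DML statement. The following sketch uses hypothetical tables t1 and t1_copy:

mysql> CREATE TABLE t1_copy LIKE t1;\nmysql> INSERT INTO t1_copy SELECT * FROM t1;\n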

    "},{"location":"strict-mode.html#discard-and-import-tablespaces","title":"Discard and import tablespaces","text":"

    DISCARD TABLESPACE and IMPORT TABLESPACE are not replicated using TOI. This can lead to data inconsistency if executed on only one node.

    Depending on the strict mode selected, the following happens:

    DISABLED

    At startup, no validation is performed.

    At runtime, all operations are permitted.

    PERMISSIVE

    At startup, no validation is performed.

    At runtime, all operations are permitted, but a warning is logged when you discard or import a tablespace.

    ENFORCING

    At startup, no validation is performed.

    At runtime, discarding or importing a tablespace is denied and an error is logged.

    "},{"location":"strict-mode.html#major-version-check","title":"Major version check","text":"

    This validation checks that the protocol version is the same as the server major version. This validation protects the cluster against writes attempted on already upgraded nodes.

    Expected output
    ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of multiple major versions while accepting write workload with pxc_strict_mode = ENFORCING or MASTER\n
    "},{"location":"strict-mode.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"tarball.html","title":"Install Percona XtraDB Cluster from Binary Tarball","text":"

    Percona provides generic tarballs with all required files and binaries for manual installation.

    You can download the appropriate tarball package from https://www.percona.com/downloads/Percona-XtraDB-Cluster-80

    "},{"location":"tarball.html#version-updates","title":"Version updates","text":"

    Starting with Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section lists only full or minimal tar files. Each tarball file replaces the multiple tar file listing used in earlier versions and supports all distributions.

    Important

    Starting with Percona XtraDB Cluster 8.0.21, Percona does not provide a tarball for RHEL 6/CentOS 6 (glibc2.12).

    The version number in the tarball name must be substituted with the appropriate version number for your system. To indicate that such a substitution is needed in statements, we use <version-number>.

Name Type Description Percona-XtraDB-Cluster_<version-number>-Linux.x86_64.glibc2.17.tar.gz Full Contains binary files, libraries, test files, and debug symbols Percona-XtraDB-Cluster_<version-number>-Linux.x86_64.glibc2.17.minimal.tar.gz Minimal Contains binary files and libraries but does not include test files or debug symbols

    For installations before Percona XtraDB Cluster 8.0.20-11, the Linux - Generic section contains multiple tarballs based on the operating system names:

    Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.bionic.tar.gz\nPercona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.buster.tar.gz\n...\n

    For example, you can use curl as follows:

    $ curl -O https://downloads.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-8.0.27/binary/tarball/Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz\n
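
After the download completes, you can unpack the tarball into a directory of your choice; the target directory /usr/local below is only an example:

$ sudo tar -xzf Percona-XtraDB-Cluster_8.0.27-18.1_Linux.x86_64.glibc2.17-minimal.tar.gz -C /usr/local/\n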

    Check your system to make sure the packages that the PXC version requires are installed.

    "},{"location":"tarball.html#for-debian-or-ubuntu","title":"For Debian or Ubuntu:","text":"
    $ sudo apt-get install -y \\\nsocat libdbd-mysql-perl \\\nlibaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \\\nlibgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1\n
    "},{"location":"tarball.html#for-red-hat-enterprise-linux-or-centos","title":"For Red Hat Enterprise Linux or CentOS:","text":"
$ sudo yum install -y openssl socat \\\nprocps-ng chkconfig coreutils shadow-utils\n
    "},{"location":"tarball.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"telemetry.html","title":"Telemetry on Percona XtraDB Cluster","text":"

Percona telemetry fills in the gaps in our understanding of how you use Percona XtraDB Cluster to improve our products. Participation in the anonymous program is optional. You can opt out if you prefer not to share this information.

    "},{"location":"telemetry.html#what-information-is-collected","title":"What information is collected","text":"

    At this time, telemetry is added only to the Percona packages and Docker images. Percona XtraDB Cluster collects only information about the installation environment. Future releases may add additional metrics.

    Be assured that access to this raw data is rigorously controlled. Percona does not collect personal data. All data is anonymous and cannot be traced to a specific user. To learn more about our privacy practices, read our Percona Privacy statement.

    An example of the data collected is the following:

    [{\"id\" : \"c416c3ee-48cd-471c-9733-37c2886f8231\",\n\"product_family\" : \"PRODUCT_FAMILY_PXC\",\n\"instanceId\" : \"6aef422e-56a7-4530-af9d-94cc02198343\",\n\"createTime\" : \"2023-10-16T10:46:23Z\",\n\"metrics\":\n[{\"key\" : \"deployment\",\"value\" : \"PACKAGE\"},\n{\"key\" : \"pillar_version\",\"value\" : \"8.0.34-26\"},\n{\"key\" : \"OS\",\"value\" : \"Oracle Linux Server 8.8\"},\n{\"key\" : \"hardware_arch\",\"value\" : \"x86_64 x86_64\"}]}]\n
    "},{"location":"telemetry.html#disable-telemetry","title":"Disable telemetry","text":"

Starting with Percona XtraDB Cluster 8.0.34-26-1, telemetry is enabled by default. If you decide not to send usage data to Percona, you can set the PERCONA_TELEMETRY_DISABLE=1 environment variable, either for the root user or system-wide in the operating system, before the installation process.

The following examples cover a Debian-derived distribution, a Red Hat-derived distribution, and Docker.

    Add the environment variable before the install process.

    $ sudo PERCONA_TELEMETRY_DISABLE=1 apt install percona-xtradb-cluster\n

    Add the environment variable before the install process.

    $ sudo PERCONA_TELEMETRY_DISABLE=1 yum install percona-xtradb-cluster\n

    Add the environment variable when running a command in a new container.

    $ docker run -d -e MYSQL_ROOT_PASSWORD=test1234# -e PERCONA_TELEMETRY_DISABLE=1 -e CLUSTER_NAME=pxc-cluster1 --name=pxc-node1 percona/percona-xtradb-cluster:8.0\n
    "},{"location":"telemetry.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"threading-model.html","title":"Percona XtraDB Cluster threading model","text":"

    Percona XtraDB Cluster creates a set of threads to service its operations, which are not related to existing MySQL threads. There are three main groups of threads:

    "},{"location":"threading-model.html#applier-threads","title":"Applier threads","text":"

    Applier threads apply write-sets that the node receives from other nodes. Write messages are directed through gcv_recv_thread.

    The number of applier threads is controlled using the wsrep_slave_threads variable or the wsrep_applier_threads variable. The wsrep_slave_threads variable was deprecated in the Percona XtraDB Cluster 8.0.26-16 release. The default value is 1, which means at least one wsrep applier thread exists to process the request.

An applier thread waits for an event and, once it gets the event, applies it using the normal replica apply routine path and the relay log info apply path, with wsrep customization. These threads are similar to replica worker threads (but not exactly the same).

Coordination is achieved using the Apply and Commit Monitor. A transaction passes through two important states: APPLY and COMMIT. Every transaction registers itself with an apply monitor, where its apply order is defined. All transactions with an apply order sequence number (seqno) less than this transaction\u2019s seqno are applied before this transaction is applied. The same is done for commit as well (last_left >= trx_.depends_seqno()).
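
On write-heavy workloads, the applier thread count is often increased. A minimal my.cnf sketch (the value 4 is only an example):

[mysqld]\nwsrep_applier_threads=4\n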

    "},{"location":"threading-model.html#rollback-thread","title":"Rollback thread","text":"

    There is only one rollback thread to perform rollbacks in case of conflicts.

    All the transactions that need to be rolled back are added to the rollback queue, and the rollback thread is notified. The rollback thread then iterates over the queue and performs rollback operations.

If a transaction is active on a node and the node receives a write-set from the cluster group that conflicts with the local active transaction, the local transaction is always treated as the victim transaction and is rolled back.

    Transactions can be in a commit state or an execution stage when the conflict arises. Local transactions in the execution stage are forcibly killed so that the waiting applier transaction is allowed to proceed. Local transactions in the commit stage fail with a certification error.

    "},{"location":"threading-model.html#other-threads","title":"Other threads","text":""},{"location":"threading-model.html#service-thread","title":"Service thread","text":"

    This thread is created during boot-up and used to perform auxiliary services. It has two main functions:

    "},{"location":"threading-model.html#receiving-thread","title":"Receiving thread","text":"

    The gcs_recv_thread thread is the first one to see all the messages received in a group.

    It tries to assign an action to each message it receives and adds the messages to a central FIFO queue, from which the applier threads then process them. Messages can include different operations such as state changes, configuration updates, flow control, and so on.

    One important action is processing a write-set, which actually is applying transactions to database objects.

    "},{"location":"threading-model.html#gcomm-connection-thread","title":"Gcomm connection thread","text":"

    The gcomm connection thread GCommConn::run_fn coordinates the low-level group communication activity. Think of it as a black box meant for communication.

    "},{"location":"threading-model.html#action-based-threads","title":"Action-based threads","text":"

    Besides the above, some threads are created on an as-needed basis. SST creates threads for the donor and the joiner (which eventually fork out a child process to host the needed SST script), IST creates receiver and async sender threads, and PageStore creates a background thread for removing the files that were created.

    If the checksum is enabled and the replicated write-set is big enough, the checksum is done as part of a separate thread.

    "},{"location":"threading-model.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"trademark-policy.html","title":"Trademark policy","text":"

    This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.

    Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.

    Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.

    First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.

    Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.

    Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.

    Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.

    Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.

    In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.

    In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.

    "},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"upgrade-from-backup.html","title":"Restore a 5.7 backup to an 8.0 cluster","text":"

    Use Percona XtraBackup to back up the source server data and restore the data to a target server, and then upgrade the server to a different version of Percona XtraDB Cluster.

    Downgrading is not supported.

    "},{"location":"upgrade-from-backup.html#restore-a-database-with-a-different-server-version","title":"Restore a database with a different server version","text":"

    Review Upgrade Percona XtraDB cluster.

    Upgrade the nodes one at a time. The primary node should be the last node to be upgraded. The following steps are required on each node.

    1. Back up the data on the source server.

    2. Install the same database version as the source server on the target server.

    3. Restore with a copy-back operation on the target server.

    4. Start the database server on the target server.

    5. Do a slow shutdown of the database server: run the SET GLOBAL innodb_fast_shutdown=0 statement and then stop the server (a sketch follows this procedure). This shutdown type flushes InnoDB operations before completing and may take longer.

    6. Install the new database server version on the target server.

    7. Start the new database server version on the restored data directory.

    8. Perform any other upgrade steps as necessary.

    To ensure the upgrade was successful, check the data.
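
    A minimal sketch of the slow shutdown from step 5, assuming the service is managed as mysql on your platform (the service name may differ):

    mysql> SET GLOBAL innodb_fast_shutdown=0;\n$ sudo service mysql stop\n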

    "},{"location":"upgrade-from-backup.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"upgrade-guide.html","title":"Upgrade Percona XtraDB Cluster","text":"

    The following documents contain details about relevant changes in the 8.0 series of MySQL and Percona Server for MySQL. Make sure you deal with any incompatible features and variables mentioned in these documents when upgrading to Percona XtraDB Cluster 8.0.

    "},{"location":"upgrade-guide.html#important-changes-in-percona-xtradb-cluster-80","title":"Important changes in Percona XtraDB Cluster 8.0","text":""},{"location":"upgrade-guide.html#traffic-encryption-is-enabled-by-default","title":"Traffic encryption is enabled by default","text":"

    The pxc_encrypt_cluster_traffic variable, which enables traffic encryption, is set to ON by default in Percona XtraDB Cluster 8.0.

    If you do not configure the node accordingly (each node in your cluster must use the same SSL certificates), or if you try to join a cluster running PXC 5.7 with unencrypted cluster traffic, the node will not be able to join, and an error is reported.

    The error message is similar to the following:
    ... [ERROR] ... [Galera] handshake with remote endpoint ...\nThis error is often caused by SSL issues. ...\n
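
    As a hedged sketch, each node can point at the same certificate set through the standard MySQL SSL options in my.cnf; the file paths below are placeholders, not values from this guide:

    [mysqld]\nssl-ca=/etc/mysql/certs/ca.pem\nssl-cert=/etc/mysql/certs/server-cert.pem\nssl-key=/etc/mysql/certs/server-key.pem\n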

    See also

    sections Encrypting PXC Traffic, Configuring Nodes for Write-Set Replication

    "},{"location":"upgrade-guide.html#not-recommended-to-mix-pxc-57-nodes-with-pxc-80-nodes","title":"Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes","text":"

    Shut down the cluster and upgrade each node to PXC 8.0. It is important that you make backups before attempting an upgrade.

    "},{"location":"upgrade-guide.html#pxc-strict-mode-is-enabled-by-default","title":"PXC strict mode is enabled by default","text":"

    Percona XtraDB Cluster in 8.0 runs with PXC Strict Mode enabled by default. This will deny any unsupported operations and may halt the server if a strict mode validation fails. It is recommended to first start the node with the pxc_strict_mode variable set to PERMISSIVE in the MySQL configuration file.
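
    A minimal sketch of that recommendation in the MySQL configuration file:

    [mysqld]\npxc_strict_mode=PERMISSIVE\n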


    After you check the log for any tech preview features or unsupported features and you have fixed any of the encountered incompatibilities, set the variable back to ENFORCING at run time:

    mysql> SET pxc_strict_mode=ENFORCING;\n

    Restarting the node with the updated configuration file also sets the variable to ENFORCING.

    "},{"location":"upgrade-guide.html#the-configuration-file-layout-has-changed-in-pxc-80","title":"The configuration file layout has changed in PXC 8.0","text":"

    All configuration settings are stored in the default MySQL configuration file:

    Before you start the upgrade, move your custom settings from /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf (on Debian and Ubuntu) or from /etc/percona-xtradb-cluster.conf.d/wsrep.cnf (on Red Hat and CentOS) to the new location accordingly.

    Note

    If you have moved your my.cnf file to a different location and added a symlink to /etc/my.cnf, the RPM package manager, when upgrading, can delete the symlink and put a default my.cnf file in /etc/.

    "},{"location":"upgrade-guide.html#caching_sha2_password-is-the-default-authentication-plugin","title":"caching_sha2_password is the default authentication plugin","text":"

    In Percona XtraDB Cluster 8.0, the default authentication plugin is caching_sha2_password. The ProxySQL option --syncusers will not work if the Percona XtraDB Cluster user is created using caching_sha2_password. Use the mysql_native_password authentication plugin in these cases.
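
    For example, a sketch of creating such a user with mysql_native_password; the user name and password below are placeholders:

    mysql> CREATE USER 'pxc_user'@'%' IDENTIFIED WITH mysql_native_password BY 'PasswOrd123#';\n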

    Be sure you are running on the latest 5.7 version before you upgrade to 8.0.

    "},{"location":"upgrade-guide.html#mysql_upgrade-is-part-of-sst","title":"mysql_upgrade is part of SST","text":"

    mysql_upgrade is now run automatically as part of SST. You do not have to run it manually when upgrading your system from an older version.

    "},{"location":"upgrade-guide.html#major-upgrade-scenarios","title":"Major upgrade scenarios","text":"

    The strategy for upgrading PXC from 5.7 to 8.0 may differ slightly depending on the configuration and workload of your PXC cluster.

    Note that the new default value of pxc-encrypt-cluster-traffic (ON, versus OFF in PXC 5.7) requires additional care. You cannot join a 5.7 node to a PXC 8.0 cluster unless the node has traffic encryption enabled, because a cluster cannot run with some nodes having traffic encryption enabled and other nodes having it disabled. For more information, see Traffic encryption is enabled by default.

    "},{"location":"upgrade-guide.html#scenario-no-active-parallel-workload-or-with-read-only-workload","title":"Scenario: No active parallel workload or with read-only workload","text":"

    If there is no active parallel workload or the cluster has read-only workload while upgrading the nodes, complete the following procedure for each node in the cluster:

    1. Shut down one of the 5.7 cluster nodes.

    2. Remove the 5.7 PXC packages without removing the data directory.

    3. Install PXC 8.0 packages.

    4. Restart the mysqld service.

    Important

    Before upgrading, make sure your application can work with a reduced cluster size. If the cluster operates with an even number of nodes, the risk of a split-brain condition increases.

    This upgrade flow auto-detects the presence of the 5.7 data directory and triggers the upgrade as part of the node bootup process. The data directory is upgraded to be compatible with PXC 8.0. Then the node joins the cluster and enters the synced state. The three-node cluster is restored with two nodes running PXC 5.7 and one node running PXC 8.0.

    Note

    Since SST is not involved, the SST-based auto-upgrade flow is not started.

    PXC 8.0 uses Galera 4, while PXC 5.7 uses Galera 3. The cluster continues to use protocol version 3, used in Galera 3, which effectively limits some of the functionality. Once all nodes are upgraded to version 8.0, protocol version 4 is applied.

    Tip

    The protocol version is stored in the protocol_version column of the wsrep_cluster table.

    mysql> USE mysql;\n
    mysql> SELECT protocol_version from wsrep_cluster;\n

    The example of the output is the following:

    +------------------+\n| protocol_version |\n+------------------+\n|                4 |\n+------------------+\n1 row in set (0.00 sec)\n

    As soon as the last 5.7 node shuts down, the configuration of the remaining two nodes is updated to use protocol version 4. A new upgraded node will then join using protocol version 4 and the whole cluster will maintain protocol version 4 enabling the support for additional Galera 4 facilities.

    It may take longer for the last upgraded node to join, since it uses IST to obtain the configuration changes.

    Note

    Starting from Galera 4, the configuration changes are cached to gcache and donated as part of IST or SST to help build the certification queue on the JOINING node. The other nodes (say, n2 and n3), which already use protocol version 4, donate the configuration changes when the JOINER node is booted.

    The situation was different for the previously upgraded nodes, since protocol version 3, which they used at the time, does not support donating the configuration changes.

    With IST involved on joining the last node, the smart IST flow is triggered to take care of the upgrade even before MySQL starts to look at the data directory.

    Important

    It is not recommended to restart the last node without upgrading it.

    "},{"location":"upgrade-guide.html#scenario-upgrade-from-pxc-56-to-pxc-80","title":"Scenario: Upgrade from PXC 5.6 to PXC 8.0","text":"

    First, upgrade PXC from 5.6 to the latest version of PXC 5.7. Then proceed with the upgrade using the procedure described in Scenario: No active parallel workload or with read-only workload.

    "},{"location":"upgrade-guide.html#minor-upgrade","title":"Minor upgrade","text":"

    To upgrade the cluster, follow these steps for each node:

    1. Make sure that all nodes are synchronized.

    2. Stop the mysql service:

      $ sudo service mysql stop\n
    3. Upgrade the Percona XtraDB Cluster and Percona XtraBackup packages (a hedged example follows this procedure). For more information, see Installing Percona XtraDB Cluster.

    4. Back up grastate.dat, so that you can restore it if it is corrupted or zeroed out due to a network issue.

    5. Now, start the cluster node with the 8.0 packages installed. PXC will upgrade the data directory as needed, either as part of the startup process or during a state transfer (IST/SST).

      In most cases, starting the mysql service should run the node with your previous configuration. For more information, see Adding Nodes to Cluster.

      $ sudo service mysql start\n

      Note

      On CentOS, the /etc/my.cnf configuration file is renamed to my.cnf.rpmsave. Make sure to rename it back before joining the upgraded node back to the cluster.

      PXC Strict Mode is enabled by default, which may deny unsupported operations and may halt the server. For more information, see PXC strict mode is enabled by default.

      pxc-encrypt-cluster-traffic is enabled by default. You need to configure each node accordingly and avoid joining a cluster with unencrypted cluster traffic. For more information, see Traffic encryption is enabled by default.

    6. Repeat this procedure for the next node in the cluster until you upgrade all nodes.
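
    For step 3, a hedged example on a Red Hat-based system; the package names (percona-xtradb-cluster and percona-xtrabackup-80 here) may differ depending on your distribution and repository setup:

    $ sudo yum update percona-xtradb-cluster percona-xtrabackup-80\n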

    "},{"location":"upgrade-guide.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"verify-replication.html","title":"Verify replication","text":"

    Use the following procedure to verify replication by creating a new database on the second node, creating a table for that database on the third node, and adding some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@pxc2> CREATE DATABASE percona;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Switch to a newly created database:

      mysql@pxc3> USE percona;\n

      The following output confirms that a database has been changed:

      Expected output
      Database changed\n
    3. Create a table on the third node:

      mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));\n

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)\n
    4. Insert records on the first node:

      mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');\n

      The following output confirms that the records have been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)\n
    5. Retrieve rows from that table on the second node:

      mysql@pxc2> SELECT * FROM percona.example;\n

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+\n| node_id | node_name |\n+---------+-----------+\n|       1 | percona1  |\n+---------+-----------+\n1 row in set (0.00 sec)\n
    "},{"location":"verify-replication.html#next-steps","title":"Next steps","text":""},{"location":"verify-replication.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"virtual-sandbox.html","title":"Set up a testing environment with ProxySQL","text":"

    This section describes how to set up Percona XtraDB Cluster in a virtualized testing environment based on ProxySQL. To test the cluster, we will use the sysbench benchmark tool.

    It is assumed that each PXC node is installed on an Amazon EC2 micro instance running CentOS 7. However, the information in this section should apply if you used another virtualization technology (for example, VirtualBox) with any Linux distribution.

    Each of the three Percona XtraDB Cluster nodes is installed on a separate virtual machine. One more virtual machine runs ProxySQL, which redirects requests to the nodes.

    Tip

    Running ProxySQL on an application server, instead of having it as a dedicated entity, removes the unnecessary extra network roundtrip, because the load balancing layer in Percona XtraDB Cluster scales well with application servers.

    1. Install Percona XtraDB Cluster on three cluster nodes, as described in Configuring Percona XtraDB Cluster on CentOS.

    2. On the client node, install ProxySQL and sysbench:

      $ yum -y install proxysql2 sysbench\n
    3. When all cluster nodes are started, configure ProxySQL using the admin interface.

      Tip

      To connect to the ProxySQL admin interface, you need a mysql client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally.

      To connect to the admin interface, use the credentials, host name and port specified in the global variables.

      Warning

      Do not use default credentials in production!

      The following example shows how to connect to the ProxySQL admin interface with default credentials (assuming that ProxySQL IP is 192.168.70.74):

      root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032\n
      Expected output
      Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 2\nServer version: 5.5.30 (ProxySQL Admin Module)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n

      To see the ProxySQL databases and tables use the SHOW DATABASES and SHOW TABLES commands:

      mysql> SHOW DATABASES;\n

      The following output shows the list of the ProxySQL databases:

      Expected output
      +-----+---------------+-------------------------------------+\n| seq | name          | file                                |\n+-----+---------------+-------------------------------------+\n| 0   | main          |                                     |\n| 2   | disk          | /var/lib/proxysql/proxysql.db       |\n| 3   | stats         |                                     |\n| 4   | monitor       |                                     |\n| 5   | stats_monitor | /var/lib/proxysql/proxysql_stats.db |\n+-----+---------------+-------------------------------------+\n5 rows in set (0.00 sec)\n
      mysql> SHOW TABLES;\n

      The following output shows the list of tables:

      Expected output
      +----------------------------------------------------+\n| tables                                             |\n+----------------------------------------------------+\n| global_variables                                   |\n| mysql_aws_aurora_hostgroups                        |\n| mysql_collations                                   |\n| mysql_firewall_whitelist_rules                     |\n| mysql_firewall_whitelist_sqli_fingerprints         |\n| mysql_firewall_whitelist_users                     |\n| mysql_galera_hostgroups                            |\n| mysql_group_replication_hostgroups                 |\n| mysql_query_rules                                  |\n| mysql_query_rules_fast_routing                     |\n| mysql_replication_hostgroups                       |\n| mysql_servers                                      |\n| mysql_users                                        |\n| proxysql_servers                                   |\n| restapi_routes                                     |\n| runtime_checksums_values                           |\n| runtime_global_variables                           |\n| runtime_mysql_aws_aurora_hostgroups                |\n| runtime_mysql_firewall_whitelist_rules             |\n| runtime_mysql_firewall_whitelist_sqli_fingerprints |\n| runtime_mysql_firewall_whitelist_users             |\n| runtime_mysql_galera_hostgroups                    |\n| runtime_mysql_group_replication_hostgroups         |\n| runtime_mysql_query_rules                          |\n| runtime_mysql_query_rules_fast_routing             |\n| runtime_mysql_replication_hostgroups               |\n| runtime_mysql_servers                              |\n| runtime_mysql_users                                |\n| runtime_proxysql_servers                           |\n| runtime_restapi_routes                             |\n| runtime_scheduler                                  |\n| scheduler                                          |\n+----------------------------------------------------+\n32 rows in set (0.00 sec)\n

      For more information about admin databases and tables, see Admin Tables

      Note

      ProxySQL has 3 areas where the configuration can reside:

      • MEMORY (your current working place)

      • RUNTIME (the production settings)

      • DISK (durable configuration, saved inside an SQLITE database)

      When you change a parameter, you change it in the MEMORY area. That is done by design to allow you to test the changes before pushing them to production (RUNTIME) or saving them to disk.

    "},{"location":"virtual-sandbox.html#adding-cluster-nodes-to-proxysql","title":"Adding cluster nodes to ProxySQL","text":"

    To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.

    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.71',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.72',10,3306,1000);\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.73',10,3306,1000);\n

    ProxySQL v2.0 supports PXC natively. It uses the concept of hostgroups (see the value of hostgroup_id in the mysql_servers table) to group cluster nodes and to balance the load in a cluster by routing different types of traffic to different groups.

    This information is stored in the [runtime_]mysql_galera_hostgroups table.

    Columns of the [runtime_]mysql_galera_hostgroups table

    Column name Description writer_hostgroup The ID of the hostgroup that refers to the WRITER node backup_writer_hostgroup The ID of the hostgroup that contains candidate WRITER servers reader_hostgroup The ID of the hostgroup that contains candidate READER servers offline_hostgroup The ID of the hostgroup that will eventually contain the WRITER node that will be put OFFLINE active 1 (Yes) to indicate that this configuration should be used; 0 (No) - otherwise max_writers The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is 1. The value in this column may not exceed the total number of nodes. writer_is_also_reader 1 (Yes) to keep the given node in both reader_hostgroup and writer_hostgroup. 0 (No) to remove the given node from reader_hostgroup if it already belongs to writer_hostgroup. max_transactions_behind As soon as the value of wsrep_local_recv_queue exceeds the number stored in this column, the given node is set to OFFLINE. Set the value carefully based on the behaviour of the node. comment Helpful extra information about the given node

    Make sure that the variable mysql-server_version refers to the correct version. For Percona XtraDB Cluster 8.0, set it to 8.0 accordingly:

    mysql> UPDATE GLOBAL_VARIABLES\nSET variable_value='8.0'\nWHERE variable_name='mysql-server_version';\n\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n

    See also

    Percona Blogpost: ProxySQL Native Support for Percona XtraDB Cluster (PXC) https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/

    Given the nodes from the mysql_servers table, you may set up the hostgroups as follows:

    mysql> INSERT INTO mysql_galera_hostgroups (\nwriter_hostgroup, backup_writer_hostgroup, reader_hostgroup,\noffline_hostgroup, active, max_writers, writer_is_also_reader,\nmax_transactions_behind)\nVALUES (10, 12, 11, 13, 1, 1, 2, 100);\n

    This command configures ProxySQL as follows:

    WRITER hostgroup

    hostgroup `10`\n

    READER hostgroup

    hostgroup `11`\n

    BACKUP WRITER hostgroup

    hostgroup `12`\n

    OFFLINE hostgroup

    hostgroup `13`\n

    Set up ProxySQL query rules for read/write split using the mysql_query_rules table:

    mysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',10,1,'^SELECT.*FOR UPDATE',1);\n\nmysql> INSERT INTO mysql_query_rules (\nusername,destination_hostgroup,active,match_digest,apply)\nVALUES ('appuser',11,1,'^SELECT ',1);\n\nmysql> LOAD MYSQL QUERY RULES TO RUNTIME;\nmysql> SAVE MYSQL QUERY RULES TO DISK;\n\nmysql> select hostgroup_id,hostname,port,status,weight from runtime_mysql_servers;\n
    Expected output
    +--------------+----------------+------+--------+--------+\n| hostgroup_id | hostname       | port | status | weight |\n+--------------+----------------+------+--------+--------+\n| 10           | 192.168.70.73 | 3306  | ONLINE | 1000   |\n| 11           | 192.168.70.72 | 3306  | ONLINE | 1000   |\n| 11           | 192.168.70.71 | 3306  | ONLINE | 1000   |\n| 12           | 192.168.70.72 | 3306  | ONLINE | 1000   |\n| 12           | 192.168.70.71 | 3306  | ONLINE | 1000   |\n+--------------+----------------+------+--------+--------+\n5 rows in set (0.00 sec)\n

    See also

    ProxySQL Blog: MySQL read/write split with ProxySQL https://proxysql.com/blog/configure-read-write-split/ ProxySQL Documentation: mysql_query_rules table https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules

    "},{"location":"virtual-sandbox.html#proxysql-failover-behavior","title":"ProxySQL failover behavior","text":"

    Notice that all servers were inserted into the mysql_servers table with hostgroup_id set to 10, the WRITER hostgroup (see the value of the hostgroup_id column):

    mysql> SELECT * FROM mysql_servers;\n
    Expected output
    +--------------+---------------+------+--------+     +---------+\n| hostgroup_id | hostname      | port | weight | ... | comment |\n+--------------+---------------+------+--------+     +---------+\n| 10           | 192.168.70.71 | 3306 | 1000   |     |         |\n| 10           | 192.168.70.72 | 3306 | 1000   |     |         |\n| 10           | 192.168.70.73 | 3306 | 1000   |     |         |\n+--------------+---------------+------+--------+     +---------+\n3 rows in set (0.00 sec)\n

    This configuration implies that ProxySQL elects the writer automatically. If the elected writer goes offline, ProxySQL assigns another one (failover). You might tweak this mechanism by assigning a higher weight to a selected node; ProxySQL then directs all write requests to this node. However, it also becomes the most utilized node for read requests. In case of a failback (a node is put back online), the node with the highest weight is automatically elected for write requests.
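
    A hedged sketch of assigning a higher weight to one node from the ProxySQL admin interface, reusing an address from the earlier examples; the weight value is illustrative only:

    mysql> UPDATE mysql_servers SET weight=10000 WHERE hostname='192.168.70.71';\nmysql> LOAD MYSQL SERVERS TO RUNTIME;\nmysql> SAVE MYSQL SERVERS TO DISK;\n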

    "},{"location":"virtual-sandbox.html#creating-a-proxysql-monitoring-user","title":"Creating a ProxySQL monitoring user","text":"

    To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE privilege on any node in the cluster and configure the user in ProxySQL.

    The following example shows how to add a monitoring user on Node 2:

    mysql> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'ProxySQLPa55';\nmysql> GRANT USAGE ON *.* TO 'proxysql'@'%';\n

    The following example shows how to configure this user on the ProxySQL node:

    mysql> UPDATE global_variables SET variable_value='proxysql'\nWHERE variable_name='mysql-monitor_username';\n\nmysql> UPDATE global_variables SET variable_value='ProxySQLPa55'\nWHERE variable_name='mysql-monitor_password';\n
    "},{"location":"virtual-sandbox.html#saving-and-loading-the-configuration","title":"Saving and loading the configuration","text":"

    To load this configuration at runtime, issue the LOAD command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue the SAVE command.

    mysql> LOAD MYSQL VARIABLES TO RUNTIME;\nmysql> SAVE MYSQL VARIABLES TO DISK;\n

    To ensure that monitoring is enabled, check the monitoring logs:

    mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+----------------------+---------------+\n| hostname      | port | time_start_us    | connect_success_time | connect_error |\n+---------------+------+------------------+----------------------+---------------+\n| 192.168.70.71 | 3306 | 1469635762434625 | 1695                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635762434625 | 1779                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635762434625 | 1627                 | NULL          |\n| 192.168.70.71 | 3306 | 1469635642434517 | 1557                 | NULL          |\n| 192.168.70.72 | 3306 | 1469635642434517 | 2737                 | NULL          |\n| 192.168.70.73 | 3306 | 1469635642434517 | 1447                 | NULL          |\n+---------------+------+------------------+----------------------+---------------+\n6 rows in set (0.00 sec)\n
    mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;\n
    Expected output
    +---------------+------+------------------+-------------------+------------+\n| hostname      | port | time_start_us    | ping_success_time | ping_error |\n+---------------+------+------------------+-------------------+------------+\n| 192.168.70.71 | 3306 | 1469635762416190 | 948               | NULL       |\n| 192.168.70.72 | 3306 | 1469635762416190 | 803               | NULL       |\n| 192.168.70.73 | 3306 | 1469635762416190 | 711               | NULL       |\n| 192.168.70.71 | 3306 | 1469635702416062 | 783               | NULL       |\n| 192.168.70.72 | 3306 | 1469635702416062 | 631               | NULL       |\n| 192.168.70.73 | 3306 | 1469635702416062 | 542               | NULL       |\n+---------------+------+------------------+-------------------+------------+\n6 rows in set (0.00 sec)\n

    The previous examples show that ProxySQL is able to connect and ping the nodes you added.

    To enable monitoring of these nodes, load them at runtime:

    mysql> LOAD MYSQL SERVERS TO RUNTIME;\n
    "},{"location":"virtual-sandbox.html#creating-proxysql-client-user","title":"Creating ProxySQL Client User","text":"

    ProxySQL must have users that can access backend nodes to manage connections.

    To add a user, insert the credentials into the mysql_users table:

    mysql> INSERT INTO mysql_users (username,password) VALUES ('appuser','$3kRetp@$sW0rd');\n

    The example of the output is the following:

    Expected output
    Query OK, 1 row affected (0.00 sec)\n

    Note

    ProxySQL currently doesn\u2019t encrypt passwords.

    See also

    More information about password encryption in ProxySQL

    Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):

    mysql> LOAD MYSQL USERS TO RUNTIME;\nmysql> SAVE MYSQL USERS TO DISK;\n

    To confirm that the user has been set up correctly, you can try to log in:

    root@proxysql:~# mysql -u appuser -p$3kRetp@$sW0rd -h 127.0.0.1 -P 6033\n
    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 1491\nServer version: 5.5.30 (ProxySQL)\n\nCopyright (c) 2009-2020 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n

    To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

    mysql> CREATE USER 'appuser'@'192.168.70.74'\nIDENTIFIED WITH mysql_native_password by '$3kRetp@$sW0rd';\n\nmysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74';\n
    "},{"location":"virtual-sandbox.html#testing-the-cluster-with-the-sysbench-benchmark-tool","title":"Testing the cluster with the sysbench benchmark tool","text":"

    After you set up Percona XtraDB Cluster in your testing environment, you can test it using the sysbench benchmarking tool.

    1. Create a database (sysbenchdb in this example; you can use a different name):

      mysql> CREATE DATABASE sysbenchdb;\n

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)\n
    2. Populate the table with data for the benchmark. Note that you should pass the database you have created as the value of the --mysql-db parameter, and the name of the user who has full access to this database as the value of the --mysql-user parameter:

      $ sysbench /usr/share/sysbench/oltp_insert.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--table-size=1000 prepare\n
    3. Run the benchmark on port 6033:

      $ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \\\n--mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user=appuser \\\n--mysql-password=$3kRetp@$sW0rd --db-driver=mysql --threads=10 --tables=10 \\\n--skip-trx=true --table-size=1000 --time=100 --report-interval=10 run\n

    Related sections and additional reading

    "},{"location":"virtual-sandbox.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-files-index.html","title":"Index of files created by PXC","text":"

    The gvwstate.dat file is used for the Primary Component recovery feature. This file is created once a primary component is formed or changed, so you can get the latest primary component this node was in. The file is deleted when the node is shut down gracefully.

    The first part contains the node UUID information. The second part contains the view information, which is written between #vwbeg and #vwend. The view information consists of:

    * view_id: [view_type] [view_uuid] [view_seq]. - `view_type` is always `3` which means primary view. `view_uuid` and `view_seq` identify a unique view, which could be perceived as the identifier of this primary component.\n\n* bootstrap: [bootstrap_or_not]. - it could be `0` or `1`, but it does not affect the primary component recovery process now.\n\n* member: [node\u2019s uuid] [node\u2019s segment]. - it represents all nodes in this primary component.\n\n??? example \"Example of the file\"\n\n    ```{.text .no-copy}\n    my_uuid: c5d5d990-30ee-11e4-aab1-46d0ed84b408\n    #vwbeg\n    view_id: 3 bc85bd53-31ac-11e4-9895-1f2ce13f2542 2 \n    bootstrap: 0\n    member: bc85bd53-31ac-11e4-9895-1f2ce13f2542 0\n    member: c5d5d990-30ee-11e4-aab1-46d0ed84b408 0\n    #vwend\n    ```\n
    "},{"location":"wsrep-files-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-provider-index.html","title":"Index of wsrep_provider options","text":"

    The following variables can be set and checked in the wsrep_provider_options variable. The value of the variable can be changed in the MySQL configuration file, my.cnf, or by setting the variable value in the MySQL client.

    To change the value in my.cnf, the following syntax should be used:

    wsrep_provider_options=\"variable1=value1;[variable2=value2]\"\n

    For example, to set the size of the Galera buffer storage to 512 MB, specify the following in my.cnf:

    wsrep_provider_options=\"gcache.size=512M\"\n

    Dynamic variables can be changed from the MySQL client using the SET GLOBAL command. For example, to change the value of pc.ignore_sb, use the following command:

    mysql> SET GLOBAL wsrep_provider_options=\"pc.ignore_sb=true\";\n
    "},{"location":"wsrep-provider-index.html#index","title":"Index","text":""},{"location":"wsrep-provider-index.html#base_dir","title":"base_dir","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of datadir

    This variable specifies the data directory.

    "},{"location":"wsrep-provider-index.html#base_host","title":"base_host","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address

    This variable sets the value of the node\u2019s base IP. This is an IP address on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.

    "},{"location":"wsrep-provider-index.html#base_port","title":"base_port","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 4567

    This variable sets the port on which Galera listens for connections from other nodes. Setting this value incorrectly would stop the node from communicating with other nodes.

    "},{"location":"wsrep-provider-index.html#certlog_conflicts","title":"cert.log_conflicts","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: no

    This variable is used to specify if the details of the certification failures should be logged.

    "},{"location":"wsrep-provider-index.html#certoptimistic_pa","title":"cert.optimistic_pa","text":"

    Enabled

    Allows the full range of parallelization as determined by the certification\nalgorithm.\n

    Disabled

    Limits the parallel applying window so that it does not exceed the parallel\napplying window seen on the source. In this case, the action starts applying\nno sooner than all actions on the source are committed.\n
    Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: No

    See also

    Galera Cluster Documentation: * Parameter: cert.optimistic_pa * Setting parallel slave threads

    "},{"location":"wsrep-provider-index.html#debug","title":"debug","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: no

    When this variable is set to yes, it will enable debugging.

    "},{"location":"wsrep-provider-index.html#evsauto_evict","title":"evs.auto_evict","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0

    Number of entries allowed on the delayed list until auto eviction takes place. Setting the value to 0 disables the auto eviction protocol on the node, though node response times are still monitored. EVS protocol version (evs.version) 1 is required to enable auto eviction.

    "},{"location":"wsrep-provider-index.html#evscausal_keepalive_period","title":"evs.causal_keepalive_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of evs.keepalive_period

    This variable is used for development purposes and shouldn\u2019t be used by regular users.

    "},{"location":"wsrep-provider-index.html#evsdebug_log_mask","title":"evs.debug_log_mask","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 0x1

    This variable is used for EVS (Extended Virtual Synchrony) debugging. It can be used only when wsrep_debug is set to ON.

    "},{"location":"wsrep-provider-index.html#evsdelay_margin","title":"evs.delay_margin","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT1S

    Time period that a node can delay its response from expected until it is added to delayed list. The value must be higher than the highest RTT between nodes.

    "},{"location":"wsrep-provider-index.html#evsdelayed_keep_period","title":"evs.delayed_keep_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S

    Time period that node is required to remain responsive until one entry is removed from delayed list.

    "},{"location":"wsrep-provider-index.html#evsevict","title":"evs.evict","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes

    Manual eviction can be triggered by setting the evs.evict to a certain node value. Setting the evs.evict to an empty string will clear the evict list on the node where it was set.
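
    For example (a sketch; replace the placeholder with the UUID of the node you want to evict). The first statement evicts the node, and the second clears the eviction list:

    mysql> SET GLOBAL wsrep_provider_options=\"evs.evict=<node_uuid>\";\nmysql> SET GLOBAL wsrep_provider_options=\"evs.evict=\";\n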

    "},{"location":"wsrep-provider-index.html#evsinactive_check_period","title":"evs.inactive_check_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0.5S

    This variable defines how often to check for peer inactivity.

    "},{"location":"wsrep-provider-index.html#evsinactive_timeout","title":"evs.inactive_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT15S

    This variable defines the inactivity limit, once this limit is reached the node will be considered dead.

    "},{"location":"wsrep-provider-index.html#evsinfo_log_mask","title":"evs.info_log_mask","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable is used for controlling the extra EVS info logging.

    "},{"location":"wsrep-provider-index.html#evsinstall_timeout","title":"evs.install_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT7.5S

    This variable defines the timeout on waiting for install message acknowledgments.

    "},{"location":"wsrep-provider-index.html#evsjoin_retrans_period","title":"evs.join_retrans_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S

    This variable defines how often to retransmit EVS join messages when forming cluster membership.

    "},{"location":"wsrep-provider-index.html#evskeepalive_period","title":"evs.keepalive_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1S

    This variable defines how often to emit keepalive beacons (in the absence of any other traffic).

    "},{"location":"wsrep-provider-index.html#evsmax_install_timeouts","title":"evs.max_install_timeouts","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1

    This variable defines how many membership install rounds to try before giving up (total rounds will be evs.max_install_timeouts + 2).

    "},{"location":"wsrep-provider-index.html#evssend_window","title":"evs.send_window","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 10

    This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example, 512). The value must not be less than evs.user_send_window.

    "},{"location":"wsrep-provider-index.html#evsstats_report_period","title":"evs.stats_report_period","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT1M

    This variable defines the control period of EVS statistics reporting.

    "},{"location":"wsrep-provider-index.html#evssuspect_timeout","title":"evs.suspect_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S

    This variable defines the inactivity period after which the node is \u201csuspected\u201d to be dead. If all remaining nodes agree on that, the node will be dropped out of cluster even before evs.inactive_timeout is reached.

    "},{"location":"wsrep-provider-index.html#evsuse_aggregate","title":"evs.use_aggregate","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When this variable is enabled, smaller packets will be aggregated into one.

    "},{"location":"wsrep-provider-index.html#evsuser_send_window","title":"evs.user_send_window","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 4

    This variable defines the maximum number of data packets in replication at a time. For WAN setups, the variable can be set to a considerably higher value than default (for example, 512).
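
    A hedged my.cnf sketch for a WAN setup, using the example value mentioned above; keep evs.send_window no smaller than evs.user_send_window:

    wsrep_provider_options=\"evs.send_window=512;evs.user_send_window=512\"\n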

    "},{"location":"wsrep-provider-index.html#evsversion","title":"evs.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable defines the EVS protocol version. Auto eviction is enabled when this variable is set to 1. Default 0 is set for backwards compatibility.

    "},{"location":"wsrep-provider-index.html#evsview_forget_timeout","title":"evs.view_forget_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: P1D

    This variable defines the timeout after which past views will be dropped from history.

    "},{"location":"wsrep-provider-index.html#gcachedir","title":"gcache.dir","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: datadir

    This variable can be used to define the location of the galera.cache file.

    "},{"location":"wsrep-provider-index.html#gcachefreeze_purge_at_seqno","title":"gcache.freeze_purge_at_seqno","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0

    This variable controls the purging of the gcache and enables retaining more data in it. This variable makes it possible to use IST (Incremental State Transfer) when the node rejoins instead of SST (State Snapshot Transfer).

    Set this variable on an existing node of the cluster (that will continue to be part of the cluster and can act as a potential donor node). This node continues to retain the write-sets and allows restarting the node to rejoin by using IST.

    See also

    Percona Database Performance Blog:

    The gcache.freeze_purge_at_seqno variable takes three values:

    -1 (default)

    No freezing of gcache, the purge operates as normal.

    A valid seqno in gcache

    The freeze purge of write-sets may not be smaller than the selected seqno. The best way to select an optimal value is to use the value of the wsrep_last_applied variable from the node that you plan to shut down.

    now

    The freeze purge of write-sets is no less than the smallest seqno currently in gcache. Using this value results in freezing the gcache purge instantly. Use this value if selecting a valid seqno in gcache is difficult.
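
    A sketch of freezing the purge from the MySQL client before taking a node down for maintenance; now is the convenient form when picking a specific seqno is difficult:

    mysql> SET GLOBAL wsrep_provider_options=\"gcache.freeze_purge_at_seqno=now\";\n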

    "},{"location":"wsrep-provider-index.html#gcachekeep_pages_count","title":"gcache.keep_pages_count","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: Yes Default Value: 0

    This variable is used to limit the number of overflow pages rather than the total memory occupied by all overflow pages. Whenever gcache.keep_pages_count is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest).

    Whenever either the gcache.keep_pages_count or the gcache.keep_pages_size variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.

    "},{"location":"wsrep-provider-index.html#gcachekeep_pages_size","title":"gcache.keep_pages_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Local, Global Dynamic: No Default Value: 0

    This variable is used to limit the total size of overflow pages rather than the count of all overflow pages. Whenever gcache.keep_pages_size is set to a non-zero value, excess overflow pages will be deleted (starting from the oldest to the newest) until the total size is below the specified value.

    Whenever either the gcache.keep_pages_count or the gcache.keep_pages_size variable is updated at runtime to a non-zero value, cleanup is called on excess overflow pages to delete them.

    "},{"location":"wsrep-provider-index.html#gcachemem_size","title":"gcache.mem_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable has been deprecated in 5.6.22-25.8 and shouldn\u2019t be used as it could cause a node to crash.

    This variable was used to define how much RAM is available for the system.

    "},{"location":"wsrep-provider-index.html#gcachename","title":"gcache.name","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql/galera.cache

    This variable can be used to specify the name of the Galera cache file.

    "},{"location":"wsrep-provider-index.html#gcachepage_size","title":"gcache.page_size","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: 128M

    Size of the page files in page storage. The limit on overall page storage is the size of the disk. Pages are prefixed by gcache.page.

    See also

    "},{"location":"wsrep-provider-index.html#gcacherecover","title":"gcache.recover","text":"Option Description Command line: No Configuration file: Yes Scope: Global Dynamic: No Default value: No

    Attempts to recover a node\u2019s gcache file to a usable state on startup. If the node can successfully recover the gcache file, the node can provide IST to the remaining nodes. This ability can reduce the time needed to bring up the cluster.

    An example of enabling the variable in the configuration file:

    wsrep_provider_options=\"gcache.recover=yes\"\n
    "},{"location":"wsrep-provider-index.html#gcachesize","title":"gcache.size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 128M

    Size of the transaction cache for Galera replication. This defines the size of the galera.cache file, which is used as the source for IST. The bigger the value of this variable, the better the chances that the rejoining node will get IST instead of SST.

    "},{"location":"wsrep-provider-index.html#gcommthread_prio","title":"gcomm.thread_prio","text":"

    Using this option, you can raise the priority of the gcomm thread to a higher level than it normally uses.

    The format for this variable is: <policy>:<priority>. The priority value is an integer.

    other

    Default time-sharing scheduling in Linux. The threads can run\nuntil blocked by an I/O request or preempted by higher priorities or\nsuperior scheduling designations.\n

    fifo

    First-in First-out (FIFO) scheduling. These threads always immediately\npreempt any currently running other, batch or idle threads. They can run\nuntil they are either blocked by an I/O request or preempted by a FIFO thread\nof a higher priority.\n

    rr

    Round-robin scheduling. These threads always preempt any currently running\nother, batch or idle threads. The scheduler allows these threads to run for a\nfixed period of a time. If the thread is still running when this time period is\nexceeded, they are stopped and moved to the end of the list, allowing another\nround-robin thread of the same priority to run in their place. They can\notherwise continue to run until they are blocked by an I/O request or are\npreempted by threads of a higher priority.\n
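
    A sketch of the format in my.cnf, requesting round-robin scheduling with priority 2; whether the setting takes effect depends on the operating system's scheduling privileges:

    wsrep_provider_options=\"gcomm.thread_prio=rr:2\"\n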

    See also

    For information, see the Galera Cluster documentation

    "},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_threshold","title":"gcs.fc_auto_evict_threshold","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0.75

    Implemented in Percona XtraDB Cluster 8.0.33-25.

    Defines the threshold that must be reached or crossed before a node is evicted from the cluster. This variable is a ratio of the gcs.fc_auto_evict_window variable. The default value is 0.75, but the value can be set to any value between 0.0 and 1.0.

    "},{"location":"wsrep-provider-index.html#gcsfc_auto_evict_window","title":"gcs.fc_auto_evict_window","text":"Option Description Command Line: Yes Config file : Yes Scope: Global Dynamic: No Default value: 0

    Implemented in Percona XtraDB Cluster 8.0.33-25.

The variable defines the width of the time window within which flow control events are observed. The time span of the window is [now - gcs.fc_auto_evict_window, now]. The window constantly moves ahead as time passes. If, within this window, the flow control summary time >= (gcs.fc_auto_evict_window * gcs.fc_auto_evict_threshold), the node self-leaves the cluster.

    The default value is 0, which means that the feature is disabled.

    The maximum value is DBL_MAX.
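
A possible configuration sketch that enables the feature with an illustrative window value of 30 and the default threshold (multiple provider options are separated by semicolons):

wsrep_provider_options=\"gcs.fc_auto_evict_window=30;gcs.fc_auto_evict_threshold=0.75\"\n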

    "},{"location":"wsrep-provider-index.html#gcsfc_debug","title":"gcs.fc_debug","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable specifies after how many writesets the debug statistics about SST flow control will be posted.

    "},{"location":"wsrep-provider-index.html#gcsfc_factor","title":"gcs.fc_factor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    This variable is used for replication flow control. Replication is resumed when the replica queue drops below gcs.fc_factor * gcs.fc_limit.

    "},{"location":"wsrep-provider-index.html#gcsfc_limit","title":"gcs.fc_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 100

This variable is used for replication flow control. Replication is paused when the replica queue exceeds this limit. In the default operation mode, the flow control limit is dynamically recalculated based on the number of nodes in the cluster, but this recalculation can be turned off with the gcs.fc_master_slave variable so that a manually set gcs.fc_limit takes effect (for example, in configurations where writing is done to a single node in Percona XtraDB Cluster).
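
For example, a configuration sketch that raises the limit and keeps it from being recalculated in a single-writer setup (the values are illustrative):

wsrep_provider_options=\"gcs.fc_limit=160;gcs.fc_master_slave=YES\"\n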

    "},{"location":"wsrep-provider-index.html#gcsfc_master_slave","title":"gcs.fc_master_slave","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: NO Default Value: NO

This variable specifies whether there is only one source node in the cluster. It determines whether the flow control limit is recalculated dynamically (when set to NO) or not (when set to YES).

    "},{"location":"wsrep-provider-index.html#gcsmax_packet_size","title":"gcs.max_packet_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 64500

This variable specifies the writeset size (in bytes) above which writesets are fragmented.

    "},{"location":"wsrep-provider-index.html#gcsmax_throttle","title":"gcs.max_throttle","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25

This variable specifies how much replication can be throttled during state transfer in order to avoid running out of memory. The value can be set to 0.0 if stopping replication is acceptable in order to finish the state transfer.

    "},{"location":"wsrep-provider-index.html#gcsrecv_q_hard_limit","title":"gcs.recv_q_hard_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 9223372036854775807

    This variable specifies the maximum allowed size of the receive queue. This should normally be (RAM + swap) / 2. If this limit is exceeded, Galera will abort the server.

    "},{"location":"wsrep-provider-index.html#gcsrecv_q_soft_limit","title":"gcs.recv_q_soft_limit","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0.25

    This variable specifies the fraction of the gcs.recv_q_hard_limit after which replication rate will be throttled.

    "},{"location":"wsrep-provider-index.html#gcssync_donor","title":"gcs.sync_donor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No

This variable controls whether the rest of the cluster should be in sync with the donor node. When this variable is set to YES, the whole cluster is blocked if the donor node is blocked by SST.

    "},{"location":"wsrep-provider-index.html#gmcastlisten_addr","title":"gmcast.listen_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: tcp://0.0.0.0:4567

    This variable defines the address on which the node listens to connections from other nodes in the cluster.
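
For example, a configuration sketch that binds the listener to a specific interface and a non-default port (the address and port are illustrative):

wsrep_provider_options=\"gmcast.listen_addr=tcp://192.168.0.1:5678\"\n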

    "},{"location":"wsrep-provider-index.html#gmcastmcast_addr","title":"gmcast.mcast_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: None

Set this variable if UDP multicast should be used for replication.

    "},{"location":"wsrep-provider-index.html#gmcastmcast_ttl","title":"gmcast.mcast_ttl","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 1

    This variable can be used to define TTL for multicast packets.

    "},{"location":"wsrep-provider-index.html#gmcastpeer_timeout","title":"gmcast.peer_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S

    This variable specifies the connection timeout to initiate message relaying.

    "},{"location":"wsrep-provider-index.html#gmcastsegment","title":"gmcast.segment","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

This variable specifies the group segment this member should be a part of. Members of the same segment are treated as equally physically close.

    "},{"location":"wsrep-provider-index.html#gmcasttime_wait","title":"gmcast.time_wait","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT5S

This variable specifies the time to wait before allowing a peer that was declared outside of the stable view to reconnect.

    "},{"location":"wsrep-provider-index.html#gmcastversion","title":"gmcast.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This variable shows which gmcast protocol version is being used.

    "},{"location":"wsrep-provider-index.html#istrecv_addr","title":"ist.recv_addr","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: value of wsrep_node_address

    This variable specifies the address on which the node listens for Incremental State Transfer (IST).

    "},{"location":"wsrep-provider-index.html#pcannounce_timeout","title":"pc.announce_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT3S

    Cluster joining announcements are sent every \u00bd second for this period of time or less if other nodes are discovered.

    "},{"location":"wsrep-provider-index.html#pcchecksum","title":"pc.checksum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    This variable controls whether replicated messages should be checksummed or not.

    "},{"location":"wsrep-provider-index.html#pcignore_quorum","title":"pc.ignore_quorum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false

When this variable is set to TRUE, the node completely ignores quorum calculations. This should be used with extreme caution even in source-replica setups, because replicas won\u2019t automatically reconnect to the source in this case.

    "},{"location":"wsrep-provider-index.html#pcignore_sb","title":"pc.ignore_sb","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: false

When this variable is set to TRUE, the node processes updates even in the case of a split brain. This should be used with extreme caution in a multi-source setup, but it can simplify things in a source-replica cluster (especially if only 2 nodes are used).

    "},{"location":"wsrep-provider-index.html#pclinger","title":"pc.linger","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT20S

    This variable specifies the period for which the PC protocol waits for EVS termination.

    "},{"location":"wsrep-provider-index.html#pcnpvo","title":"pc.npvo","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: false

    When this variable is set to TRUE, more recent primary components override older ones in case of conflicting primaries.

    "},{"location":"wsrep-provider-index.html#pcrecovery","title":"pc.recovery","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When this variable is set to true, the node stores the Primary Component state to disk. The Primary Component can then recover automatically when all nodes that were part of the last saved state re-establish communication with each other. This feature allows automatic recovery from full cluster crashes, such as in the case of a data center power outage. A subsequent graceful full cluster restart will require explicit bootstrapping for a new Primary Component.

    "},{"location":"wsrep-provider-index.html#pcversion","title":"pc.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This status variable is used to check which PC protocol version is used.

    "},{"location":"wsrep-provider-index.html#pcwait_prim","title":"pc.wait_prim","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: true

    When set to TRUE, the node waits for a primary component for the period of time specified in pc.wait_prim_timeout. This is useful to bring up a non-primary component and make it primary with pc.bootstrap.

    "},{"location":"wsrep-provider-index.html#pcwait_prim_timeout","title":"pc.wait_prim_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT30S

    This variable is used to specify the period of time to wait for a primary component.

    "},{"location":"wsrep-provider-index.html#pcwait_restored_prim_timeout","title":"pc.wait_restored_prim_timeout","text":"

    Introduced in Percona XtraDB Cluster 8.0.33-25.

Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: PT0S

    This variable specifies the wait period for a primary component when the cluster restores the primary component from the gvwstate.dat file after an outage.

The default value is PT0S (zero seconds), which means the node waits for an infinite time (the pre-existing behavior).

You can define a wait time in the PTNS format, where you replace N with the number of seconds. For example, to wait for 90 seconds, set the value to PT90S.
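
As a configuration-file sketch, waiting 90 seconds could look like this:

wsrep_provider_options=\"pc.wait_restored_prim_timeout=PT90S\"\n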

    "},{"location":"wsrep-provider-index.html#pcweight","title":"pc.weight","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    This variable specifies the node weight that\u2019s going to be used for Weighted Quorum calculations.

    "},{"location":"wsrep-provider-index.html#protonetbackend","title":"protonet.backend","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: asio

    This variable is used to define which transport backend should be used. Currently only ASIO is supported.

    "},{"location":"wsrep-provider-index.html#protonetversion","title":"protonet.version","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 0

    This status variable is used to check which transport backend protocol version is used.

    "},{"location":"wsrep-provider-index.html#replcausal_read_timeout","title":"repl.causal_read_timeout","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: PT30S

    This variable specifies the causal read timeout.

    "},{"location":"wsrep-provider-index.html#replcommit_order","title":"repl.commit_order","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 3

    This variable is used to specify out-of-order committing (which is used to improve parallel applying performance). The following values are available:

    "},{"location":"wsrep-provider-index.html#replkey_format","title":"repl.key_format","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes Default Value: FLAT8

    This variable is used to specify the replication key format. The following values are available:

    "},{"location":"wsrep-provider-index.html#replmax_ws_size","title":"repl.max_ws_size","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2147483647

This variable is used to specify the maximum size of a write-set in bytes. This is limited to 2 gigabytes.

    "},{"location":"wsrep-provider-index.html#replproto_max","title":"repl.proto_max","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 7

    This variable is used to specify the highest communication protocol version to accept in the cluster. Used only for debugging.

    "},{"location":"wsrep-provider-index.html#socketchecksum","title":"socket.checksum","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: 2

    This variable is used to choose the checksum algorithm for network packets. The CRC32-C option is optimized and may be hardware accelerated on Intel CPUs. The following values are available:

    The following is an example of the variable use:

    wsrep_provider_options=\"socket.checksum=2\"\n
    "},{"location":"wsrep-provider-index.html#socketssl","title":"socket.ssl","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: No

    This variable is used to specify if SSL encryption should be used.

    "},{"location":"wsrep-provider-index.html#socketssl_ca","title":"socket.ssl_ca","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the Certificate Authority (CA) certificate file.

    "},{"location":"wsrep-provider-index.html#socketssl_cert","title":"socket.ssl_cert","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the server\u2019s certificate file (in PEM format).

    "},{"location":"wsrep-provider-index.html#socketssl_key","title":"socket.ssl_key","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No

    This variable is used to specify the path to the server\u2019s private key file (in PEM format).
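
The following configuration sketch shows how the socket.ssl_ca, socket.ssl_cert, and socket.ssl_key paths might be set together when configuring encryption manually; the file paths are placeholders:

wsrep_provider_options=\"socket.ssl_ca=/etc/mysql/certs/ca.pem;socket.ssl_cert=/etc/mysql/certs/server-cert.pem;socket.ssl_key=/etc/mysql/certs/server-key.pem\"\n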

    "},{"location":"wsrep-provider-index.html#socketssl_compression","title":"socket.ssl_compression","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: Yes

This variable is used to specify whether SSL compression is used.

    "},{"location":"wsrep-provider-index.html#socketssl_cipher","title":"socket.ssl_cipher","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: No Default Value: AES128-SHA

This variable is used to specify which cipher is used for encryption.

    "},{"location":"wsrep-provider-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-status-index.html","title":"Index of wsrep status variables","text":""},{"location":"wsrep-status-index.html#wsrep_apply_oooe","title":"wsrep_apply_oooe","text":"

This variable shows parallelization efficiency: how often writesets have been applied out of order.

    See also

    Galera status variable: wsrep_apply_oooe

    "},{"location":"wsrep-status-index.html#wsrep_apply_oool","title":"wsrep_apply_oool","text":"

    This variable shows how often a writeset with a higher sequence number was applied before one with a lower sequence number.

    See also

    Galera status variable: wsrep_apply_oool

    "},{"location":"wsrep-status-index.html#wsrep_apply_window","title":"wsrep_apply_window","text":"

    Average distance between highest and lowest concurrently applied sequence numbers.

    See also

    Galera status variable: wsrep_apply_window

    "},{"location":"wsrep-status-index.html#wsrep_causal_reads","title":"wsrep_causal_reads","text":"

    Shows the number of writesets processed while the variable wsrep_causal_reads was set to ON.

    See also

    MySQL wsrep options: wsrep_causal_reads

    "},{"location":"wsrep-status-index.html#wsrep_cert_bucket_count","title":"wsrep_cert_bucket_count","text":"

This variable shows the number of cells in the certification index hash table.

    "},{"location":"wsrep-status-index.html#wsrep_cert_deps_distance","title":"wsrep_cert_deps_distance","text":"

    Average distance between highest and lowest sequence number that can be possibly applied in parallel.

    See also

    Galera status variable: wsrep_cert_deps_distance

    "},{"location":"wsrep-status-index.html#wsrep_cert_index_size","title":"wsrep_cert_index_size","text":"

    Number of entries in the certification index.

    See also

    Galera status variable: wsrep_cert_index_size

    "},{"location":"wsrep-status-index.html#wsrep_cert_interval","title":"wsrep_cert_interval","text":"

    Average number of write-sets received while a transaction replicates.

    See also

    Galera status variable: wsrep_cert_interval

    "},{"location":"wsrep-status-index.html#wsrep_cluster_conf_id","title":"wsrep_cluster_conf_id","text":"

    Number of cluster membership changes that have taken place.

    See also

    Galera status variable: wsrep_cluster_conf_id

    "},{"location":"wsrep-status-index.html#wsrep_cluster_size","title":"wsrep_cluster_size","text":"

    Current number of nodes in the cluster.
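
As an illustration, you can check this status variable (like any other wsrep status variable) with SHOW STATUS; the output below assumes a three-node cluster:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';\n
Expected output
+--------------------+-------+\n| Variable_name      | Value |\n+--------------------+-------+\n| wsrep_cluster_size | 3     |\n+--------------------+-------+\n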

    See also

    Galera status variable: wsrep_cluster_size

    "},{"location":"wsrep-status-index.html#wsrep_cluster_state_uuid","title":"wsrep_cluster_state_uuid","text":"

This variable contains the UUID state of the cluster. When this value is the same as the one in wsrep_local_state_uuid, the node is synced with the cluster.

    See also

    Galera status variable: wsrep_cluster_state_uuid

    "},{"location":"wsrep-status-index.html#wsrep_cluster_status","title":"wsrep_cluster_status","text":"

    Status of the cluster component. Possible values are:

    See also

    Galera status variable: wsrep_cluster_status

    "},{"location":"wsrep-status-index.html#wsrep_commit_oooe","title":"wsrep_commit_oooe","text":"

    This variable shows how often a transaction was committed out of order.

    See also

    Galera status variable: wsrep_commit_oooe

    "},{"location":"wsrep-status-index.html#wsrep_commit_oool","title":"wsrep_commit_oool","text":"

    This variable currently has no meaning.

    See also

    Galera status variable: wsrep_commit_oool

    "},{"location":"wsrep-status-index.html#wsrep_commit_window","title":"wsrep_commit_window","text":"

    Average distance between highest and lowest concurrently committed sequence number.

    See also

    Galera status variable: wsrep_commit_window

    "},{"location":"wsrep-status-index.html#wsrep_connected","title":"wsrep_connected","text":"

    This variable shows if the node is connected to the cluster. If the value is OFF, the node has not yet connected to any of the cluster components. This may be due to misconfiguration.

    See also

    Galera status variable: wsrep_connected

    "},{"location":"wsrep-status-index.html#wsrep_evs_delayed","title":"wsrep_evs_delayed","text":"

    Comma separated list of nodes that are considered delayed. The node format is <uuid>:<address>:<count>, where <count> is the number of entries on delayed list for that node.

    See also

    Galera status variable: wsrep_evs_delayed

    "},{"location":"wsrep-status-index.html#wsrep_evs_evict_list","title":"wsrep_evs_evict_list","text":"

    List of UUIDs of the evicted nodes.

    See also

    Galera status variable: wsrep_evs_evict_list

    "},{"location":"wsrep-status-index.html#wsrep_evs_repl_latency","title":"wsrep_evs_repl_latency","text":"

    This status variable provides information regarding group communication replication latency. This latency is measured in seconds from when a message is sent out to when a message is received.

    The format of the output is <min>/<avg>/<max>/<std_dev>/<sample_size>.

    See also

    Galera status variable: wsrep_evs_repl_latency

    "},{"location":"wsrep-status-index.html#wsrep_evs_state","title":"wsrep_evs_state","text":"

    Internal EVS protocol state.

    See also

    Galera status variable: wsrep_evs_state

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval","title":"wsrep_flow_control_interval","text":"

    This variable shows the lower and upper limits for Galera flow control. The upper limit is the maximum allowed number of requests in the queue. If the queue reaches the upper limit, new requests are denied. As existing requests get processed, the queue decreases, and once it reaches the lower limit, new requests will be allowed again.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_high","title":"wsrep_flow_control_interval_high","text":"

    Shows the upper limit for flow control to trigger.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_interval_low","title":"wsrep_flow_control_interval_low","text":"

    Shows the lower limit for flow control to stop.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_paused","title":"wsrep_flow_control_paused","text":"

The fraction of time since the last status query during which replication was paused due to flow control.

    See also

    Galera status variable: wsrep_flow_control_paused

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_paused_ns","title":"wsrep_flow_control_paused_ns","text":"

    Total time spent in a paused state measured in nanoseconds.

    See also

    Galera status variable: wsrep_flow_control_paused_ns

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_recv","title":"wsrep_flow_control_recv","text":"

The number of FC_PAUSE events the node has received. Unlike most status variables, this counter does not reset each time you run the query; it is reset only when the server restarts.

    See also

    Galera status variable: wsrep_flow_control_recv

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_requested","title":"wsrep_flow_control_requested","text":"

    This variable returns whether or not a node requested a replication pause.

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_sent","title":"wsrep_flow_control_sent","text":"

The number of FC_PAUSE events the node has sent. Unlike most status variables, this counter does not reset each time you run the query; it is reset only when the server restarts.

    See also

    Galera status variable: wsrep_flow_control_sent

    "},{"location":"wsrep-status-index.html#wsrep_flow_control_status","title":"wsrep_flow_control_status","text":"

    This variable shows whether a node has flow control enabled for normal traffic. It does not indicate the status of flow control during SST.

    "},{"location":"wsrep-status-index.html#wsrep_gcache_pool_size","title":"wsrep_gcache_pool_size","text":"

    This variable shows the size of the page pool and dynamic memory allocated for GCache (in bytes).

    "},{"location":"wsrep-status-index.html#wsrep_gcomm_uuid","title":"wsrep_gcomm_uuid","text":"

This status variable exposes UUIDs in gvwstate.dat, which are Galera view IDs (thus unrelated to cluster state UUIDs). This UUID is unique for each node. You will need to know this value when using the manual eviction feature.

    See also

    Galera status variable: wsrep_gcomm_uuid

    "},{"location":"wsrep-status-index.html#wsrep_incoming_addresses","title":"wsrep_incoming_addresses","text":"

    Shows the comma-separated list of incoming node addresses in the cluster.

    See also

    Galera status variable: wsrep_incoming_addresses

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_status","title":"wsrep_ist_receive_status","text":"

This variable displays the progress of IST for a joiner node. If IST is not running, the value is blank. If IST is running, the value is the percentage of the transfer completed.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_end","title":"wsrep_ist_receive_seqno_end","text":"

    The sequence number of the last transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_current","title":"wsrep_ist_receive_seqno_current","text":"

    The sequence number of the current transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_ist_receive_seqno_start","title":"wsrep_ist_receive_seqno_start","text":"

    The sequence number of the first transaction in IST.

    "},{"location":"wsrep-status-index.html#wsrep_last_applied","title":"wsrep_last_applied","text":"

    Sequence number of the last applied transaction.

    "},{"location":"wsrep-status-index.html#wsrep_last_committed","title":"wsrep_last_committed","text":"

    Sequence number of the last committed transaction.

    "},{"location":"wsrep-status-index.html#wsrep_local_bf_aborts","title":"wsrep_local_bf_aborts","text":"

    Number of local transactions that were aborted by replica transactions while being executed.

    See also

    Galera status variable: wsrep_local_bf_aborts

    "},{"location":"wsrep-status-index.html#wsrep_local_cached_downto","title":"wsrep_local_cached_downto","text":"

The lowest sequence number in GCache. This information can be helpful when determining whether a joining node will receive IST or SST. If the value is 0, it means there are no writesets in GCache (usual for a single node).

    See also

    Galera status variable: wsrep_local_cached_downto

    "},{"location":"wsrep-status-index.html#wsrep_local_cert_failures","title":"wsrep_local_cert_failures","text":"

    Number of writesets that failed the certification test.

    See also

    Galera status variable: wsrep_local_cert_failures

    "},{"location":"wsrep-status-index.html#wsrep_local_commits","title":"wsrep_local_commits","text":"

Number of writesets committed on the node.

    See also

    Galera status variable: wsrep_local_commits

    "},{"location":"wsrep-status-index.html#wsrep_local_index","title":"wsrep_local_index","text":"

    Node\u2019s index in the cluster.

    See also

    Galera status variable: wsrep_local_index

    "},{"location":"wsrep-status-index.html#wsrep_local_recv_queue","title":"wsrep_local_recv_queue","text":"

    Current length of the receive queue (that is, the number of writesets waiting to be applied).

    See also

    Galera status variable: wsrep_local_recv_queue

    "},{"location":"wsrep-status-index.html#wsrep_local_recv_queue_avg","title":"wsrep_local_recv_queue_avg","text":"

Average length of the receive queue since the last status query. When this number is bigger than 0, the node can\u2019t apply writesets as fast as they are received. This could be a sign that the node is overloaded, and it may cause replication throttling.

    See also

    Galera status variable: wsrep_local_recv_queue_avg

    "},{"location":"wsrep-status-index.html#wsrep_local_replays","title":"wsrep_local_replays","text":"

    Number of transaction replays due to asymmetric lock granularity.

    See also

    Galera status variable: wsrep_local_replays

    "},{"location":"wsrep-status-index.html#wsrep_local_send_queue","title":"wsrep_local_send_queue","text":"

    Current length of the send queue (that is, the number of writesets waiting to be sent).

    See also

    Galera status variable: wsrep_local_send_queue

    "},{"location":"wsrep-status-index.html#wsrep_local_send_queue_avg","title":"wsrep_local_send_queue_avg","text":"

Average length of the send queue since the last status query. When the cluster experiences network throughput issues or replication throttling, this value will be significantly bigger than 0.

    See also

    Galera status variable: wsrep_local_send_queue_avg

    "},{"location":"wsrep-status-index.html#wsrep_local_state","title":"wsrep_local_state","text":"

    Internal Galera cluster FSM state number.

    See also

    Galera status variable: wsrep_local_state

    "},{"location":"wsrep-status-index.html#wsrep_local_state_comment","title":"wsrep_local_state_comment","text":"

    Internal number and the corresponding human-readable comment of the node\u2019s state. Possible values are:

    Num Comment Description 1 Joining Node is joining the cluster 2 Donor/Desynced Node is the donor to the node joining the cluster 3 Joined Node has joined the cluster 4 Synced Node is synced with the cluster

    See also

    Galera status variable: wsrep_local_state_comment

    "},{"location":"wsrep-status-index.html#wsrep_local_state_uuid","title":"wsrep_local_state_uuid","text":"

    The UUID of the state stored on the node.

    See also

    Galera status variable: wsrep_local_state_uuid

    "},{"location":"wsrep-status-index.html#wsrep_monitor_status","title":"wsrep_monitor_status","text":"

The status of the local monitor (local and replicating actions), apply monitor (apply actions of write-sets), and commit monitor (commit actions of write-sets). In the value of this variable, each monitor (L: Local, A: Apply, C: Commit) is represented as a last_entered and last_left pair:

    wsrep_monitor_status (L/A/C)    [ ( 7, 5), (2, 2), ( 2, 2) ]\n

    last_entered

    Shows which transaction or write-set has recently entered the queue.

    last_left

Shows the last transaction or write-set that has been executed and left the queue.

    According to the Galera protocol, transactions can be applied in parallel but must be committed in a given order. This rule implies that there can be multiple transactions in the apply state at a given point of time but transactions are committed sequentially.

    See also

    Galera Documentation: Database replication

    "},{"location":"wsrep-status-index.html#wsrep_protocol_version","title":"wsrep_protocol_version","text":"

    Version of the wsrep protocol used.

    See also

    Galera status variable: wsrep_protocol_version

    "},{"location":"wsrep-status-index.html#wsrep_provider_name","title":"wsrep_provider_name","text":"

    Name of the wsrep provider (usually Galera).

    See also

    Galera status variable: wsrep_provider_name

    "},{"location":"wsrep-status-index.html#wsrep_provider_vendor","title":"wsrep_provider_vendor","text":"

Name of the wsrep provider vendor (usually Codership Oy).

    See also

    Galera status variable: wsrep_provider_vendor

    "},{"location":"wsrep-status-index.html#wsrep_provider_version","title":"wsrep_provider_version","text":"

    Current version of the wsrep provider.

    See also

    Galera status variable: wsrep_provider_version

    "},{"location":"wsrep-status-index.html#wsrep_ready","title":"wsrep_ready","text":"

This variable shows if the node is ready to accept queries. If the status is OFF, almost all queries will fail with an ERROR 1047 (08S01) Unknown Command error (unless the wsrep_on variable is set to 0).

    See also

    Galera status variable: wsrep_ready

    "},{"location":"wsrep-status-index.html#wsrep_received","title":"wsrep_received","text":"

    Total number of writesets received from other nodes.

    See also

    Galera status variable: wsrep_received

    "},{"location":"wsrep-status-index.html#wsrep_received_bytes","title":"wsrep_received_bytes","text":"

    Total size (in bytes) of writesets received from other nodes.

    "},{"location":"wsrep-status-index.html#wsrep_repl_data_bytes","title":"wsrep_repl_data_bytes","text":"

    Total size (in bytes) of data replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_keys","title":"wsrep_repl_keys","text":"

    Total number of keys replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_keys_bytes","title":"wsrep_repl_keys_bytes","text":"

    Total size (in bytes) of keys replicated.

    "},{"location":"wsrep-status-index.html#wsrep_repl_other_bytes","title":"wsrep_repl_other_bytes","text":"

    Total size of other bits replicated.

    "},{"location":"wsrep-status-index.html#wsrep_replicated","title":"wsrep_replicated","text":"

    Total number of writesets sent to other nodes.

    See also

    Galera status variable: wsrep_replicated

    "},{"location":"wsrep-status-index.html#wsrep_replicated_bytes","title":"wsrep_replicated_bytes","text":"

Total size of replicated writesets. To compute the actual size of bytes sent over the network to cluster peers, multiply the value of this variable by the number of cluster peers in the given network segment.

    See also

    Galera status variable: wsrep_replicated_bytes

    "},{"location":"wsrep-status-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"wsrep-system-index.html","title":"Index of wsrep system variables","text":"

    Percona XtraDB Cluster introduces a number of MySQL system variables related to write-set replication.

    "},{"location":"wsrep-system-index.html#pxc_encrypt_cluster_traffic","title":"pxc_encrypt_cluster_traffic","text":"Option Description Command Line: --pxc-encrypt-cluster-traffic Config File: Yes Scope: Global Dynamic: No Default Value: ON

    Enables automatic configuration of SSL encryption. When disabled, you need to configure SSL manually to encrypt Percona XtraDB Cluster traffic.

    Possible values:

    For more information, see SSL Automatic Configuration.

    "},{"location":"wsrep-system-index.html#pxc_maint_mode","title":"pxc_maint_mode","text":"Option Description Command Line: --pxc-maint-mode Config File: Yes Scope: Global Dynamic: Yes Default Value: DISABLED

    Specifies the maintenance mode for taking a node down without adjusting settings in ProxySQL.

    The following values are available:

    For more information, see Assisted Maintenance Mode.

    "},{"location":"wsrep-system-index.html#pxc_maint_transition_period","title":"pxc_maint_transition_period","text":"Option Description Command Line: --pxc-maint-transition-period Config File: Yes Scope: Global Dynamic: Yes Default Value: 10 (ten seconds)

Defines the transition period when you change pxc_maint_mode to SHUTDOWN or MAINTENANCE. By default, the period is set to 10 seconds, which should be enough for most transactions to finish. You can increase the value to accommodate longer-running transactions.
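
Because the variable is dynamic, you can adjust it at runtime; for example (the 30-second value is illustrative):

mysql> SET GLOBAL pxc_maint_transition_period=30;\n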

    For more information, see Assisted Maintenance Mode.

    "},{"location":"wsrep-system-index.html#pxc_strict_mode","title":"pxc_strict_mode","text":"Option Description Command Line: --pxc-strict-mode Config File: Yes Scope: Global Dynamic: Yes Default Value: ENFORCING or DISABLED

    Controls PXC Strict Mode, which runs validations to avoid the use of experimental and unsupported features in Percona XtraDB Cluster.

    Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:

By default, pxc_strict_mode is set to ENFORCING. However, if the node is acting as a standalone server or is bootstrapping, pxc_strict_mode defaults to DISABLED.

    Note

    When changing the value of pxc_strict_mode from DISABLED or PERMISSIVE to ENFORCING or MASTER, ensure that the following configuration is used:

    The SERIALIZABLE method of isolation is not allowed in ENFORCING mode.

    For more information, see PXC Strict Mode.

    "},{"location":"wsrep-system-index.html#wsrep_applier_fk_checks","title":"wsrep_applier_FK_checks","text":"Option Description Command Line: --wsrep-applier-FK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_FK_checks variable is deprecated in favor of this variable.

    Defines whether foreign key checking is done for applier threads. This is enabled by default.

    See also

    MySQL wsrep option: wsrep_applier_FK_checks

    "},{"location":"wsrep-system-index.html#wsrep_applier_threads","title":"wsrep_applier_threads","text":"Option Description Command Line: --wsrep-applier-threads Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_threads variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads variable.

    Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.

    Note

When you decrease the number of threads, the change does not kill the threads immediately; it stops them after they finish applying the current transaction. An increase, however, takes effect immediately.

    If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1 to see if that resolves the issue. The default value can be increased for better throughput.

    You may want to increase it as suggested in Codership documentation for flow control: when the node is in JOINED state, increasing the number of replica threads can speed up the catchup to SYNCED.

    You can also estimate the optimal value for this from wsrep_cert_deps_distance as suggested in the Galera Cluster documentation.
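
For example, a runtime sketch that raises the number of applier threads (the value 4 is illustrative):

mysql> SET GLOBAL wsrep_applier_threads=4;\n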

For more configuration tips, see Setting Parallel Slave Threads.

    See also

    MySQL wsrep option: wsrep_applier_threads

    "},{"location":"wsrep-system-index.html#wsrep_applier_uk_checks","title":"wsrep_applier_UK_checks","text":"Option Description Command Line: --wsrep-applier-UK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_slave_UK_checks variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks variable.

    Defines whether unique key checking is done for applier threads. This is disabled by default.

    See also

    MySQL wsrep option: wsrep_applier_UK_checks

    "},{"location":"wsrep-system-index.html#wsrep_auto_increment_control","title":"wsrep_auto_increment_control","text":"Option Description Command Line: --wsrep-auto-increment-control Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    Enables automatic adjustment of auto-increment system variables depending on the size of the cluster:

    This helps prevent auto-increment replication conflicts across the cluster by giving each node its own range of auto-increment values. It is enabled by default.

Automatic adjustment may not be desirable depending on the application\u2019s use of and assumptions about auto-increments. It can be disabled in source-replica clusters.

    See also

    MySQL wsrep option: wsrep_auto_increment_control

    "},{"location":"wsrep-system-index.html#wsrep_causal_reads","title":"wsrep_causal_reads","text":"Option Description Command Line: --wsrep-causal-reads Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: OFF

In some cases, the source may apply events faster than a replica, which can cause the source and replica to become out of sync for a brief moment. When this variable is set to ON, the replica waits until that event is applied before processing any other queries. Enabling this variable results in larger latencies.

    Note

    This variable was deprecated because enabling it is the equivalent of setting wsrep_sync_wait to 1.

    See also

    MySQL wsrep option: wsrep_causal_reads

    "},{"location":"wsrep-system-index.html#wsrep_certification_rules","title":"wsrep_certification_rules","text":"Option Description Command Line: --wsrep-certification-rules Config File: Yes Scope: Global Dynamic: Yes Values: STRICT, OPTIMIZED Default Value: STRICT

This variable controls how certification is done in the cluster; in particular, it affects how foreign keys are handled.

STRICT: Two INSERTs that happen at about the same time on two different nodes in a child table, inserting different (non-conflicting) rows that both point to the same row in the parent table, may result in a certification failure.

OPTIMIZED: Two INSERTs that happen at about the same time on two different nodes in a child table, inserting different (non-conflicting) rows that both point to the same row in the parent table, will not result in a certification failure.

    See also

    Galera Cluster Documentation: MySQL wsrep options

    "},{"location":"wsrep-system-index.html#wsrep_certify_nonpk","title":"wsrep_certify_nonPK","text":"Option Description Command Line: --wsrep-certify-nonpk Config File: Yes Scope: Global Dynamic: No Default Value: ON

    Enables automatic generation of primary keys for rows that don\u2019t have them. Write set replication requires primary keys on all tables to allow for parallel applying of transactions. This variable is enabled by default. As a rule, make sure that all tables have primary keys.

    See also

    MySQL wsrep option: wsrep_certify_nonPK

    "},{"location":"wsrep-system-index.html#wsrep_cluster_address","title":"wsrep_cluster_address","text":"Option Description Command Line: --wsrep-cluster-address Config File: Yes Scope: Global Dynamic: Yes

    Defines the back-end schema, IP addresses, ports, and options that the node uses when connecting to the cluster. This variable needs to specify at least one other node\u2019s address, which is alive and a member of the cluster. In practice, it is best (but not necessary) to provide a complete list of all possible cluster nodes. The value should be of the following format:

    <schema>://<address>[?<option1>=<value1>[&<option2>=<value2>]],...\n

    The only back-end schema currently supported is gcomm. The IP address can contain a port number after a colon. Options are specified after ? and separated by &. You can specify multiple addresses separated by commas.

    For example:

    wsrep_cluster_address=\"gcomm://192.168.0.1:4567?gmcast.listen_addr=0.0.0.0:5678\"\n

If an empty gcomm:// is provided, the node bootstraps itself (that is, forms a new cluster). It is not recommended to have an empty cluster address in the production configuration after the cluster has been bootstrapped initially. If you want to bootstrap a new cluster with a node, pass the --wsrep-new-cluster option when starting.

    See also

    MySQL wsrep option: wsrep_cluster_address

    "},{"location":"wsrep-system-index.html#wsrep_cluster_name","title":"wsrep_cluster_name","text":"Option Description Command Line: --wsrep-cluster-name Config File: Yes Scope: Global Dynamic: No Default Value: my_wsrep_cluster

    Specifies the name of the cluster and must be identical on all nodes. A node checks the value when attempting to connect to the cluster. If the names match, the node connects.

    Edit the value in the my.cnf in the [galera] section.

    [galera]\n\n    wsrep_cluster_name=simple-cluster\n

    Execute SHOW VARIABLES with the LIKE operator to view the variable:

    mysql> SHOW VARIABLES LIKE 'wsrep_cluster_name';\n
    Expected output
    +--------------------+----------------+\n| Variable_name      | Value          |\n+--------------------+----------------+\n| wsrep_cluster_name | simple-cluster |\n+--------------------+----------------+\n

    Note

    It should not exceed 32 characters. A node cannot join the cluster if the cluster names do not match. You must re-bootstrap the cluster after a name change.

    See also

    MySQL wsrep option: wsrep_cluster_name

    "},{"location":"wsrep-system-index.html#wsrep_data_home_dir","title":"wsrep_data_home_dir","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: No Default Value: /var/lib/mysql (or whatever path is specified by datadir)

    Specifies the path to the directory where the wsrep provider stores its files (such as grastate.dat).

    See also

    MySQL wsrep option: wsrep_data_home_dir

    "},{"location":"wsrep-system-index.html#wsrep_dbug_option","title":"wsrep_dbug_option","text":"Option Description Command Line: --wsrep-dbug-option Config File: Yes Scope: Global Dynamic: Yes

    Defines DBUG options to pass to the wsrep provider.

    See also

    MySQL wsrep option: wsrep_dbug_option

    "},{"location":"wsrep-system-index.html#wsrep_debug","title":"wsrep_debug","text":"Option Description Command Line: --wsrep-debug Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

Enables debug-level logging for the database server and wsrep-lib, an integration library for the WSREP API with additional convenience for transaction processing. By default, the wsrep_debug variable is disabled.

    This variable can be used when trying to diagnose problems or when submitting a bug.

    You can set wsrep_debug in the following my.cnf groups:

    This variable may be set to one of the following values:

    NONE

    No debug-level messages.

    SERVER

    wsrep-lib general debug-level messages and detailed debug-level messages from the server_state part are printed out. Galera debug-level logs are printed out.

    TRANSACTION

    Same as SERVER + wsrep-lib transaction part

    STREAMING

    Same as TRANSACTION + wsrep-lib streaming part

    CLIENT

    Same as STREAMING + wsrep-lib client_service part

    Note

    Do not enable debugging in production environments, because it logs authentication info (that is, passwords).

    See also

    MySQL wsrep option: wsrep_debug

    "},{"location":"wsrep-system-index.html#wsrep_desync","title":"wsrep_desync","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    Defines whether the node should participate in Flow Control. By default, this variable is disabled, meaning that if the receive queue becomes too big, the node engages in Flow Control: it works through the receive queue until it reaches a more manageable size. For more information, see wsrep_local_recv_queue and wsrep_flow_control_interval.

    Enabling this variable will disable Flow Control for the node. It will continue to receive write-sets that it is not able to apply, the receive queue will keep growing, and the node will keep falling behind the cluster indefinitely.

Toggling this back to OFF will require an IST or an SST, depending on how long the node was desynchronized. This is similar to the cluster desynchronization that occurs during RSU. Because of this, it\u2019s not a good idea to enable wsrep_desync for a long period of time or for several nodes at once.

    Note

    You can also desync a node using the /\\*! WSREP_DESYNC \\*/ query comment.

    See also

    MySQL wsrep option: wsrep_desync

    "},{"location":"wsrep-system-index.html#wsrep_dirty_reads","title":"wsrep_dirty_reads","text":"Option Description Command Line: --wsrep-dirty-reads Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: OFF

    Defines whether the node accepts read queries when in a non-operational state, that is, when it loses connection to the Primary Component. By default, this variable is disabled and the node rejects all queries, because there is no way to tell if the data is correct.

    If you enable this variable, the node will permit read queries (USE, SELECT, LOCK TABLE, and UNLOCK TABLES), but any command that modifies or updates the database on a non-operational node will still be rejected (including DDL and DML statements, such as INSERT, DELETE, and UPDATE).

    To avoid deadlock errors, set the wsrep_sync_wait variable to 0 if you enable wsrep_dirty_reads.

    As of Percona XtraDB Cluster 8.0.26-16, you can update the variable with a set_var hint.

    mysql> SELECT @@wsrep_dirty_reads;\n
    Expected output
    +-----------------------+\n| @@wsrep_dirty_reads   |\n+=======================+\n| OFF                   |\n+-----------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_dirty_reads=ON) */ @@wsrep_dirty_reads;\n
    Expected output
    +-----------------------+\n| @@wsrep_dirty_reads   |\n+=======================+\n| ON                    |\n+-----------------------+\n

    See also

    MySQL wsrep option: wsrep_dirty_reads

    "},{"location":"wsrep-system-index.html#wsrep_drupal_282555_workaround","title":"wsrep_drupal_282555_workaround","text":"Option Description Command Line: --wsrep-drupal-282555-workaround Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    Enables a workaround for MySQL InnoDB bug that affects Drupal (Drupal bug #282555 and MySQL bug #41984). In some cases, duplicate key errors would occur when inserting the DEFAULT value into an AUTO_INCREMENT column.

    See also

    MySQL wsrep option: wsrep_drupal_282555_workaround

    "},{"location":"wsrep-system-index.html#wsrep_forced_binlog_format","title":"wsrep_forced_binlog_format","text":"Option Description Command Line: --wsrep-forced-binlog-format Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

    Defines a binary log format that will always be effective, regardless of the client session binlog_format variable value.

    Possible values for this variable are:

    See also

    MySQL wsrep option: wsrep_forced_binlog_format

    "},{"location":"wsrep-system-index.html#wsrep_ignore_apply_errors","title":"wsrep_ignore_apply_errors","text":"Option Description Command Line: --wsrep-ignore-apply-errors Config File: Yes Scope: Global Dynamic: Yes Default Value: 0

    Defines the rules of wsrep applier behavior on errors. You can change the settings by editing the my.cnf file under [mysqld] or at runtime.

    Note

    In Percona XtraDB Cluster version 8.0.19-10, the default value has changed from 7 to 0. If you have been working with an earlier version of the PXC 8.0 series, you may see different behavior when upgrading to this version or later.

    The variable has the following options:

    Value Description WSREP_IGNORE_ERRORS_NONE All replication errors are treated as errors and will shutdown the node (default behavior) WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL DROP DATABASE, DROP TABLE, DROP INDEX, ALTER TABLE are converted to a warning if they result in ER_DB_DROP_EXISTS, ER_BAD_TABLE_ERROR OR ER_CANT_DROP_FIELD_OR_KEY errors WSREP_IGNORE_ERRORS_ON_RECONCILING_DML DELETE events are treated as warnings if they failed because the deleted row was not found (ER_KEY_NOT_FOUND) WSREP_IGNORE_ERRORS_ON_DDL All DDL errors will be treated as a warning WSREP_IGNORE_ERRORS_MAX Infers WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML and WSREP_IGNORE_ERRORS_ON_DDL

    Setting the variable between 0 and 7 has the following behavior:

    Setting Behavior 0 WSREP_IGNORE_ERRORS_NONE 1 WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL 2 WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 3 WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 4 WSREP_IGNORE_ERRORS_ON_DDL 5 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL 6 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML 7 WSREP_IGNORE_ERRORS_ON_DDL, WSREP_IGNORE_ERRORS_ON_RECONCILING_DML, WSREP_IGNORE_ERRORS_ON_RECONCILING_DDL"},{"location":"wsrep-system-index.html#wsrep_min_log_verbosity","title":"wsrep_min_log_verbosity","text":"Option Description Command Line: --wsrep-min-log-verbosity Config File: Yes Scope: Global Dynamic: Yes Default Value: 3

This variable defines the minimum logging verbosity of wsrep/Galera and acts in conjunction with the log_error_verbosity variable. The wsrep_min_log_verbosity variable accepts the same values as log_error_verbosity.

    The actual log verbosity of wsrep/Galera can be greater than the value of wsrep_min_log_verbosity if log_error_verbosity is greater than wsrep_min_log_verbosity.

    A few examples:

    log_error_verbosity wsrep_min_log_verbosity MySQL Logs Verbosity wsrep Logs Verbosity 2 3 system error, warning system error, warning, info 1 3 system error system error, warning, info 1 2 system error system error, warning 3 1 system error, warning, info system error, warning, info

    Note the case where log_error_verbosity=3 and wsrep_min_log_verbosity=1. The actual log verbosity of wsrep/Galera is 3 (system error, warning, info) because log_error_verbosity is greater.

    See also

    MySQL Documentation: log_error_verbosity

    Galera Cluster Documentation: Database Server Logs

    "},{"location":"wsrep-system-index.html#wsrep_load_data_splitting","title":"wsrep_load_data_splitting","text":"Option Description Command Line: --wsrep-load-data-splitting Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    Defines whether the node should split large LOAD DATA transactions. This variable is enabled by default, meaning that LOAD DATA commands are split into transactions of 10 000 rows or less.

    If you disable this variable, then huge data loads may prevent the node from completely rolling the operation back in the event of a conflict, and whatever gets committed stays committed.

    Note

    It doesn\u2019t work as expected with autocommit=0 when enabled.

    See also

    MySQL wsrep option: wsrep_load_data_splitting

    "},{"location":"wsrep-system-index.html#wsrep_log_conflicts","title":"wsrep_log_conflicts","text":"Option Description Command Line: --wsrep-log-conflicts Config File: Yes Scope: Global Dynamic: No Default Value: OFF

    Defines whether the node should log additional information about conflicts. By default, this variable is disabled and Percona XtraDB Cluster uses standard logging features in MySQL.

    If you enable this variable, it will also log table and schema where the conflict occurred, as well as the actual values for keys that produced the conflict.

    See also

    MySQL wsrep option: wsrep_log_conflicts

    "},{"location":"wsrep-system-index.html#wsrep_max_ws_rows","title":"wsrep_max_ws_rows","text":"Option Description Command Line: --wsrep-max-ws-rows Config File: Yes Scope: Global Dynamic: Yes Default Value: 0 (no limit)

    Defines the maximum number of rows each write-set can contain.

    By default, there is no limit for the maximum number of rows in a write-set. The maximum allowed value is 1048576.

    See also

    MySQL wsrep option: wsrep_max_ws_rows

    "},{"location":"wsrep-system-index.html#wsrep_max_ws_size","title":"wsrep_max_ws_size","text":"Option Description Command Line: --wsrep_max_ws_size Config File: Yes Scope: Global Dynamic: Yes Default Value: 2147483647 (2 GB)

    Defines the maximum write-set size (in bytes). Anything bigger than the specified value will be rejected.

    You can set it to any value between 1024 and the default 2147483647.
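
For example, a runtime sketch that lowers the limit to 1 GB (the value is illustrative):

mysql> SET GLOBAL wsrep_max_ws_size=1073741824;\n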

    See also

    MySQL wsrep option: wsrep_max_ws_size

    "},{"location":"wsrep-system-index.html#wsrep_mode","title":"wsrep_mode","text":"Option Description Command Line: --wsrep-mode Config File: Yes Scope: Global Dynamic: Yes Default Value:

    This variable has been implemented in Percona XtraDB Cluster 8.0.31.

    Defines the node behavior according to a specified value. The value is empty or disabled by default.

    The available values are:

    See also

    MySQL wsrep option: wsrep_mode

    "},{"location":"wsrep-system-index.html#wsrep_node_address","title":"wsrep_node_address","text":"Option Description Command Line: --wsrep-node-address Config File: Yes Scope: Global Dynamic: No Default Value: IP of the first network interface (eth0) and default port (4567)

    Specifies the network address of the node. By default, this variable is set to the IP address of the first network interface (usually eth0 or enp2s0) and the default port (4567).

While the default value should be correct in most cases, there are situations when you need to specify it manually. For example:

    The value should be specified in the following format:

    <ip_address>[:port]\n
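For example, a my.cnf entry might look like the following (the IP address is hypothetical):

[mysqld]\nwsrep_node_address=192.168.70.63:4567\n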

    Note

    The value of this variable is also used as the default value for the wsrep_sst_receive_address variable and the ist.recv_addr option.

    See also

    MySQL wsrep option: wsrep_node_address

    "},{"location":"wsrep-system-index.html#wsrep_node_incoming_address","title":"wsrep_node_incoming_address","text":"Option Description Command Line: --wsrep-node-incoming-address Config File: Yes Scope: Global Dynamic: No Default Value: AUTO

    Specifies the network address from which the node expects client connections. By default, it uses the IP address from wsrep_node_address and port number 3306.

    This information is used for the wsrep_incoming_addresses variable which shows all active cluster nodes.

    See also

    MySQL wsrep option: wsrep_node_incoming_address

    "},{"location":"wsrep-system-index.html#wsrep_node_name","title":"wsrep_node_name","text":"Option Description Command Line: --wsrep-node-name Config File: Yes Scope: Global Dynamic: Yes Default Value: The node\u2019s host name

    Defines a unique name for the node. Defaults to the host name.

In many situations, you can use the value of this variable to identify the given node in the cluster as an alternative to using the node address (the value of wsrep_node_address).

    Note

The wsrep_sst_donor variable is an example where only the value of wsrep_node_name may be used; the node address is not permitted.

    "},{"location":"wsrep-system-index.html#wsrep_notify_cmd","title":"wsrep_notify_cmd","text":"Option Description Command Line: --wsrep-notify-cmd Config File: Yes Scope: Global Dynamic: No

    Specifies the notification command that the node should execute whenever cluster membership or local node status changes. This can be used for alerting or to reconfigure load balancers.

    Note

    The node will block and wait until the command or script completes and returns before it can proceed. If the script performs any potentially blocking or long-running operations, such as network communication, you should consider initiating such operations in the background and have the script return immediately.
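A minimal configuration sketch (the script path is hypothetical; the script itself must return quickly, as noted above):

[mysqld]\nwsrep_notify_cmd=/usr/local/bin/wsrep_notify.sh\n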

    See also

    MySQL wsrep option: wsrep_notify_cmd

    "},{"location":"wsrep-system-index.html#wsrep_on","title":"wsrep_on","text":"Option Description Command Line: No Config File: No Scope: Session Dynamic: Yes Default Value: ON

Defines whether transaction changes made in the current session are replicated to the cluster.

If set to OFF for a session, no transaction changes are replicated in that session. The setting does not cause the node to leave the cluster, and the node continues to communicate with other nodes.

    See also

    MySQL wsrep option: wsrep_on

    "},{"location":"wsrep-system-index.html#wsrep_osu_method","title":"wsrep_OSU_method","text":"Option Description Command Line: --wsrep-OSU-method Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: TOI

    Defines the method for Online Schema Upgrade that the node uses to replicate DDL statements.

    For information on the available methods, see Online Schema upgrade and for information on Non-blocking operations, see NBO.
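For example, assuming RSU is among the methods supported on your version, you could switch the current session with:

mysql> SET SESSION wsrep_OSU_method='RSU';\n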

    See also

    MySQL wsrep option: wsrep_OSU_method

    "},{"location":"wsrep-system-index.html#wsrep_provider","title":"wsrep_provider","text":"Option Description Command Line: --wsrep-provider Config File: Yes Scope: Global Dynamic: No

    Specifies the path to the Galera library. This is usually /usr/lib64/libgalera_smm.so on CentOS/RHEL and /usr/lib/libgalera_smm.so on Debian/Ubuntu.

If you do not specify a path or the value is not valid, the node behaves as a standalone instance of MySQL.
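A typical my.cnf entry on CentOS/RHEL might look like this (adjust the path to match your installation):

[mysqld]\nwsrep_provider=/usr/lib64/libgalera_smm.so\n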

    See also

    MySQL wsrep option: wsrep_provider

    "},{"location":"wsrep-system-index.html#wsrep_provider_options","title":"wsrep_provider_options","text":"Option Description Command Line: --wsrep-provider-options Config File: Yes Scope: Global Dynamic: No

Specifies optional settings for the replication provider documented in the Index of wsrep_provider options. These options affect how various situations are handled during replication.
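For example, a sketch that sets the Galera cache size and the flow-control limit (the values are illustrative; check the provider options index for the options relevant to your workload):

[mysqld]\nwsrep_provider_options="gcache.size=2G; gcs.fc_limit=160"\n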

    See also

    MySQL wsrep option: wsrep_provider_options

    "},{"location":"wsrep-system-index.html#wsrep_recover","title":"wsrep_recover","text":"Option Description Command Line: --wsrep-recover Config File: Yes Scope: Global Dynamic: No Default Value: OFF Location: mysqld_safe`

Recovers the database state after a crash by parsing the GTID from the log. If the GTID is found, it is assigned as the initial position for the server.

    "},{"location":"wsrep-system-index.html#wsrep_reject_queries","title":"wsrep_reject_queries","text":"Option Description Command Line: No Config File: Yes Scope: Global Dynamic: Yes Default Value: NONE

    Defines whether the node should reject queries from clients. Rejecting queries can be useful during upgrades, when you want to keep the node up and apply write-sets without accepting queries.

    When a query is rejected, the following error is returned:

    Error 1047: Unknown command\n

    The following values are available:

    Note

This variable does not affect Galera replication in any way; only the applications that connect to the database are affected. If you want to desync a node, use wsrep_desync.
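For example, assuming ALL is among the accepted values on your version, you could stop accepting client queries during an upgrade with:

mysql> SET GLOBAL wsrep_reject_queries=ALL;\n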

    See also

    MySQL wsrep option: wsrep_reject_queries

    "},{"location":"wsrep-system-index.html#wsrep_replicate_myisam","title":"wsrep_replicate_myisam","text":"Option Description Command Line: --wsrep-replicate-myisam Config File: Yes Scope: Session, Global Dynamic: No Default Value: OFF

    Defines whether DML statements for MyISAM tables should be replicated. It is disabled by default, because MyISAM replication is still experimental.

At the global level, wsrep_replicate_myisam can be set only during startup. At the session level, you can change it at runtime as well.

For older nodes in the cluster, wsrep_replicate_myisam should work because the TOI decision (for MyISAM DDL) is made on the origin node. Mixing non-MyISAM and MyISAM tables in the same DDL statement is not recommended when wsrep_replicate_myisam is disabled: if any table in the list is MyISAM, the whole DDL statement is not put under TOI.

    Note

    You should keep in mind the following when using MyISAM replication:

    "},{"location":"wsrep-system-index.html#wsrep_restart_replica","title":"wsrep_restart_replica","text":"Option Description Command Line: --wsrep-restart-replica Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave variable is deprecated in favor of this variable.

Defines whether the replication replica should be restarted when the node rejoins the cluster. Enabling this can be useful because the asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in a non-primary state.

    See also

    MySQL wsrep option: wsrep_restart_slave

    "},{"location":"wsrep-system-index.html#wsrep_restart_slave","title":"wsrep_restart_slave","text":"Option Description Command Line: --wsrep-restart-slave Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, the wsrep_restart_slave variable is deprecated and may be removed in later versions. Use wsrep_restart_replica.

Defines whether the replication replica should be restarted when the node rejoins the cluster. Enabling this can be useful because the asynchronous replication replica thread is stopped when the node tries to apply the next replication event while the node is in a non-primary state.

    "},{"location":"wsrep-system-index.html#wsrep_retry_autocommit","title":"wsrep_retry_autocommit","text":"Option Description Command Line: --wsrep-retry-autocommit Config File: Yes Scope: Global Dynamic: No Default Value: 1

Specifies the number of times autocommit transactions are retried in the cluster if they encounter certification errors. In the case of a conflict, it should be safe for the cluster node to simply retry the statement without returning an error to the client, hoping that it will pass the next time.

    This can be useful to help an application using autocommit to avoid deadlock errors that can be triggered by replication conflicts.

    If this variable is set to 0, autocommit transactions won\u2019t be retried.
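Because this variable is not dynamic, set it in the configuration file; a sketch with an illustrative value:

[mysqld]\nwsrep_retry_autocommit=3\n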

    See also

    MySQL wsrep option: wsrep_retry_autocommit

    "},{"location":"wsrep-system-index.html#wsrep_rsu_commit_timeout","title":"wsrep_RSU_commit_timeout","text":"Option Description Command Line: --wsrep-RSU-commit-timeout Config File: Yes Scope: Global Dynamic: Yes Default Value: 5000 Range: From 5000 (5 milliseconds) to 31536000000000 (365 days)

Specifies the timeout in microseconds that allows an active connection to complete its COMMIT action before RSU starts.

While running RSU, it is expected that the user has isolated the node and there is no active traffic executing on the node. RSU has a check to ensure this and waits for any active connection in the COMMIT state before starting.

By default, this check has a timeout of 5 milliseconds, but in some cases a COMMIT takes longer. This variable sets the timeout, with allowed values ranging from 5 milliseconds to 365 days. The value must be specified in microseconds.
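For example, to raise the timeout to 50 milliseconds (50000 microseconds), you could run:

mysql> SET GLOBAL wsrep_RSU_commit_timeout=50000;\n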

    Note

An RSU operation does not automatically stop the node from receiving active traffic. A continuous flow of active traffic while RSU continues to wait can result in RSU starvation. The user is expected to block active traffic on the node while performing the RSU operation.

    "},{"location":"wsrep-system-index.html#wsrep_slave_fk_checks","title":"wsrep_slave_FK_checks","text":"Option Description Command Line: --wsrep-slave-FK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: ON

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_FK_checks variable.

    Defines whether foreign key checking is done for applier threads. This is enabled by default.

    "},{"location":"wsrep-system-index.html#wsrep_slave_threads","title":"wsrep_slave_threads","text":"Option Description Command Line: --wsrep-slave-threads Config File: Yes Scope: Global Dynamic: Yes Default Value: 1

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_threads variable.

    Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. This variable is dynamic. You can increase/decrease it at any time.

    Note

When you decrease the number of threads, the node does not kill the threads immediately but stops them after they are done applying the current transaction (the effect of an increase is immediate, though).

    If any replication consistency problems are encountered, it\u2019s recommended to set this back to 1 to see if that resolves the issue. The default value can be increased for better throughput.

    You may want to increase it as suggested in Codership documentation for flow control: when the node is in JOINED state, increasing the number of replica threads can speed up the catchup to SYNCED.

    You can also estimate the optimal value for this from wsrep_cert_deps_distance as suggested in the Galera Cluster documentation.
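For example, you could check the status variable mentioned above and then adjust the thread count (the value 8 is illustrative):

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';\nmysql> SET GLOBAL wsrep_slave_threads=8;\n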

    For more configuration tips, see this document.

    "},{"location":"wsrep-system-index.html#wsrep_slave_uk_checks","title":"wsrep_slave_UK_checks","text":"Option Description Command Line: --wsrep-slave-UK-checks Config File: Yes Scope: Global Dynamic: Yes Default Value: OFF

    As of Percona XtraDB Cluster 8.0.26-16, this variable is deprecated and may be removed in a later version. Use the wsrep_applier_UK_checks variable.

    Defines whether unique key checking is done for applier threads. This is disabled by default.

    "},{"location":"wsrep-system-index.html#wsrep_sr_store","title":"wsrep_SR_store","text":"Option Description Command Line: --wsrep-sr-store Config File: Yes Scope: Global Dynamic: No Default Value: table

    Defines storage for streaming replication fragments. The available values are table, the default value, and none, which disables the variable.

    "},{"location":"wsrep-system-index.html#wsrep_sst_allowed_methods","title":"wsrep_sst_allowed_methods","text":"Option Description Command Line: --wsrep_sst_allowed_methods Config File: Yes Scope: Global Dynamic: No Default Value: xtrabackup-v2

    Percona XtraDB Cluster 8.0.20-11.3 adds this variable.

This variable limits the SST methods accepted by the server for the wsrep_sst_method variable. The default value is xtrabackup-v2.

    "},{"location":"wsrep-system-index.html#wsrep_sst_donor","title":"wsrep_sst_donor","text":"Option Description Command Line: Yes Config File: Yes Scope: Global Dynamic: Yes

    Specifies a list of nodes (using their wsrep_node_name values) that the current node should prefer as donors for SST and IST.

    Warning

    Using IP addresses of nodes instead of node names (the value of wsrep_node_name) as values of wsrep_sst_donor results in an error.

[ERROR] WSREP: State transfer request failed unrecoverably: 113 (No route\nto host). Most likely it is due to inability to communicate with the\ncluster primary component. Restart required.\n

    If the value is empty, the first node in SYNCED state in the index becomes the donor and will not be able to serve requests during the state transfer.

    To consider other nodes if the listed nodes are not available, add a comma at the end of the list, for example:

    wsrep_sst_donor=node1,node2,\n

    If you remove the trailing comma from the previous example, then the joining node will consider only node1 and node2.

    Note

    By default, the joiner node does not wait for more than 100 seconds to receive the first packet from a donor. This is implemented via the sst-initial-timeout option. If you set the list of preferred donors without the trailing comma or believe that all nodes in the cluster can often be unavailable for SST (this is common for small clusters), then you may want to increase the initial timeout (or disable it completely if you don\u2019t mind the joiner node waiting for the state transfer indefinitely).

    See also

    MySQL wsrep option: wsrep_sst_donor

    "},{"location":"wsrep-system-index.html#wsrep_sst_method","title":"wsrep_sst_method","text":"Option Description Command Line: --wsrep-sst-method Config File: Yes Scope: Global Dynamic: Yes Default Value: xtrabackup-v2

    Defines the method or script for State Snapshot Transfer (SST).

    Available values are:

    Note

    xtrabackup-v2 provides support for clusters with GTIDs and async replicas.

    See also

    MySQL wsrep option: wsrep_sst_method

    "},{"location":"wsrep-system-index.html#wsrep_sst_receive_address","title":"wsrep_sst_receive_address","text":"Option Description Command Line: --wsrep-sst-receive-address Config File: Yes Scope: Global Dynamic: Yes Default Value: AUTO

Specifies the network address where the donor node should send state transfers. By default, this variable is set to AUTO, meaning that the IP address from wsrep_node_address is used.

    See also

    MySQL wsrep option: wsrep_sst_receive_address

    "},{"location":"wsrep-system-index.html#wsrep_start_position","title":"wsrep_start_position","text":"Option Description Command Line: --wsrep-start-position Config File: Yes Scope: Global Dynamic: Yes Default Value: 00000000-0000-0000-0000-00000000000000:-1

    Specifies the node\u2019s start position as UUID:seqno. By setting all the nodes to have the same value for this variable, the cluster can be set up without the state transfer.

    See also

    MySQL wsrep option: wsrep_start_position

    "},{"location":"wsrep-system-index.html#wsrep_sync_wait","title":"wsrep_sync_wait","text":"Option Description Command Line: --wsrep-sync-wait Config File: Yes Scope: Session, Global Dynamic: Yes Default Value: 0

    Controls cluster-wide causality checks on certain statements. Checks ensure that the statement is executed on a node that is fully synced with the cluster.

    As of Percona XtraDB Cluster 8.0.26-16, you are able to update the variable with a set_var hint.

       mysql> SELECT @@wsrep_sync_wait;\n
    Expected output
    +---------------------+\n| @@wsrep_sync_wait   |\n+=====================+\n| 3                   |\n+---------------------+\n
       mysql> SELECT /*+ SET_VAR(wsrep_sync_wait=7) */ @@wsrep_sync_wait;\n
    Expected output
    +---------------------+\n| @@wsrep_sync_wait   |\n+=====================+\n| 7                   |\n+---------------------+\n

    Note

    Causality checks of any type can result in increased latency.

The type of statements to undergo checks is determined by a bitmask:

    Note

    Setting wsrep_sync_wait to 1 is the equivalent of setting the deprecated wsrep_causal_reads to ON.
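For example, assuming bit 1 covers read statements and bit 2 covers UPDATE and DELETE statements on your version, a session could enable both checks with:

mysql> SET SESSION wsrep_sync_wait=3;\n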

    See also

    MySQL wsrep option: wsrep_sync_wait

    "},{"location":"wsrep-system-index.html#wsrep_trx_fragment_size","title":"wsrep_trx_fragment_size","text":"Option Description Command Line: --wsrep-trx-fragment-size Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: 0

Defines the streaming replication fragment size. The fragment size is measured in the unit defined by wsrep_trx_fragment_unit. The minimum value is 0 and the maximum value is 2147483647.

    As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.

mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| statements                   |\n+------------------------------+\n| @@wsrep_trx_fragment_size    |\n+------------------------------+\n| 3                            |\n+------------------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_size=5) */ @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_size    |\n+==============================+\n| 5                            |\n+------------------------------+\n

    You can also use set_var() in a data manipulation language (DML) statement. This ability is useful when streaming large statements within a transaction.

    node1> BEGIN;\nQuery OK, 0 rows affected (0.00 sec)\n\nnode1> INSERT /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ INTO t1 SELECT * FROM t1; \nQuery OK, 65536 rows affected (15.15 sec)\nRecords: 65536 Duplicates: 0 Warnings: 0\n\nnode1> UPDATE /*+SET_VAR(wsrep_trx_fragment_size = 100)*/ t1 SET i=2;\nQuery OK, 131072 rows affected (1 min 35.93 sec)\nRows matched: 131072 Changed: 131072 Warnings: 0\n\nnode2> SET SESSION TRANSACTION_ISOLATION = 'READ-UNCOMMITTED';\nQuery OK, 0 rows affected (0.00 sec)\n\nnode2> SELECT * FROM t1 LIMIT 5;\n+---+\n| i |\n+===+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\n| 2 |\n+---+\nnode1> DELETE  /*+SET_VAR(wsrep_trx_fragment_size = 10000)*/ FROM t1;\nQuery OK, 131072 rows affected (15.09 sec)\n
    "},{"location":"wsrep-system-index.html#wsrep_trx_fragment_unit","title":"wsrep_trx_fragment_unit","text":"Option Description Command Line: --wsrep-trx-fragment-unit Config File: Yes Scope: Global, Session Dynamic: Yes Default Value: \u201cbytes\u201d

    Defines the type of measure for the wsrep_trx_fragment_size. The possible values are: bytes, rows, statements.

    As of Percona XtraDB Cluster for MySQL 8.0.26-16, you can update the variable with a set_var hint.

    mysql> SELECT @@wsrep_trx_fragment_unit; SELECT @@wsrep_trx_fragment_size;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| statements                   |\n+------------------------------+\n| @@wsrep_trx_fragment_size    |\n+------------------------------+\n| 3                            |\n+------------------------------+\n
    mysql> SELECT /*+ SET_VAR(wsrep_trx_fragment_unit=rows) */ @@wsrep_trx_fragment_unit;\n
    Expected output
    +------------------------------+\n| @@wsrep_trx_fragment_unit    |\n+==============================+\n| rows                         |\n+------------------------------+\n
    "},{"location":"wsrep-system-index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"xtrabackup-sst.html","title":"Percona XtraBackup SST configuration","text":"

    Percona XtraBackup SST works in two stages:

1. First it identifies the type of data transfer based on the presence of the xtrabackup_ist file on the joiner node.

    2. Then it starts data transfer. In case of SST, it empties the data directory except for some files (galera.cache, sst_in_progress, grastate.dat) and then proceeds with SST.

      In case of IST, it proceeds as before.

    "},{"location":"xtrabackup-sst.html#sst-options","title":"SST options","text":"

    The following options specific to SST can be used in my.cnf under [sst].

    Note

    "},{"location":"xtrabackup-sst.html#streamfmt","title":"streamfmt","text":"Parameter Description Values: xbstream Default: xbstream Match: Yes

    Used to specify the Percona XtraBackup streaming format. The only option is the xbstream format. SST fails and generates an error when another format, such as tar, is used.

    For more information about the xbstream format, see The xbstream Binary.

    "},{"location":"xtrabackup-sst.html#transferfmt","title":"transferfmt","text":"Parameter Description Values: socat, nc Default: socat Match: Yes

    Used to specify the data transfer format. The recommended value is the default transferfmt=socat because it allows for socket options, such as transfer buffer sizes. For more information, see socat(1).

    Note

    Using transferfmt=nc does not support the SSL-based encryption mode (value 4 for the encrypt option).

    "},{"location":"xtrabackup-sst.html#ssl-ca","title":"ssl-ca","text":"

    Example: ssl-ca=/etc/ssl/certs/mycert.crt

    Specifies the absolute path to the certificate authority (CA) file for socat encryption based on OpenSSL.

    "},{"location":"xtrabackup-sst.html#ssl-cert","title":"ssl-cert","text":"

    Example: ssl-cert=/etc/ssl/certs/mycert.pem

    Specifies the full path to the certificate file in the PEM format for socat encryption based on OpenSSL.

    Note

    For more information about ssl-ca and ssl-cert, see https://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The ssl-ca is essentially a self-signed certificate in that example, and ssl-cert is the PEM file generated after concatenation of the key and the certificate generated earlier. The names of options were chosen to be compatible with socat parameter names as well as with MySQL\u2019s SSL authentication. For testing you can also download certificates from launchpad.

    Note

    Irrespective of what is shown in the example, you can use the same .crt and .pem files on all nodes and it will work, since there is no server-client paradigm here, but rather a cluster with homogeneous nodes.

    "},{"location":"xtrabackup-sst.html#ssl-key","title":"ssl-key","text":"

    Example: ssl-key=/etc/ssl/keys/key.pem

    Used to specify the full path to the private key in PEM format for socat encryption based on OpenSSL.

    "},{"location":"xtrabackup-sst.html#encrypt","title":"encrypt","text":"Parameter Description Values: 0, 4 Default: 4 Match: Yes

    Enables SST encryption mode in Percona XtraBackup:

    Considering that you have all three necessary files:

    [sst]\nencrypt=4\nssl-ca=ca.pem\nssl-cert=server-cert.pem\nssl-key=server-key.pem\n

    For more information, see Encrypting PXC Traffic.

    "},{"location":"xtrabackup-sst.html#sockopt","title":"sockopt","text":"

    Used to specify key/value pairs of socket options, separated by commas, for example:

    [sst]\nsockopt=\"retry=2,interval=3\"\n

    The previous example causes socat to try to connect three times (initial attempt and two retries with a 3-second interval between attempts).

    This option only applies when socat is used (transferfmt=socat). For more information about socket options, see socat (1).

    Note

You can also enable SSL-based compression with sockopt. This can be used instead of the Percona XtraBackup compress option.

    "},{"location":"xtrabackup-sst.html#ncsockopt","title":"ncsockopt","text":"

    Used to specify socket options for the netcat transfer format (transferfmt=nc).

    "},{"location":"xtrabackup-sst.html#progress","title":"progress","text":"

    Values: 1, path/to/file

Used to specify where to write SST progress. If set to 1, it writes to MySQL stderr. Alternatively, you can specify the full path to a file. If this is a FIFO, it must exist and be open on the reader end before the SST script starts writing to it; otherwise, wsrep_sst_xtrabackup will block indefinitely.
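A minimal [sst] sketch that writes progress to MySQL stderr (replace 1 with a file path if preferred):

[sst]\nprogress=1\n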

    Note

A value of 0 is not valid.

    "},{"location":"xtrabackup-sst.html#rebuild","title":"rebuild","text":"Parameter Description Values: 0, 1 Default: 0

Used to enable rebuilding of indexes on the joiner node. This is independent of compaction, though compaction enables it. A rebuild of indexes may be used as an optimization.

    Note

    #1192834 affects this option.

    "},{"location":"xtrabackup-sst.html#time","title":"time","text":"Parameter Description Values: 0, 1 Default: 0

    Enabling this option instruments key stages of backup and restore in SST.

    "},{"location":"xtrabackup-sst.html#rlimit","title":"rlimit","text":"

    Example: rlimit=128k

Used to set a rate limit in bytes. Add a suffix (k, m, g, t) to specify units. For example, 128k is 128 kilobytes. For more information, see pv(1).

    Note

The rate is limited on the donor node. The rationale is to prevent SST from saturating the regular cluster operations of the donor, or to limit the rate for other purposes.

    "},{"location":"xtrabackup-sst.html#use_extra","title":"use_extra","text":"Parameter Description Values: 0, 1 Default: 0

Used to force SST to use the extra_port of the thread pool. Make sure that the thread pool is enabled and the extra_port option is set in my.cnf before you enable this option.
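A configuration sketch, assuming the thread pool is already in use (the extra_port value is hypothetical):

[mysqld]\nthread_handling=pool-of-threads\nextra_port=33333\n\n[sst]\nuse_extra=1\n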

    "},{"location":"xtrabackup-sst.html#cpat","title":"cpat","text":"

    Default: '.\\*\\\\.pem$\\\\|.\\*init\\\\.ok$\\\\|.\\*galera\\\\.cache$\\\\|.\\*sst_in_progress$\\\\|.\\*\\\\.sst$\\\\|.\\*gvwstate\\\\.dat$\\\\|.\\*grastate\\\\.dat$\\\\|.\\*\\\\.err$\\\\|.\\*\\\\.log$\\\\|.\\*RPM_UPGRADE_MARKER$\\\\|.\\*RPM_UPGRADE_HISTORY$'

    Used to define the files that need to be retained in the datadir before running SST, so that the state of the other node can be restored cleanly.

    For example:

    [sst]\ncpat='.*galera\\.cache$\\|.*sst_in_progress$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*\\.xyz$'\n

    Note

    This option can only be used when wsrep_sst_method is set to xtrabackup-v2 (which is the default value).

    "},{"location":"xtrabackup-sst.html#compressor","title":"compressor","text":"Parameter Description Default: not set (disabled) Example: compressor=\u2019zstd -T0 -2\u2019"},{"location":"xtrabackup-sst.html#decompressor","title":"decompressor","text":"Parameter Description Default: not set (disabled) Example: decompressor=\u2019zstd -T0 -dc\u2019

    Stream-based compression and decompression are performed on the stream, in contrast to performing decompression after streaming to disk, which involves additional I/O. The savings are considerable, up to half the I/O on the JOINER node.

You can use any compression utility that works on a stream: gzip, pigz, zstd, and others. The pigz and zstd utilities are multi-threaded. At a minimum, the compressor must be set on the DONOR and the decompressor on the JOINER.

    You must install the related binaries, otherwise SST aborts.

compressor='pigz' decompressor='pigz -dc'

compressor='gzip' decompressor='gzip -dc'

    To revert to the XtraBackup-based compression, set compress under [xtrabackup]. You can define both the compressor and the decompressor, although you will be wasting CPU cycles.

[xtrabackup]\ncompress\n\n# compact has led to some crashes\n
    "},{"location":"xtrabackup-sst.html#inno-backup-opts","title":"inno-backup-opts","text":""},{"location":"xtrabackup-sst.html#inno-apply-opts","title":"inno-apply-opts","text":""},{"location":"xtrabackup-sst.html#inno-move-opts","title":"inno-move-opts","text":"Parameter Description Default: Empty Type: Quoted String

    This group of options is used to pass XtraBackup options for backup, apply, and move stages. The SST script doesn\u2019t alter, tweak, or optimize these options.

    Note

    Although these options are related to XtraBackup SST, they cannot be specified in my.cnf, because they are for passing innobackupex options.

    "},{"location":"xtrabackup-sst.html#sst-initial-timeout","title":"sst-initial-timeout","text":"Parameter Description Default: 100 Unit: seconds

This option is used to configure the initial timeout (in seconds) to receive the first packet via SST. This has been implemented so that if the donor node fails somewhere in the process, the joiner node does not hang and wait forever.

    By default, the joiner node will not wait for more than 100 seconds to get a donor node. The default should be sufficient, however, it is configurable, so you can set it appropriately for your cluster. To disable initial SST timeout, set sst-initial-timeout=0.
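For example, a sketch that raises the timeout to five minutes (use 0 to disable the timeout entirely):

[sst]\nsst-initial-timeout=300\n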

    Note

    If you are using wsrep_sst_donor, and you want the joiner node to strictly wait for donors listed in the variable and not fall back (that is, without a terminating comma at the end), and there is a possibility of all nodes in that variable to be unavailable, disable initial SST timeout or set it to a higher value (maximum threshold that you want the joiner node to wait). You can also disable this option (or set it to a higher value) if you believe all other nodes in the cluster can potentially become unavailable at any point in time (mostly in small clusters) or there is a high network latency or network disturbance (which can cause donor selection to take longer than 100 seconds).

    "},{"location":"xtrabackup-sst.html#sst-idle-timeout","title":"sst-idle-timeout","text":"Parameter Description Default: 120 Unit: seconds

    This option configures the time the SST operation waits on the joiner to receive more data. The size of the joiner\u2019s sst directory is checked for the amount of data received. For example, the directory has received 50MB of data. The operation rechecks the data size after the default value, 120 seconds, has elapsed. If the data size is still 50MB, this operation is aborted. If the data has increased, the operation continues.

    An example of setting the option:

    [sst]\nsst-idle-timeout=0\n
    "},{"location":"xtrabackup-sst.html#tmpdir","title":"tmpdir","text":"Parameter Description Default: Empty Unit: /path/to/tmp/dir

    This option specifies the location for storing the temporary file on a donor node where the transaction log is stored before streaming or copying it to a remote host.

    Note

This option can be used on the joiner node to specify a non-default location to receive temporary SST files. This location must be large enough to hold the contents of the entire database. If tmpdir is empty, the default location datadir/.sst is used.

    The tmpdir option can be set in the following my.cnf groups:

    wsrep_debug

    Specifies whether additional debugging output for the database server error log should be enabled. Disabled by default.

    This option can be set in the following my.cnf groups:

    "},{"location":"xtrabackup-sst.html#encrypt_threads","title":"encrypt_threads","text":"Parameter Description Default: 4

    Specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1). The value is passed using the --encrypt-threads option in XtraBackup.

    This option affects only SST with XtraBackup and should be specified under the [sst] group.

    "},{"location":"xtrabackup-sst.html#backup_threads","title":"backup_threads","text":"Parameter Description Default: 4

    Specifies the number of threads that XtraBackup should use to create backups. See the --parallel option in XtraBackup.

    This option affects only SST with XtraBackup and should be specified under the [sst] group.
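A combined [sst] sketch for this option and encrypt_threads (the thread counts are illustrative):

[sst]\nbackup_threads=8\nencrypt_threads=8\n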

    "},{"location":"xtrabackup-sst.html#xtrabackup-sst-dependencies","title":"XtraBackup SST dependencies","text":"

Each supported version of Percona XtraDB Cluster is tested against a specific version of Percona XtraBackup:

    Other combinations are not guaranteed to work.

    The following are optional dependencies of Percona XtraDB Cluster introduced by wsrep_sst_xtrabackup-v2 (except for obvious and direct dependencies):

    "},{"location":"xtrabackup-sst.html#xtrabackup-based-encryption","title":"XtraBackup-based encryption","text":"

    Settings related to XtraBackup-based Encryption are no longer allowed in PXC 8.0 when used for SST. If it is detected that XtraBackup-based Encryption is enabled, PXC will produce an error.

    The XtraBackup-based Encryption is enabled when you specify any of the following options under [xtrabackup] in my.cnf:

    "},{"location":"xtrabackup-sst.html#memory-allocation","title":"Memory allocation","text":"

    The amount of memory for XtraBackup is defined by the --use-memory option. You can pass it using the inno-apply-opts option under [sst] as follows:

    [sst]\ninno-apply-opts=\"--use-memory=500M\"\n

    If it is not specified, the use-memory option under [xtrabackup] will be used:

    [xtrabackup]\nuse-memory=32M\n

    If neither of the above are specified, the size of the InnoDB memory buffer will be used:

    [mysqld]\ninnodb_buffer_pool_size=24M\n
    "},{"location":"xtrabackup-sst.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"xtradb-cluster-version-numbers.html","title":"Understand version numbers","text":"

    A version number identifies the product release. The product contains the latest Generally Available (GA) features at the time of that release.

The version number 8.0.20-11.2 breaks down as follows: 8.0.20 is the base version, 11 is the minor build, and 2 is the custom build.

    Percona uses semantic version numbering, which follows the pattern of base version, minor build, and an optional custom build. Percona assigns unique, non-negative integers in increasing order for each minor build release. The version number combines the base Percona Server for MySQL version number, the minor build version, and the custom build version, if needed.

    The version numbers for Percona XtraDB Cluster 8.0.20-11.2 define the following information:

    "},{"location":"xtradb-cluster-version-numbers.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"yum.html","title":"Install Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS","text":"

    A list of the supported platforms by products and versions is available in Percona Software and Platform Lifecycle.

    We gather Telemetry data in the Percona packages and Docker images.

    You can install Percona XtraDB Cluster with the following methods:

    This documentation describes using the Percona Software repositories.

    "},{"location":"yum.html#prerequisites","title":"Prerequisites","text":"

Installing Percona XtraDB Cluster requires that you are either logged in as a user with root privileges or able to run commands with sudo.

Percona XtraDB Cluster requires specific ports for communication. Make sure that the following ports are available:

    For information on SELinux, see Enabling SELinux.

    "},{"location":"yum.html#install-from-percona-software-repository","title":"Install from Percona Software Repository","text":"

    For more information on the Percona Software repositories and configuring Percona Repositories with percona-release, see the Percona Software Repositories Documentation.

Install on Red Hat 7 / Install on Red Hat 8 or later
    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release enable-only pxc-80 release\n$ sudo percona-release enable tools release\n$ sudo yum install percona-xtradb-cluster\n
    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup pxc-80\n$ sudo yum install percona-xtradb-cluster\n
    "},{"location":"yum.html#after-installation","title":"After installation","text":"

    After the installation, start the mysql service and find the temporary password using the grep command.

    $ sudo service mysql start\n$ sudo grep 'temporary password' /var/log/mysqld.log\n

    Use the temporary password to log into the server:

    $ mysql -u root -p\n

    Run an ALTER USER statement to change the temporary password, exit the client, and stop the service.

    mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';\nmysql> exit\n$ sudo service mysql stop\n
    "},{"location":"yum.html#next-steps","title":"Next steps","text":"

    Configure the node according to the procedure described in Configuring Nodes for Write-Set Replication.

    "},{"location":"yum.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.29-21.html","title":"Percona XtraDB Cluster 8.0.29-21 (2022-09-12)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.29-21.html#release-highlights","title":"Release Highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.29-21 (2022-08-08) release notes.

    The improvements and bug fixes for MySQL 8.0.29, provided by Oracle, and included in Percona Server for MySQL are the following:

    The Performance Schema tracks if a query was processed on the PRIMARY engine, InnoDB, or a SECONDARY engine, HeatWave. An EXECUTION_ENGINE column, which indicates the engine used, was added to the Performance Schema statement event tables and the sys.processlist and the sys.x$processlist views.

    Added support for the IF NOT EXISTS option for the CREATE FUNCTION, CREATE PROCEDURE, and CREATE TRIGGER statements.

    Added support for ALTER TABLE \u2026 DROP COLUMN ALGORITHM=INSTANT.

    An anonymous user with the PROCESS privilege was unable to select processlist table rows.

    Find the full list of bug fixes and changes in the MySQL 8.0.29 Release Notes.

    Note

Percona Server for MySQL has changed the default for the supported DDL column operations to ALGORITHM=INPLACE. This change fixes the corruption issue with INSTANT ADD/DROP COLUMNS (find more details in PS-8292).

    In MySQL 8.0.29, the default setting for supported DDL operations is ALGORITHM=INSTANT. You can explicitly specify ALGORITHM=INSTANT in DDL column operations.

    "},{"location":"release-notes/8.0.29-21.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/8.0.29-21.html#packaging-notes","title":"Packaging Notes","text":"

    Debian 9 is no longer supported.

    "},{"location":"release-notes/8.0.29-21.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/8.0.29-21.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.30-22.html","title":"Percona XtraDB Cluster 8.0.30-22.md (2022-12-28)","text":"Release date December 28, 2022 Install instructions Install Percona XtraDB Cluster Download this version Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    For paid support, managed services or consulting services, contact Percona Sales.

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.30-22.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.30-22 (2022-11-21) release notes.

    Note

    The following Percona Server for MySQL 8.0.30 features are not supported in this version of Percona XtraDB Cluster:

    The features will be supported in the next version of Percona XtraDB Cluster.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.30 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.30 release notes.

    "},{"location":"release-notes/8.0.30-22.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.30-22.html#platform-support","title":"Platform support","text":""},{"location":"release-notes/8.0.30-22.html#useful-links","title":"Useful links","text":""},{"location":"release-notes/8.0.30-22.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.31-23.2.html","title":"Percona XtraDB Cluster 8.0.31-23.2 (2023-04-04)","text":"Release date April 04, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.31-23.2.html#release-highlights","title":"Release highlights","text":"

This release of Percona XtraDB Cluster 8.0.31-23 includes the fix for the security vulnerability CVE-2022-25834 (PXB-2977).

    "},{"location":"release-notes/8.0.31-23.2.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.31-23.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.31-23.html","title":"Percona XtraDB Cluster 8.0.31-23 (2023-03-14)","text":"Release date 2024-04-03 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.31-23.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.31-23 (2022-11-21) release notes.

    This release adds the following feature in tech preview:

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.31 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.31 Release Notes.

    "},{"location":"release-notes/8.0.31-23.html#new-features","title":"New Features","text":""},{"location":"release-notes/8.0.31-23.html#improvement","title":"Improvement","text":""},{"location":"release-notes/8.0.31-23.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.31-23.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.31-23.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.32-24.2.html","title":"Percona XtraDB Cluster 8.0.32-24.2 (2023-05-24)","text":"Release date May 24, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.32-24.2.html#release-highlights","title":"Release highlights","text":"

    This release of Percona XtraDB Cluster 8.0.32-24 includes the fix for PXC-4211.

    "},{"location":"release-notes/8.0.32-24.2.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.32-24.2.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.32-24.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.32-24.html","title":"Percona XtraDB Cluster 8.0.32-24 (2023-04-18)","text":"Release date April 18, 2023 Install instructions Install Percona XtraDB Cluster

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.32-24.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.32-24 (2023-03-20) release notes.

    Percona decided to revert the following MySQL bug fix:

    The data and the GTIDs backed up by mysqldump were inconsistent when the options --single-transaction and --set-gtid-purged=ON were both used. It was because in between the transaction started by mysqldump and the fetching of GTID_EXECUTED, GTIDs on the server could have increased already. With this fixed, a FLUSH TABLES WITH READ LOCK is performed before the fetching of GTID_EXECUTED to ensure its value is consistent with the snapshot taken by mysqldump.

The MySQL fix also added a requirement for the RELOAD privilege when using --single-transaction and executing FLUSH TABLES WITH READ LOCK. (MySQL bug #109701, MySQL bug #105761)

    The Percona Server version of the mysqldump utility, in some modes, can be used with MySQL Server. This utility provides a temporary workaround for the \u201cadditional RELOAD privilege\u201d limitation introduced by Oracle MySQL Server 8.0.32.

    For more information, see the Percona Performance Blog A Workaround for the \u201cRELOAD/FLUSH_TABLES privilege required\u201d Problem When Using Oracle mysqldump 8.0.32.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.32 and included in Percona Server for MySQL are the following:

    "},{"location":"release-notes/8.0.32-24.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.32-24.html#useful-links","title":"Useful links","text":"

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.32-24.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.33-25.html","title":"Percona XtraDB Cluster 8.0.33-25 (2023-08-02)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.33-25.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.33-25 (2023-06-15) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.33 and included in Percona XtraDB Cluster are the following:

The support for user-defined collations will be removed in a future release of MySQL.

    Find the full list of bug fixes and changes in the MySQL 8.0.33 Release Notes.

    "},{"location":"release-notes/8.0.33-25.html#new-features","title":"New features","text":""},{"location":"release-notes/8.0.33-25.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.33-25.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.33-25.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.33-25.upd.html","title":"Percona XtraDB Cluster 8.0.33-25 Update (2023-08-25)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.33-25.upd.html#known-issues","title":"Known issues","text":"

    If you use Galera Arbitrator (garbd), we recommend that you do not upgrade to 8.0.33 because garbd-8.0.33 may cause synchronization issues and extensive usage of CPU resources.

If you already upgraded to garbd-8.0.33, we recommend downgrading to garbd-8.0.32-24-2 by performing the following steps:

    "},{"location":"release-notes/8.0.33-25.upd.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now

    "},{"location":"release-notes/8.0.33-25.upd.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.34-26.html","title":"Percona XtraDB Cluster 8.0.34-26 (2023-11-01)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.34-26.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.34-26 (2023-09-26) release notes.

    Percona XtraDB Cluster implements telemetry that fills in the gaps in our understanding of how you use Percona XtraDB Cluster to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the Telemetry on Percona XtraDB Cluster document.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.34 and included in Percona XtraDB Cluster are the following:

    "},{"location":"release-notes/8.0.34-26.html#deprecations-and-removals","title":"Deprecations and removals","text":"

    Find the full list of bug fixes and changes in the MySQL 8.0.34 Release Notes.

    "},{"location":"release-notes/8.0.34-26.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.34-26.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.34-26.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.35-27.html","title":"Percona XtraDB Cluster 8.0.35-27 (2024-01-17)","text":"

    Get started with Quickstart Guide for Percona XtraDB Cluster.

    Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.35-27.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.35-27 (2023-12-27) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.35 and included in Percona XtraDB Cluster are the following:

    "},{"location":"release-notes/8.0.35-27.html#deprecations","title":"Deprecations","text":"

    A future release may remove deprecated variables and options. Using these deprecated items may produce a warning. We recommend migrating away from deprecated variables and options as soon as possible.

    This release deprecates the following variables and options:

    Find the full list of bug fixes and changes in the MySQL 8.0.35 Release Notes.

    "},{"location":"release-notes/8.0.35-27.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.35-27.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.35-27.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/8.0.36-28.html","title":"Percona XtraDB Cluster 8.0.36-28 (2024-04-03)","text":"

    Get started with Quickstart Guide for Percona XtraDB Cluster.

    Percona XtraDB Cluster supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/8.0.36-28.html#release-highlights","title":"Release highlights","text":"

    Percona XtraDB Cluster is based on Percona Server for MySQL. Find a complete list of improvements and bug fixes in the Percona Server for MySQL 8.0.36-28 (2024-03-04) release notes.

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.36 and included in Percona XtraDB Cluster are the following:

    Find the complete list of bug fixes and changes in the MySQL 8.0.36 Release Notes.

    "},{"location":"release-notes/8.0.36-28.html#bug-fixes","title":"Bug fixes","text":""},{"location":"release-notes/8.0.36-28.html#useful-links","title":"Useful links","text":"

    Install Percona XtraDB Cluster

    The Percona XtraDB Cluster GitHub location

    Download product binaries, packages, and tarballs at Percona Product Downloads

    Contribute to the documentation

    For training, contact Percona Training - Start learning now.

    "},{"location":"release-notes/8.0.36-28.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html","title":"Percona XtraDB Cluster 8.0.18-9.3","text":"

    Percona XtraDB Cluster 8.0.18-9.3 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.18-9 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#known-issues","title":"Known Issues","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html","title":"Percona XtraDB Cluster 8.0.19-10","text":"

    Percona XtraDB Cluster 8.0.19-10 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.19-10 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#known-issues","title":"Known Issues","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.19-10.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html","title":"Percona XtraDB Cluster 8.0.20-11.2","text":"

    This release fixes the security vulnerability CVE-2020-15180.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html","title":"Percona XtraDB Cluster 8.0.20-11.3","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html","title":"Percona XtraDB Cluster 8.0.20-11","text":"

    Percona XtraDB Cluster 8.0.20-11 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.20-11 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.20-11.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html","title":"Percona XtraDB Cluster 8.0.21-12.1","text":"

    Percona XtraDB Cluster 8.0.21-12.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.21-12 for more details on these changes.

    This release implements an inconsistency voting policy. In the best-case scenario, the node with the inconsistent data is aborted and the cluster continues to operate.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html","title":"Percona XtraDB Cluster 8.0.22-13.1","text":"

    Percona XtraDB Cluster 8.0.22-13.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.22-13 for more details on these changes.

    This release fixes the security vulnerability CVE-2021-27928, which is similar to CVE-2020-15180.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html","title":"Percona XtraDB Cluster 8.0.23-14.1","text":"

    Percona XtraDB Cluster 8.0.23-14.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.23-14 for more details on these changes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#improvements","title":"Improvements","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html","title":"Percona XtraDB Cluster 8.0.25-15.1","text":"

    Percona XtraDB Cluster 8.0.25-15.1 includes all of the features and bug fixes available in Percona Server for MySQL. See the corresponding release notes for Percona Server for MySQL 8.0.25-15 for more details on these changes.

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#release-highlights","title":"Release Highlights","text":"

    This release adds a Non-Blocking Operation (NBO) method for online schema changes in Percona XtraDB Cluster. This mode is similar to the Total Order Isolation (TOI) mode in that a data definition language (DDL) statement (for example, ALTER) is executed on all nodes in sync. The difference is that in the NBO mode, the DDL statement acquires a metadata lock that locks the table or schema only at a late stage of the operation, which is a more efficient locking strategy.

    Note that the NBO mode is a Tech Preview feature. We do not recommend that you use this mode in a production environment. For more information, see Non-Blocking Operations (NBO) method for Online Schema Upgrades (OSU).
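
    As a minimal sketch only (not taken from the release notes), the NBO mode can be selected per session through the wsrep_OSU_method variable before issuing the DDL statement; the table name t1 and the column name notes below are hypothetical placeholders:

    -- Select the Non-Blocking Operation method for this session.
    -- TOI remains the default; NBO is a Tech Preview feature.
    SET SESSION wsrep_OSU_method = 'NBO';

    -- Run the online schema change (t1 and notes are example names).
    ALTER TABLE t1 ADD COLUMN notes TEXT;

    -- Return to the default method for subsequent DDL.
    SET SESSION wsrep_OSU_method = 'TOI';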

    The notable changes and bug fixes introduced by Oracle MySQL include the following:

    For more information, see the MySQL 8.0.24 Release Notes and the MySQL 8.0.25 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#new-features","title":"New Features","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html","title":"Percona XtraDB Cluster 8.0.26-16.1","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#release-highlights","title":"Release Highlights","text":"

    The following are a number of the notable fixes for MySQL 8.0.26, provided by Oracle, and included in this release:

    When upgrading from an earlier version to 8.0.26, enable the rpl_semi_sync_source plugin and the rpl_semi_sync_replica plugin only after the upgrade has been completed on all nodes. Enabling these plugins before all of the nodes are upgraded may cause data inconsistency between the nodes.

    For the source, the rpl_semi_sync_master plugin (semisync_master.so library) is the old version and the rpl_semi_sync_source plugin (semisync_source.so library) is the new version.

    For the replica, the rpl_semi_sync_slave plugin (semisync_slave.so library) is the old version and the rpl_semi_sync_replica plugin (semisync_replica.so library) is the new version.
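
    As a hedged illustration (not part of the release notes), once every node has been upgraded, the new-version plugins can be loaded with standard INSTALL PLUGIN statements using the library names listed above and then enabled through their system variables; adjust the library suffix for your platform:

    -- Load the new semi-synchronous replication plugins only after the
    -- upgrade has completed on all nodes (library names as listed above).
    INSTALL PLUGIN rpl_semi_sync_source SONAME 'semisync_source.so';
    INSTALL PLUGIN rpl_semi_sync_replica SONAME 'semisync_replica.so';

    -- Enable them on the source and the replica, respectively.
    SET GLOBAL rpl_semi_sync_source_enabled = ON;
    SET GLOBAL rpl_semi_sync_replica_enabled = ON;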

    For more information, see the MySQL 8.0.26 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#known-issues-unfixed-problems-that-you-should-be-aware-of","title":"Known Issues (unfixed problems that you should be aware of)","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html","title":"Percona XtraDB Cluster 8.0.27-18.1","text":"

    Date: April 11, 2022

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#release-highlights","title":"Release Highlights","text":"

    The following are some of the notable bug fixes for MySQL 8.0.27, provided by Oracle and included in Percona Server for MySQL:

    Find the full list of bug fixes and changes in the MySQL 8.0.27 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html","title":"Percona XtraDB Cluster 8.0.28-19.1 (2022-07-19)","text":"

    Percona XtraDB Cluster (PXC) supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#release-highlights","title":"Release Highlights","text":"

    Improvements and bug fixes introduced by Oracle for MySQL 8.0.28 and included in Percona Server for MySQL are the following:

    Find the full list of bug fixes and changes in the MySQL 8.0.28 Release Notes.

    "},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#bugs-fixed","title":"Bugs Fixed","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#useful-links","title":"Useful Links","text":""},{"location":"release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "},{"location":"release-notes/release-notes_index.html","title":"Percona XtraDB Cluster 8.0 release notes index","text":""},{"location":"release-notes/release-notes_index.html#get-expert-help","title":"Get expert help","text":"

    If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.

    Community Forum Get a Percona Expert

    "}]} \ No newline at end of file diff --git a/8.0/sitemap.xml b/8.0/sitemap.xml index 0afa2a10..2b292065 100644 --- a/8.0/sitemap.xml +++ b/8.0/sitemap.xml @@ -2,412 +2,412 @@ https://docs.percona.com/percona-xtradb-cluster/8.0/index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/add-node.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/apparmor.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/apt.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/bootstrap.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/certification.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/compile.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/configure-cluster-rhel.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/configure-cluster-ubuntu.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/configure-nodes.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/copyright-and-licensing-information.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/crash-recovery.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/data-at-rest-encryption.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/docker.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/encrypt-traffic.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/failover.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/faq.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/garbd-howto.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/gcache-record-set-cache-difference.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/gcache-write-set-cache-encryption.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/get-started-cluster.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/glossary.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/haproxy-config.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/haproxy.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/high-availability.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/install-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/intro.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/limitation.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/load-balance-proxysql.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/monitoring.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/nbo.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/online-schema-upgrade.html - 2024-04-04 + 2024-04-08 daily 
https://docs.percona.com/percona-xtradb-cluster/8.0/performance-schema-instrumentation.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/proxysql-v2.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/quickstart-overview.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/restarting-nodes.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/secure-network.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/security-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/selinux.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/set-up-3nodes-ec2.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/singlebox.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/state-snapshot-transfer.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/strict-mode.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/tarball.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/telemetry.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/threading-model.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/trademark-policy.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/upgrade-from-backup.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/upgrade-guide.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/verify-replication.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/virtual-sandbox.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/wsrep-files-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/wsrep-provider-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/wsrep-status-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/wsrep-system-index.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/xtrabackup-sst.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/xtradb-cluster-version-numbers.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/yum.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.29-21.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.30-22.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.31-23.2.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.31-23.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.32-24.2.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.32-24.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.33-25.html - 2024-04-04 + 2024-04-08 daily 
https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.33-25.upd.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.34-26.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.35-27.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.36-28.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.18-9.3.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.19-10.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.20-11.2.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.20-11.3.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.20-11.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.22-13.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.23-14.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.25-15.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.26-16.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.27-18.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.28-19.1.html - 2024-04-04 + 2024-04-08 daily https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/release-notes_index.html - 2024-04-04 + 2024-04-08 daily \ No newline at end of file diff --git a/8.0/sitemap.xml.gz b/8.0/sitemap.xml.gz index 15772fc0..bb49d6f1 100644 Binary files a/8.0/sitemap.xml.gz and b/8.0/sitemap.xml.gz differ