
5. Percona XtraDB Installation Guide

5.1 Requirements

5.2 Installation Procedure

  1. Keep in mind that Percona XtraDB Cluster is essentially MySQL, modified so that it can be used as a multi-master database. It is still MySQL at heart.
  2. Everything you need is also documented in PerconaXtraDBCluster-5.6.22-25.8.pdf for the cluster. The following is an outline of the process that was followed on RedHat 6.6. For RedHat 7.x or higher, refer to the latest Percona documentation.
  3. For specific details, see page 31 of that document: 5.1 Installing Percona XtraDB Cluster on CentOS.
  4. The main commands here to install everything are:
    yum install socat   # note: you may need to add the EPEL repository before installing socat
    yum remove mysql-libs
    yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
    yum install Percona-XtraDB-Cluster-full-56	
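    As a quick sanity check after the install (a minimal sketch; the exact package names depend on the Percona release you pulled in):
    rpm -qa | grep -i percona
    mysqld --version    # run as root; mysqld lives in /usr/sbin
    The first command should list the Percona-XtraDB-Cluster packages, and the second should report a Percona XtraDB Cluster 5.6.x build.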
    
  5. Create /etc/my.cnf on Node 1, the first node used to bootstrap the cluster. You will need the IP addresses of all three nodes; substitute your own IPs in the configuration below.
    [mysqld]
    datadir=/var/lib/mysql
    user=mysql
    #######################
    ####### PERCONA #######
    #######################
    # Path to Galera library
    wsrep_provider=/usr/lib64/libgalera_smm.so
    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address=gcomm://10.8.6.112,10.8.6.114,10.8.6.116
    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW
    # MyISAM storage engine has only experimental support
    default_storage_engine=InnoDB
    # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
    innodb_autoinc_lock_mode=2
    # Node #1 address
    wsrep_node_address=10.8.6.112
    # SST method
    wsrep_sst_method=xtrabackup-v2
    # Cluster name
    wsrep_cluster_name=my_centos_cluster
    # Authentication for SST method
    wsrep_sst_auth="sstuser:s3cret"    
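    Before starting any node, make sure the nodes can reach each other on the ports Galera uses: 3306 (MySQL), 4567 (group communication), 4568 (IST) and 4444 (SST). A minimal sketch, assuming the stock RedHat 6 iptables firewall is in place (adapt to whatever firewall you actually run):
    iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
    service iptables save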
    
  6. Bootstrap Node 1 by running the following. (NOTE: RedHat 7 requires a different command; see the attached PDF or the Percona web site.)
    # /etc/init.d/mysql bootstrap-pxc
  7. Check the status of the server in mysql
    mysql> show status like 'wsrep%';
    and check that wsrep_ready is ON.
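    The full wsrep% listing is long; a couple of other variables worth confirming at this point (a minimal sketch using standard Galera status variables):
    mysql> show status like 'wsrep_cluster_size';
    mysql> show status like 'wsrep_local_state_comment';
    On a freshly bootstrapped single node you would expect a cluster size of 1 and a local state of Synced.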
  8. Create a specific user in MySQL to be used by the Percona replication:
    mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
    mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
    mysql> flush privileges;    
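    If you want to double-check the user and its privileges, a quick sketch:
    mysql> SHOW GRANTS FOR 'sstuser'@'localhost';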
       
    
  9. Now let's create /etc/my.cnf on Node 2.
    
    [mysqld]
    datadir=/var/lib/mysql
    user=mysql
    #######################
    ####### PERCONA #######
    #######################
    # Path to Galera library
    wsrep_provider=/usr/lib64/libgalera_smm.so
    # Galera cache size: determines how many transactions a downed Percona node
    # can catch up on using IST when rejoining the cluster
    wsrep_provider_options="gcache.size=512M"
    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address=gcomm://10.8.6.112,10.8.6.114,10.8.6.116
    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW
    # MyISAM storage engine has only experimental support
    default_storage_engine=InnoDB
    # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
    innodb_autoinc_lock_mode=2
    # Node #2 address
    wsrep_node_address=10.8.6.114
    # SST method
    wsrep_sst_method=xtrabackup-v2
    # Cluster name
    wsrep_cluster_name=my_centos_cluster
    # Authentication for SST method
    wsrep_sst_auth="sstuser:s3cret"
  10. Start MySQL on Node 2 so it joins the cluster:
    [root@dger02 ~]# /etc/init.d/mysql start
    Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher
    ... SUCCESS!     
    
  11. Create /etc/my.cnf on Node 3 as above, but change wsrep_node_address to Node 3's IP address (10.8.6.116 in this example).
  12. Start node 3:
    [root@dger03 ~]# /etc/init.d/mysql start
    Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher
    ... SUCCESS!
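    At this point all three nodes should be members of the cluster. A quick check from any node (a sketch, assuming the example setup above):
    mysql> show status like 'wsrep_cluster_size';
    The value should now be 3. If a node failed to join, check the mysqld error log on that node.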
  13. Test that the cluster is working and that any database replicates. (Note: the mysql system database will not replicate directly because its tables use the MyISAM engine, but DDL statements will be replicated.)
    
    On Node 3, or any node:
    mysql> create database perconatest;
    Query OK, 1 row affected (0.38 sec)
    mysql> use perconatest;
    Database changed
    mysql> create table a(c int primary key not null auto_increment,a varchar(200));
    Query OK, 0 rows affected (0.23 sec)
    mysql> insert into a values(NULL,'ciccio');
    Query OK, 1 row affected (0.22 sec)
    mysql> select * from a;
    +---+--------+
    | c | a      |
    +---+--------+
    | 3 | ciccio |
    +---+--------+
    1 row in set (0.00 sec)
    mysql> insert into a values(NULL,'ciccio2');
    Query OK, 1 row affected (0.31 sec)
    mysql> select * from a;
    +---+---------+
    | c | a       |
    +---+---------+
    | 3 | ciccio  |
    | 6 | ciccio2 |
    +---+---------+
    2 rows in set (0.00 sec)
    mysql>
    
    Then, on Node 1 for example, check that the table is there:
    mysql> select * from a;
    +---+---------+
    | c | a       |
    +---+---------+
    | 3 | ciccio  |
    | 6 | ciccio2 |
    +---+---------+
    2 rows in set (0.00 sec)    
    

Important Tips

5.3 Percona Database Configuration

These steps configure the database section of the Gerrit config file and must be followed once you have completed the installation of the Percona XtraDB cluster for use with Gerrit.

Procedure

When installing Gerrit with Percona XtraDB in an 'n-nodes' configuration, you need to:

  1. Create the reviewdb database only on one node (the other nodes will replicate this).
  2. Install vanilla gerrit on that node or on a node that connects to that database node.
  3. Proceed with the standard installation of GerritMS.
  4. Usually in a GerritMS-Percona configuration, each Gerrit node connects to an individual Percona XtraDB node, often running on the same host as Gerrit. So in the gerrit.config property file, in the database section, you will find localhost as the hostname to connect to.
  5. Optionally, you can maximise database access speed from Gerrit to Percona XtraDB by using connection pooling. To do this:
    • edit the etc/gerrit.config file and
    • add or replace this piece of configuration in the database section:
      
      [database]
      type = mysql
      hostname = localhost
      database = reviewdb
      username = gerrit
      connectionPool = true
      poolLimit = 100
      poolMinIdle = 50
      	
      Depending on the load of the machine you can raise or lower the poolLimit or poolMinIdle properties. Keep in mind that the default maximum number of connections for a MySQL server is 151, so you need to raise that limit if you set poolLimit to a value close to or higher than 150. To raise the maximum number of connections on the MySQL (Percona) server, modify the my.cnf file and add something like:
      
      [mysqld]
      ...
      open_files_limit = 8192  # only if you need to raise the max number of connections to MySQL. Not needed otherwise
      max_connections = 1000   # only if you need to raise the max number of connections to MySQL. Not needed otherwise
      ...
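      After restarting MySQL with the new my.cnf you can confirm the limit took effect (a quick sketch):
      mysql> show variables like 'max_connections';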
          
      
  6. The last step is to modify the Git MultiSite configuration file (/opt/wandisco/git-multisite/replicator/properties/application.properties) for each node that will access a local master Percona database. Replace the following properties, or add them to the bottom of the file:
    gerrit.db.slavemode.sleepTime=0
    gerrit.db.mastermaster.retryOnDeadLocks=true

Note: Since Percona XtraDB cluster is based on MySQL server, the configuration is the same as the one for MySQL server.

5.4 Migrating from MySQL to Percona XtraDB Cluster

Requirements

Migration procedure

Follow these steps to complete the migration to Percona XtraDB Cluster.

  1. If you have not yet produced a dump from the old MySQL database, create it now:
    $ mysqldump -u gerrit -pXXXXXX reviewdb > reviewdb.dmp
  2. If you need to modify the dump file, then make an additional backup copy of the dump you have just produced.
  3. Uninstall MySQL and install Percona XtraDB if you need to do so (follow instructions in Percona XtraDB Installation Guide).
  4. Take a look at the dump file you produced: if all the tables use ENGINE=InnoDB, it is fine as-is. Otherwise you need to edit the dump file (or convert the tables and redo the dump), replacing ENGINE=MyISAM with ENGINE=InnoDB, as sketched below.
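    A quick way to check and, if necessary, convert the dump in place (a sketch only; keep the backup copy of the dump mentioned above and review the result before importing):
    $ grep -c "ENGINE=MyISAM" reviewdb.dmp
    $ sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/g' reviewdb.dmp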
  5. Since the Percona XtraDB cluster is just a modified version of MySQL, you only need to:
    • Connect to a Percona Cluster node.
    • Create the new database and quit the client:
      [gerrit@dger01 ~]$ mysql -u root -pXXXXXXX
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 1172696
      Server version: 5.6.22-72.0-56 Percona XtraDB Cluster (GPL), Release rel72.0, Revision 978, WSREP version 25.8, wsrep_25.8.r4150
      Copyright (c) 2009-2014 Percona LLC and/or its affiliates
      Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
      Oracle is a registered trademark of Oracle Corporation and/or its
      affiliates. Other names may be trademarks of their respective
      owners.
      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
      mysql> create database reviewdb;
      Query OK, 1 row affected (0.32 sec)
      mysql> quit
      Bye    
      
    • Import the old MySQL database into the new Percona XtraDB cluster:
      [gerrit@dger01 ~]$ mysql -u gerrit -pXXXXXX reviewdb < reviewdb.dmp
      Note that the "mysql" client here is the Percona modified version.
    • At this stage the database should already be fully imported on the other Percona nodes, because Percona XtraDB is a replicated active-active cluster; you do not need to import the database on them.
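    • If you want to double-check, connect to one of the other nodes and confirm the tables are present (a sketch; adjust the user and host to your environment):
      [gerrit@dger02 ~]$ mysql -u gerrit -pXXXXXX -e "show tables" reviewdb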

Percona Configuration Options

The default options for most of the Percona settings are generally good. If required, however, various Percona-specific settings can be tuned in the my.cnf file to match the level of load expected on the database.

Also worth noting: Percona provides a configuration wizard that recommends settings based on your answers to a few questions: https://tools.percona.com/wizard

wsrep_provider_options

Many options exist for this; a full list can be browsed here: https://www.percona.com/doc/percona-xtradb-cluster/5.6/wsrep-provider-index.html

The following are some options customers may be particularly interested in:

wsrep_auto_increment_control

This is enabled by default and is what causes the occasional gaps in generated changeIDs/patchset numbers, for example 1, 2, 4, 5, 8, and so on. Currently the only tested configuration in a multi-master environment is with this option left on, but it may be worth investigating whether disabling it would better match the "vanilla Gerrit" experience.
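Should you want to experiment with disabling it (untested in this configuration, so treat this purely as a sketch), it is set like any other wsrep option in the [mysqld] section of my.cnf:

    [mysqld]
    ...
    wsrep_auto_increment_control=OFF
    ...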

wsrep_debug

Sends debug messages to the error_log. Useful when trying to diagnose a problem. Defaults to false.

wsrep_retry_autocommit

The number of times to retry a transaction in the event of a replication conflict. In most cases a transaction can be safely retried automatically. This currently defaults to one, but we have noticed in GerritMS operation that a system under heavy load for several days can still generate occasional database commit failures due to a deadlock caused by replication. Code has been added to all Gerrit database commits to detect this error and retry, but this may be better handled by raising this setting.

wsrep_slave_threads

The number of threads that can apply replication transactions in parallel. By default this is set to one, which is the safest option. If however performance becomes an issue, particularly around database replication, this can be used to increase throughput.

wsrep_sst_donor

The name of the preferred "donor" node in the event that the local node needs to recover via the SST mechanism. Because the donor node's database must enter read-only mode while the local node catches up, you may prefer that the donor is not chosen at random but is instead a specific node.

gcache.size (a wsrep_provider_options setting)

The default value is 128M when gcache.size is not set. This value determines how many transactions a downed Percona node can catch up on using IST when rejoining the cluster. If too many transactions have taken place between a node going down and rejoining the cluster, then SST will be required to synchronise the joining node. This should kick in automatically, and its performance depends on the SST method selected in the my.cnf configuration.
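As an illustration of how the options above fit together, a my.cnf fragment might look like the following (values are purely illustrative, not recommendations; tune them to your own load and topology):

    [mysqld]
    ...
    # Galera cache size used for IST
    wsrep_provider_options="gcache.size=512M"
    # Verbose wsrep logging to the error log
    wsrep_debug=ON
    # Retry autocommit transactions a few times on replication conflicts
    wsrep_retry_autocommit=4
    # Apply replicated transactions with several threads
    wsrep_slave_threads=4
    # Preferred SST donor (its wsrep_node_name, which defaults to the hostname)
    wsrep_sst_donor=dger01
    ...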

More information on the web: