This blog post explains how to upgrade Percona XtraDB Cluster 5.5 to the latest version, 5.6.  We’ll assume a cluster of three nodes, managed via ClusterControl.

We’ll do the upgrade with some downtime, bringing down the whole cluster before updating it.  If you can’t afford downtime, a rolling upgrade is also possible; that operation is more delicate, as you not only need to bring down, update, and restart one node at a time, but also modify the MySQL configuration to disable replication until all nodes have been upgraded.

1. Stop the cluster

First of all, make a full backup of your database with XtraBackup.
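For example, with XtraBackup’s innobackupex wrapper (the user, password, and backup directory below are placeholders; adjust them to your setup):

# innobackupex --user=root --password=yourpassword /root/backups/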

Then, check that all three nodes are in the Synced state.  If they aren’t, wait until they are.
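You can check a node’s state from the MySQL prompt; wsrep_local_state_comment should read Synced on every node:

mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';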

Once all nodes are in sync, shut down the MySQL server on each node and make sure the daemon has actually stopped:

# service mysql stop
# service mysql status
# ps -ef | grep mysql

At this point the cluster is in a consistent state.  If your cluster runs in a virtualized environment (it shouldn’t, for performance reasons), you can also take a VM snapshot of each node now.  Better safe than sorry.

You are now ready to perform the upgrade.  Starting with the first node, do steps 2, 3, and 4.  Then repeat these steps for the second node, and finally for the third.

2. Install the upgrade

Before updating PXC, it is a good idea to update your whole system.  Then remove the previous PXC package, download the new one, and install it:

# yum update
# yum remove 'Percona*'
# yum install Percona-XtraDB-Cluster-56

3. Update the MySQL configuration

You must update your MySQL configuration file, /etc/my.cnf, so that it is compatible with v5.6.  Percona documents the list of changes from Percona Server 5.5 to 5.6.

You can also run

# mysqld --user=mysql --wsrep-provider='none'

to test the validity of your configuration.  If any setting is invalid, the MySQL daemon will print it on screen and exit.  However, some configuration incompatibilities will not show up here.  If nodes refuse to start after you’ve completed the upgrade, check the logs for invalid settings in the configuration.

These are the main configuration changes we found in the transition from v5.5 to v5.6 (a combined example follows the list):

  • You need to set explicit_defaults_for_timestamp=1
  • engine-condition-pushdown is no longer a valid variable; delete the line or comment it out
  • The recommended method for SST in v5.6 is XtraBackup V2.  Set wsrep_sst_method=xtrabackup-v2 in your config file
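Put together, the relevant portion of /etc/my.cnf would look roughly like this sketch:

[mysqld]
explicit_defaults_for_timestamp=1
#engine-condition-pushdown=1
wsrep_sst_method=xtrabackup-v2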

4. Update the MySQL tables

You need to fix your data so that it is compatible with MySQL 5.6.  This is done with the mysql_upgrade script.

First of all, edit /etc/my.cnf and comment out the following WSREP options:

#wsrep_provider=
#wsrep_node_address=
#wsrep_cluster_name=
#wsrep_cluster_address=

Failure to do so will make mysql_upgrade fail without explanation.

Then, run

# mysqld --skip-grant-tables --user=mysql --wsrep-provider='none' &
# mysql_upgrade -uroot -p -h 127.0.0.1

and let it upgrade the database tables to v5.6.  Once mysql_upgrade completes successfully, stop the MySQL daemon:

# service mysql stop

and remember to uncomment the WSREP lines in /etc/my.cnf.

5. Restart the cluster

Once you’ve successfully updated the PXC package, the configuration, and the database tables in all three nodes, you’re good to go and the cluster can be restarted.

Start by bootstrapping the cluster.  On the first node, type

# service mysql bootstrap-pxc

Then start the other nodes.  On the second and third nodes, type

# service mysql start

You can follow the cluster rebuild process from the ClusterControl UI.  A node is marked green when it has successfully joined the cluster and is in sync.  A node is marked orange either when it is joining the cluster and synchronizing (joiner), or when it is already in the cluster and serving as the replication source for another node (donor).  Nodes will therefore switch between green and orange several times.  At the end, all three nodes and the cluster must be marked green: this means that your cluster is up and running.
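You can also verify the cluster state from the MySQL prompt on any node; once every node has joined, wsrep_cluster_size should report 3 and wsrep_local_state_comment should read Synced:

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';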

Load balancing

If you use HAProxy to load balance over the nodes and you notice that after the upgrade some nodes don’t get any connections, you need to correct the health check configuration on those nodes.

HAProxy determines whether a node is eligible for load balancing by checking its WSREP status: it connects to the node’s database and reads the wsrep_local_state status variable.  This can be done by any MySQL user with the proper permissions, e.g. "cmon", the user that operates on behalf of the CMON process to monitor the cluster and display the results in the ClusterControl interface.  By default, however, it is the user "clustercheckuser" with password "clustercheckpassword!" that performs this check.  If load balancing doesn’t work correctly after the upgrade, create this user from the MySQL prompt:

mysql> GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
mysql> FLUSH PRIVILEGES;
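You can then test the health check from the node itself with the clustercheck script shipped with PXC (it prints an HTTP 200 response when the node is synced; the arguments are the check user’s credentials):

# /usr/bin/clustercheck clustercheckuser clustercheckpassword!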

 

UPDATE 21/4/2015: Added details about running mysql_upgrade.