
Upgrade Centreon HA from Centreon 20.10

This chapter describes how to upgrade your Centreon HA platform from version 20.10 to version 21.10.


Suspend cluster resources management

To avoid a cluster failover during the update, you must unmanage all Centreon resources, as well as MariaDB.

pcs resource unmanage centreon
pcs resource unmanage ms_mysql
pcs resource unmanage php7-clone
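You can then verify that the resources are flagged as unmanaged before going further (the resource names match the ones above; adjust them if your cluster uses different names):

```shell
# Unmanaged resources appear with an "(unmanaged)" flag in the status output
pcs status
```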

Perform a backup

Be sure that you have fully backed up your environment for the following servers:

  • Central server
  • Database server
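As an illustration, a minimal backup of these servers could look like the following; the database names and paths assume a default Centreon setup, so adapt them to your environment:

```shell
# Dump the two Centreon databases (prompts for the root password)
mysqldump -u root -p centreon > /tmp/centreon-$(date +%F).sql
mysqldump -u root -p centreon_storage > /tmp/centreon_storage-$(date +%F).sql

# Archive the main configuration directories
tar czf /tmp/centreon-etc-$(date +%F).tar.gz /etc/centreon /etc/centreon-ha /etc/centreon-broker
```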

Update the RPM signing key

For security reasons, the keys used to sign Centreon RPMs are rotated regularly. The last change occurred on October 14, 2021. When upgrading from an older version, you need to go through the key rotation procedure to remove the old key and install the new one.
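Before running that procedure, you can list the GPG keys currently known to the RPM database to identify the old Centreon key (the key IDs and summaries shown depend on your installation):

```shell
# List installed GPG public keys with their descriptions
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'
```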

Upgrade process

Update the Centreon repository

Run the following commands:

yum install -y

WARNING: to avoid broken dependencies, please refer to the documentation of the additional modules to update the Centreon Business Repositories.

Upgrade PHP

Centreon 21.10 uses PHP 8.0.

First, you need to install the remi repository:

yum install -y yum-utils
yum install -y
yum install -y

Then, enable the PHP 8.0 repository:

yum-config-manager --enable remi-php80
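You can confirm the repository is enabled before proceeding (remi-php80 is the repository id provided by the remi release package installed above):

```shell
# remi-php80 should appear in the list of enabled repositories
yum repolist enabled | grep remi-php80
```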

Upgrade the Centreon solution

Please make sure all users are logged out of the Centreon web interface before starting the upgrade procedure.

Clean yum cache:

yum clean all --enablerepo=*

Then upgrade all the components with the following command:

yum remove centreon-ha
yum update centreon\*
yum install centreon-ha-web centreon-ha-common
mv /etc/centreon-ha/ /etc/centreon-ha/
mv /etc/centreon-ha/ /etc/centreon-ha/

The PHP timezone must be set. Run this command on both central server nodes:

echo "date.timezone = Europe/Paris" >> /etc/php.d/50-centreon.ini

Replace Europe/Paris with your time zone. The list of supported time zones is available in the PHP documentation.
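You can check the value PHP actually picks up (this assumes the php command-line binary is available, which the Centreon packages require):

```shell
# Print the configured timezone as seen by PHP
php -i | grep date.timezone
```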

WARNING: the following commands must be executed on only one node of the cluster.

pcs resource delete php7 --force
pcs resource create "php" \
systemd:php-fpm \
meta target-role="started" \
op start interval="0s" timeout="30s" \
stop interval="0s" timeout="30s" \
monitor interval="5s" timeout="30s" \
clone
Then, to perform the web UI upgrade, follow the official documentation, on the active central node only.

On the passive central node, move the "install" directory to avoid getting the "upgrade" screen in the web UI in the event of a later role switch.

mv /usr/share/centreon/www/install /var/lib/centreon/installs/install-update-YYYY-MM-DD
sudo -u apache /usr/share/centreon/bin/console cache:clear

Removing cron jobs

The RPM upgrade puts the cron jobs back in place. Remove them to avoid concurrent executions:

rm /etc/cron.d/centreon
rm /etc/cron.d/centstorage
rm /etc/cron.d/centreon-auto-disco

Reset the permissions for the centreon_central_sync resource

The RPM upgrade resets the permissions. Restore them using these commands:

chmod 775 /var/log/centreon-engine/
mkdir /var/log/centreon-engine/archives
chown centreon-engine: /var/log/centreon-engine/archives
chmod 775 /var/log/centreon-engine/archives/
find /var/log/centreon-engine/ -type f -exec chmod 664 {} \;
find /usr/share/centreon/www/img/media -type d -exec chmod 775 {} \;
find /usr/share/centreon/www/img/media -type f \( ! -iname ".keep" ! -iname ".htaccess" \) -exec chmod 664 {} \;

Clean broker memory files

WARNING: perform these commands only on the active central node. Before resuming the cluster resources management, clean up all the .memory., .unprocessed. and .queue. files to avoid broker issues:

systemctl stop cbd-sql
rm -rf /var/lib/centreon-broker/central-broker-master.memory*
rm -rf /var/lib/centreon-broker/central-broker-master.queue*
rm -rf /var/lib/centreon-broker/central-broker-master.unprocessed*
systemctl start cbd-sql

Then perform these commands on the passive central node:

rm -rf /var/lib/centreon-broker/central-broker-master.memory*
rm -rf /var/lib/centreon-broker/central-broker-master.queue*
rm -rf /var/lib/centreon-broker/central-broker-master.unprocessed*

Restart Centreon processes

Then restart all the processes on the active central node:

systemctl restart cbd-sql cbd gorgoned centengine

And on the passive central node:

systemctl restart cbd
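You can then verify that the expected processes are running on each node (systemctl is-active prints "active" for each running unit):

```shell
# On the active central node
systemctl is-active cbd-sql cbd gorgoned centengine

# On the passive central node
systemctl is-active cbd
```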

Upgrade of MariaDB Databases

The MariaDB components can now be upgraded.

WARNING: the following commands must be executed first on the active database node. Once the active database node is running MariaDB 10.5, you can upgrade the passive database node.

  1. Stop the mariadb service:

    mysqladmin -p shutdown
  2. Uninstall current version:

    rpm --erase --nodeps --verbose MariaDB-server MariaDB-client MariaDB-shared MariaDB-compat MariaDB-common
  3. Install 10.5 version:

    yum install MariaDB-server-10.5\* MariaDB-client-10.5\* MariaDB-shared-10.5\* MariaDB-compat-10.5\* MariaDB-common-10.5\*
  4. Move the configuration file:

     mv /etc/my.cnf.d/server.cnf.rpmsave /etc/my.cnf.d/server.cnf
  5. Start the mariadb service:

    mysqld_safe &
  6. Launch the MariaDB upgrade process:

    mysql_upgrade -p

You can omit the -p option for the mysqladmin and mysql_upgrade commands if you haven't secured your database server with a password.

Refer to the official documentation if errors occur during this last step.
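Once this step has completed on a node, you can confirm the server version (drop the -p option if no password is set):

```shell
# Should report a MariaDB 10.5.x version string
mysql -p -e "SELECT VERSION();"
```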

Configure MariaDB slave_parallel_mode

Since MariaDB 10.5, slave_parallel_mode no longer defaults to conservative. It is necessary to modify the MySQL configuration by editing /etc/my.cnf.d/server.cnf:

Do this on the 2 central servers in a 2-node HA setup, or on the 2 database servers in a 4-node HA setup.
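Restoring the previous conservative behavior means adding the following setting, assuming it goes under the [server] section of a standard MariaDB server.cnf:

```
[server]
slave_parallel_mode=conservative
```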


Restart MariaDB Replication

The replication thread will be down after the upgrade. To restart it, run this command on the secondary node:

mysqladmin -p shutdown

Verify that the mariadb service is now stopped; the following command must return nothing:

ps -ef | grep mariadb[d]

Once the service is stopped on the secondary node, run the synchronization script from the primary node. The output of this command must display only OK results:

Connection Status '@CENTRAL_MASTER_NAME@' [OK]
Connection Status '@CENTRAL_SLAVE_NAME@' [OK]
Slave Thread Status [OK]
Position Status [OK]

Resuming the cluster resources management

Now that the update is finished, the resources can be managed again:

pcs resource manage centreon
pcs resource manage ms_mysql

The replication thread may not be running right after installation; restarting the ms_mysql resource should fix it. You may also see some failed actions on the resources; cleaning them up should fix that.

pcs resource restart ms_mysql
pcs resource cleanup centreon
pcs resource cleanup ms_mysql

Check cluster's health

You can monitor the cluster's resources in real time using the crm_mon command:

Stack: corosync
Current DC: @CENTRAL_SLAVE_NAME@ (version 1.1.20-5.el7_7.2-3c4c782f70) - partition with quorum
Last updated: Thu Feb 20 13:14:17 2020
Last change: Thu Feb 20 09:25:54 2020 by root via crm_attribute on @CENTRAL_MASTER_NAME@

2 nodes configured
14 resources configured


Active resources:

Master/Slave Set: ms_mysql-master [ms_mysql]
Clone Set: cbd_rrd-clone [cbd_rrd]
Resource Group: centreon
vip (ocf::heartbeat:IPaddr2): Started @CENTRAL_MASTER_NAME@
http (systemd:httpd24-httpd): Started @CENTRAL_MASTER_NAME@
gorgone (systemd:gorgoned): Started @CENTRAL_MASTER_NAME@
centreon_central_sync (systemd:centreon-central-sync): Started @CENTRAL_MASTER_NAME@
centreontrapd (systemd:centreontrapd): Started @CENTRAL_MASTER_NAME@
snmptrapd (systemd:snmptrapd): Started @CENTRAL_MASTER_NAME@
cbd_central_broker (systemd:cbd-sql): Started @CENTRAL_MASTER_NAME@
centengine (systemd:centengine): Started @CENTRAL_MASTER_NAME@
Clone Set: php-clone [php]

Verifying the platform stability

You should now check that everything works fine:

  • Access to the web UI menus.
  • Poller configuration generation + reload and restart method.
  • Schedule immediate check (Central + Pollers) and acknowledge, downtime etc.
  • Move resources or reboot active server and check again that everything is fine.