Installing a Centreon HA 4-nodes cluster

Prerequisites

Understanding

Before applying this procedure, you should have a good knowledge of Linux OS, of Centreon, and of Pacemaker clustering tools in order to have a proper understanding of what is being done.

Installed Centreon platform

A Centreon HA cluster can only be installed on top of an operating Centreon platform. Before following this procedure, it is mandatory that this installation procedure has already been completed and that about 5 GB of free space has been left available on the LVM volume group that carries the MariaDB data directory (/var/lib/mysql mount point by default).

The output of the vgs command must look like the following (the value to pay attention to is VFree):

  VG                    #PV #LV #SN Attr   VSize   VFree 
  centos_centreon-c1      1   5   0 wz--n- <31,00g <5,00g

WARNING: If this prerequisite is not met, the database synchronization method described below won't work.
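
If you want to display only the relevant fields, vgs can be restricted to the volume group name and its free space (a quick check; the volume group name depends on your installation):

vgs --units g -o vg_name,vg_free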

Quorum Device

In order to keep the cluster safe from split-brain issues, a third server is mandatory to resolve the master election in the event of a connection loss. The role of quorum device can be held by a poller of the monitoring platform.

Defining hosts' names and addresses

In this procedure, we will refer to characteristics that are bound to change from one platform to another (such as IP addresses) by the following macros:

  • @CENTRAL_MASTER_IPADDR@: primary central server's IP address
  • @CENTRAL_MASTER_NAME@: primary central server's name (must be identical to hostname -s)
  • @CENTRAL_SLAVE_IPADDR@: secondary central server's IP address
  • @CENTRAL_SLAVE_NAME@: secondary central server's name (must be identical to hostname -s)
  • @DATABASE_MASTER_IPADDR@: primary database server's IP address
  • @DATABASE_MASTER_NAME@: primary database server's name (must be identical to hostname -s)
  • @DATABASE_SLAVE_IPADDR@: secondary database server's IP address
  • @DATABASE_SLAVE_NAME@: secondary database server's name (must be identical to hostname -s)
  • @QDEVICE_IPADDR@: quorum device's IP address
  • @QDEVICE_NAME@: quorum device's name (must be identical to hostname -s)
  • @MARIADB_REPL_USER@: MariaDB replication login (default: centreon-repl)
  • @MARIADB_REPL_PASSWD@: MariaDB replication password
  • @MARIADB_CENTREON_USER@: MariaDB Centreon login (default: centreon)
  • @MARIADB_CENTREON_PASSWD@: MariaDB Centreon password
  • @VIP_IPADDR@: virtual IP address of the cluster
  • @VIP_IFNAME@: network device carrying the cluster's VIP
  • @VIP_CIDR_NETMASK@: subnet mask length in bits (e.g. 24)
  • @VIP_BROADCAST_IPADDR@: cluster's VIP broadcast address
  • @VIP_SQL_IPADDR@: virtual IP address of the SQL cluster
  • @VIP_SQL_IFNAME@: network device carrying the SQL cluster's VIP
  • @VIP_SQL_CIDR_NETMASK@: SQL cluster subnet mask length in bits (e.g. 24)
  • @VIP_SQL_BROADCAST_IPADDR@: SQL cluster's VIP broadcast address
  • @CENTREON_CLUSTER_PASSWD@: hacluster user's password
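
As an optional convenience (our suggestion, not part of the official procedure), since these macros are substituted by hand throughout this document, you may keep the chosen values in a scratch file on each node so they are easy to look up. For example, with placeholder values to adapt:

cat >/root/centreon-ha-values.txt <<'EOF'
CENTRAL_MASTER_IPADDR=10.1.1.10   # example value, adapt to your platform
CENTRAL_MASTER_NAME=central1      # must match "hostname -s" on that node
DATABASE_MASTER_IPADDR=10.1.1.20  # example value
QDEVICE_NAME=poller1              # example value
EOF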

Configuring centreon-broker

Link to cbd service

On a standard Centreon platform, cbd service manages two processes of centreon-broker-daemon (cbd):

  • central-broker-master: also called "central broker" or "SQL broker", redirects the input-output streams from the pollers to the database, the RRD broker, and so on.
  • central-rrd-master: also called "RRD broker", receives the stream from the central broker and updates the RRD binary data files (used to display graphs).

In the context of a Centreon HA cluster, both broker processes will be handled by a separate service, managed by the cluster.

  • central-broker-master, known as the cbd_central_broker resource, linked to the cbd-sql systemd service
  • central-rrd-master, known as the cbd_rrd clone resource, linked to the cbd systemd service, the standard broker service of Centreon.

For everything to work, you will have to unlink central-broker-master from the cbd service by selecting "No" for the "Link to cbd service" parameter in Configuration > Pollers > Broker configuration > central-broker-master, under the General tab.

Double output stream towards RRD

In the event of a cluster switch, you will expect the newly elected master central server to be able to display the metrics graphs, which requires all RRD data files to be up-to-date on both nodes. In order to fit this condition, you will double the central broker output stream and send it to both RRD broker processes. You can configure this in the same menu as above, this time under the Output tab. The parameters that must be changed are:

  • In the first "IPv4" output, replace "localhost" with @CENTRAL_MASTER_IPADDR@ in the "Host to connect to" field:

    Output: IPv4
    Name: centreon-broker-master-rrd
    Connection port: 5670
    Host to connect to: @CENTRAL_MASTER_IPADDR@
    Buffering timeout: 0
    Retry interval: 60

  • Add another "IPv4" output, similar to the first one, named "centreon-broker-slave-rrd" for example, directed towards @CENTRAL_SLAVE_IPADDR@:

    Output: IPv4
    Name: centreon-broker-slave-rrd
    Connection port: 5670
    Host to connect to: @CENTRAL_SLAVE_IPADDR@
    Buffering timeout: 0
    Retry interval: 60

Export the configuration

Once the previous actions have been done, you will have to export the central poller configuration files to apply these changes. Select the central poller and export the configuration with the "Move Export Files" option checked.

All the previous actions have to be applied either to both nodes, or to @CENTRAL_MASTER_NAME@ only, in which case the exported files have to be copied to @CENTRAL_SLAVE_NAME@:

rsync -a /etc/centreon-broker/*json @CENTRAL_SLAVE_IPADDR@:/etc/centreon-broker/

Customizing poller reload command

You may not know it, but the central broker daemon has to be reloaded every time you update your central poller's configuration; hence the "Centreon Broker reload command" parameter in Configuration > Pollers > Central.

As stated above, the centreon-broker processes are split between the cbd (for RRD) and cbd-sql (for the central broker) services. The service that needs to be reloaded is therefore cbd-sql, not cbd anymore. So you will have to set the "Centreon Broker reload command" parameter to service cbd-sql reload.

System settings

Before actually setting the cluster up, some system prerequisites have to be met.

Kernel network tuning

In order to improve the cluster's reliability, and since Centreon HA only supports IPv4, we recommend applying the following kernel settings on all your Centreon servers (including pollers):

cat >> /etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.tcp_retries2 = 3
net.ipv4.tcp_keepalive_time = 200
net.ipv4.tcp_keepalive_probes = 2
net.ipv4.tcp_keepalive_intvl = 2
EOF
systemctl restart network
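
To check that the new values are active without rebooting, you can load the file explicitly and query a couple of settings (a minimal verification, not part of the original procedure):

sysctl -p
sysctl net.ipv4.tcp_retries2 net.ipv4.tcp_keepalive_time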

Name resolution

So that the Centreon HA cluster can stay in operation in the event of a DNS service breakdown, all the cluster nodes must know each other by name without DNS, using /etc/hosts.

cat >/etc/hosts <<"EOF"
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
@CENTRAL_MASTER_IPADDR@ @CENTRAL_MASTER_NAME@
@CENTRAL_SLAVE_IPADDR@ @CENTRAL_SLAVE_NAME@
@DATABASE_MASTER_IPADDR@ @DATABASE_MASTER_NAME@
@DATABASE_SLAVE_IPADDR@ @DATABASE_SLAVE_NAME@
@QDEVICE_IPADDR@ @QDEVICE_NAME@
EOF
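
You can check that each peer resolves correctly from /etc/hosts with getent, for example:

getent hosts @CENTRAL_SLAVE_NAME@
getent hosts @QDEVICE_NAME@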

From here, @CENTRAL_MASTER_NAME@ will be called the "primary server/node" and @CENTRAL_SLAVE_NAME@ the "secondary server/node". This designation is arbitrary; the two nodes will of course be interchangeable once the setup is done.

Installing system packages

Centreon offers a package named centreon-ha, which provides all the files and dependencies required by a Centreon cluster. This package must be installed on every node (except the quorum device):

yum install epel-release
yum install centreon-ha

SSH keys exchange

SSH key-based authentication must be set up so that files and commands can be sent from one node to another by the following UNIX accounts:

  • mysql
  • centreon

There are two ways of exchanging such keys:

  • By using the ssh-copy-id command: this requires being able to log in to the remote host with a password, but it is unsafe for such system accounts to have password authentication enabled. If you choose this method, we advise you to revoke this password afterwards with these commands: passwd -d centreon and passwd -d mysql.
  • By manually copying the public key into ~/.ssh/authorized_keys. This method is safer.

The second method will be documented below.

centreon account

Switch to centreon's bash environment on both nodes:

su - centreon

Then run these commands on both nodes:

ssh-keygen -t ed25519 -a 100
cat ~/.ssh/id_ed25519.pub

Once done, copy the content of the public key file displayed by cat and paste it into ~/.ssh/authorized_keys (which must be created) on the other node, then apply the correct file permissions (still as the centreon user):

chmod 600 ~/.ssh/authorized_keys

The key exchange must be validated by an initial connection from each node to the other, in order to accept and register the peer node's SSH fingerprint (still as the centreon user):

ssh <peer node hostname>
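
To confirm that key-based authentication works without a password prompt, you can also run a one-shot command in batch mode; if it fails, the key exchange is incomplete (the same check applies to the mysql account below):

ssh -o BatchMode=yes <peer node hostname> hostname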

Then exit the centreon session by typing exit or pressing Ctrl-D.

mysql account

For the mysql account, the procedure is slightly different since this user normally has neither a home directory nor the ability to open a shell session. These commands must be run on both nodes as well:

systemctl stop mysql
mkdir /home/mysql
chown mysql: /home/mysql
usermod -d /home/mysql mysql
usermod -s /bin/bash mysql
systemctl start mysql
su - mysql

Once in mysql's bash environment, run these commands on both nodes:

ssh-keygen -t ed25519 -a 100
cat ~/.ssh/id_ed25519.pub

Once done, copy the content of the public key file displayed by cat and paste it into ~/.ssh/authorized_keys (which must be created) on the other node, then apply the correct file permissions (still as the mysql user):

chmod 600 ~/.ssh/authorized_keys

The key exchange must be validated by an initial connection from each node to the other, in order to accept and register the peer node's SSH fingerprint (still as the mysql user):

ssh <peer node hostname>

Then exit the mysql session by typing exit or pressing Ctrl-D.

Configuring the MariaDB databases replication

A master-slave MariaDB cluster will be set up so that everything is synchronized in real time.

Note: unless otherwise stated, each of the following steps has to be run on both database nodes.

Configuring MariaDB

For both optimization and cluster reliability purposes, you need to add these tuning options to the MariaDB configuration in the /etc/my.cnf.d/server.cnf file. By default, the [server] section of this file is empty. Paste these lines (some have to be modified) into this section:

[server]
server-id=1 # SET TO 1 FOR MASTER AND 2 FOR SLAVE
#read_only
log-bin=mysql-bin
binlog-do-db=centreon
binlog-do-db=centreon_storage
innodb_flush_log_at_trx_commit=1
sync_binlog=1
binlog_format=MIXED
slave_compressed_protocol=1
datadir=/var/lib/mysql
pid-file=/var/lib/mysql/mysql.pid

# Tuning standard Centreon
innodb_file_per_table=1
open_files_limit=32000
key_buffer_size=256M
sort_buffer_size=32M
join_buffer_size=4M
thread_cache_size=64
read_buffer_size=512K
read_rnd_buffer_size=256K
max_allowed_packet=64M
# Uncomment for 4 GB RAM
#innodb_buffer_pool_size=512M
# Uncomment for 8 GB RAM
#innodb_buffer_pool_size=1G
# MariaDB strict mode will be supported soon
sql_mode = 'NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'

Important: the value of server-id must be different from one server to the other. The values suggested in the comment (1 for the master, 2 for the slave) are not mandatory but recommended.

Reminder: Don't forget to uncomment the right value for innodb_buffer_pool_size according to your own servers' memory size.

To apply the new configuration, you have to restart the database server:

systemctl restart mysql

Make sure that the restart went well:

systemctl status mysql

Warning: other files in /etc/my.cnf.d/ such as centreon.cnf will be ignored from now on. Any customization will have to be added to server.cnf.
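
To double-check that the new configuration is in effect after the restart, you can query a few variables directly (run on each node; server_id must be 1 on the primary and 2 on the secondary; add -p if a root password is already set):

mysql -e "SHOW VARIABLES WHERE Variable_name IN ('server_id', 'log_bin', 'binlog_format');"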

Securing the database server

To avoid useless exposure of your databases, you should restrict access to them as much as possible. The mysql_secure_installation command will help you apply some basic security principles. Just run it and let it guide you, choosing the recommended answer at every step. We suggest you choose a strong password.

mysql_secure_installation

Creating the centreon MariaDB account

First log in as root on both database servers (using the newly defined password):

mysql -p

Then, on both sides, paste the following SQL commands at the MariaDB prompt to create the application user (default: centreon). Of course, replace the macros first:

CREATE USER '@MARIADB_CENTREON_USER@'@'@DATABASE_SLAVE_IPADDR@' IDENTIFIED BY '@MARIADB_CENTREON_PASSWD@';
GRANT ALL PRIVILEGES ON centreon.* TO '@MARIADB_CENTREON_USER@'@'@DATABASE_SLAVE_IPADDR@';
GRANT ALL PRIVILEGES ON centreon_storage.* TO '@MARIADB_CENTREON_USER@'@'@DATABASE_SLAVE_IPADDR@';

CREATE USER '@MARIADB_CENTREON_USER@'@'@DATABASE_MASTER_IPADDR@' IDENTIFIED BY '@MARIADB_CENTREON_PASSWD@';
GRANT ALL PRIVILEGES ON centreon.* TO '@MARIADB_CENTREON_USER@'@'@DATABASE_MASTER_IPADDR@';
GRANT ALL PRIVILEGES ON centreon_storage.* TO '@MARIADB_CENTREON_USER@'@'@DATABASE_MASTER_IPADDR@';

Optionally, you can allow these privileges to be used from the central cluster. This will make some administration scripts runnable from all nodes.

CREATE USER '@MARIADB_CENTREON_USER@'@'@CENTRAL_SLAVE_IPADDR@' IDENTIFIED BY '@MARIADB_CENTREON_PASSWD@';
GRANT ALL PRIVILEGES ON centreon.* TO '@MARIADB_CENTREON_USER@'@'@CENTRAL_SLAVE_IPADDR@';
GRANT ALL PRIVILEGES ON centreon_storage.* TO '@MARIADB_CENTREON_USER@'@'@CENTRAL_SLAVE_IPADDR@';

CREATE USER '@MARIADB_CENTREON_USER@'@'@CENTRAL_MASTER_IPADDR@' IDENTIFIED BY '@MARIADB_CENTREON_PASSWD@';
GRANT ALL PRIVILEGES ON centreon.* TO '@MARIADB_CENTREON_USER@'@'@CENTRAL_MASTER_IPADDR@';
GRANT ALL PRIVILEGES ON centreon_storage.* TO '@MARIADB_CENTREON_USER@'@'@CENTRAL_MASTER_IPADDR@';

When upgrading to centreon-ha from an existing Centreon platform or an OVA/OVF VM deployment, update the '@MARIADB_CENTREON_USER@'@'localhost' user's password:

ALTER USER '@MARIADB_CENTREON_USER@'@'localhost' IDENTIFIED BY '@MARIADB_CENTREON_PASSWD@'; 
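
Still at the MariaDB prompt, you can list the accounts just created to make sure no macro was left unsubstituted (a simple sanity check):

SELECT User, Host FROM mysql.user WHERE User = '@MARIADB_CENTREON_USER@';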

Creating the MariaDB replication account

Still in the same prompt, create the replication user (default: centreon-repl):

GRANT SHUTDOWN, PROCESS, RELOAD, SUPER, SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* 
TO '@MARIADB_REPL_USER@'@'localhost' IDENTIFIED BY '@MARIADB_REPL_PASSWD@';

GRANT SHUTDOWN, PROCESS, RELOAD, SUPER, SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* 
TO '@MARIADB_REPL_USER@'@'@DATABASE_SLAVE_IPADDR@' IDENTIFIED BY '@MARIADB_REPL_PASSWD@';

GRANT SHUTDOWN, PROCESS, RELOAD, SUPER, SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* 
TO '@MARIADB_REPL_USER@'@'@DATABASE_MASTER_IPADDR@' IDENTIFIED BY '@MARIADB_REPL_PASSWD@';

Optionally, you can allow these privileges to be used from the central cluster. This will make some administration scripts runnable from all nodes.

GRANT SHUTDOWN, PROCESS, RELOAD, SUPER, SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* 
TO '@MARIADB_REPL_USER@'@'@CENTRAL_SLAVE_IPADDR@' IDENTIFIED BY '@MARIADB_REPL_PASSWD@';

GRANT SHUTDOWN, PROCESS, RELOAD, SUPER, SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* 
TO '@MARIADB_REPL_USER@'@'@CENTRAL_MASTER_IPADDR@' IDENTIFIED BY '@MARIADB_REPL_PASSWD@';
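
You can verify the privileges of the replication user the same way, for instance for the account used by the peer database node:

SHOW GRANTS FOR '@MARIADB_REPL_USER@'@'@DATABASE_SLAVE_IPADDR@';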

Setting up the binary logs purge jobs

MariaDB binary logs must be purged on both nodes, but not at the same time; therefore the cron job definitions must be set at different times:

  • On the primary database node:
cat >/etc/cron.d/centreon-ha-mysql <<EOF
0 4 * * * root bash /usr/share/centreon-ha/bin/mysql-purge-logs.sh >> /var/log/centreon-ha/mysql-purge.log 2>&1
EOF
  • On the secondary database node:
cat >/etc/cron.d/centreon-ha-mysql <<EOF
30 4 * * * root bash /usr/share/centreon-ha/bin/mysql-purge-logs.sh >> /var/log/centreon-ha/mysql-purge.log 2>&1
EOF
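
To make sure the purge script runs cleanly without waiting for its first scheduled execution, you may launch it once by hand (same command as in the cron definitions):

bash /usr/share/centreon-ha/bin/mysql-purge-logs.sh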

Configuring the MariaDB scripts environment variables

The /etc/centreon-ha/mysql-resources.sh file declares the environment variables that must be configured so that the Centreon HA scripts dedicated to MariaDB can work properly. Assign these variables the values chosen for the macros:

#!/bin/bash

###############################
# Database access credentials #
###############################

DBHOSTNAMEMASTER='@DATABASE_MASTER_NAME@'
DBHOSTNAMESLAVE='@DATABASE_SLAVE_NAME@'
DBREPLUSER='@MARIADB_REPL_USER@'
DBREPLPASSWORD='@MARIADB_REPL_PASSWD@'
DBROOTUSER='@MARIADB_REPL_USER@'
DBROOTPASSWORD='@MARIADB_REPL_PASSWD@'
CENTREON_DB='centreon'
CENTREON_STORAGE_DB='centreon_storage'

###############################

To make sure that all the previous steps have been successful, and that the correct names, logins and passwords have been entered in the configuration bash file, run this command:

/usr/share/centreon-ha/bin/mysql-check-status.sh

The expected output is:

Connection Status '@DATABASE_MASTER_NAME@' [OK]
Connection Status '@DATABASE_SLAVE_NAME@' [OK]
Slave Thread Status [KO]
Error reports:
    No slave (maybe because we cannot check a server).
Position Status [SKIP]
Error reports:
    Skip because we can't identify a unique slave.

What matters here is that the first two connection tests are OK.

Switching to read-only mode

Now that everything is properly configured, enable the read_only mode on both database servers by uncommenting (i.e. removing the # at the beginning of the line) this instruction in the /etc/my.cnf.d/server.cnf file:

  • Primary node:
[server]
server-id=1
read_only
log-bin=mysql-bin
  • Secondary node:
[server]
server-id=2
read_only
log-bin=mysql-bin

Then apply this change by restarting MariaDB on both nodes:

systemctl restart mysql
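
Both servers should now refuse writes from non-privileged accounts. You can confirm the flag on each node (the expected value is 1):

mysql -p -e "SELECT @@read_only;"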

Synchronizing the databases and enabling MariaDB replication

In the process of synchronizing the databases, you will first stop the secondary database process so that its data can be overwritten by the primary node's data.

Run this command on the secondary node:

systemctl stop mysql

It is important to make sure that MariaDB is completely shut down. Run this command and check that it returns no output:

ps -ef | grep mysql[d]

In case one or more processes are still alive, run this other command (it will prompt for the MariaDB root password):

mysqladmin -p shutdown

Once the service is stopped on the secondary database node, you will run the synchronization script from the primary database node:

/usr/share/centreon-ha/bin/mysql-sync-bigdb.sh

This script will perform the following actions:

  • checking that MariaDB is stopped on the secondary node
  • stopping MariaDB on the primary node
  • mounting an LVM snapshot in the volume group that bears /var/lib/mysql (or whatever mount point holds the MariaDB data files)
  • starting MariaDB again on the primary node
  • recording the current position in the binary log
  • disabling the read_only mode on the primary node (this node will now be able to write into its database)
  • synchronizing/overwriting all the data files (except for the mysql system database)
  • unmounting the LVM snapshot
  • creating the replication thread that will keep both databases synchronized

This script's output is very verbose and you can't expect to understand everything, so to make sure it went well, focus on the last lines of its output and check that they look like this:

Umount and Delete LVM snapshot
  Logical volume "dbbackupdatadir" successfully removed
Start MySQL Slave
Start Replication
Id  User    Host    db  Command Time    State   Info    Progress
[variable number of lines]

The important thing to check is that Start MySQL Slave and Start Replication are present and that no errors follow them.

In addition, the output of this command must display only OK results:

/usr/share/centreon-ha/bin/mysql-check-status.sh

The expected output is:

Connection Status '@DATABASE_MASTER_NAME@' [OK]
Connection Status '@DATABASE_SLAVE_NAME@' [OK]
Slave Thread Status [OK]
Position Status [OK]
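
If you need more detail about the replication thread than the check script reports, the standard MariaDB command can be run on the secondary node; look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes in its output:

mysql -p -e "SHOW SLAVE STATUS\G"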

Setting up the Centreon cluster

Note: unless otherwise stated, each of the following steps has to be run on both central nodes (@CENTRAL_MASTER_NAME@ and @CENTRAL_SLAVE_NAME@).

Configuring the file synchronization service

The centreon-central-sync file synchronization service needs the IP address of the peer node to be entered in its configuration file (/etc/centreon-ha/centreon_central_sync.pm).

So on the @CENTRAL_MASTER_NAME@ server, the configuration file must look like:

our %centreon_central_sync_config = (
    peer_addr => "@CENTRAL_SLAVE_IPADDR@"
);
1;

And on the @CENTRAL_SLAVE_NAME@:

our %centreon_central_sync_config = (
    peer_addr => "@CENTRAL_MASTER_IPADDR@"
);
1;

Removing legacy Centreon cron jobs

In a high-availability setup, the gorgone daemon manages all cron-based scheduled tasks. To avoid duplicate executions on both nodes, remove all the Centreon-related cron files from the /etc/cron.d/ directory:

rm /etc/cron.d/centreon
rm /etc/cron.d/centstorage
rm /etc/cron.d/centreon-auto-disco

Permission modifications

The permissions of the /var/log/centreon-engine and /tmp/centreon-autodisco directories have to be modified.

In a clustered setup, this is required for the file synchronization and the discovery scheduled task to be fully functional.

  • Files synchronization
chmod 775 /var/log/centreon-engine/
mkdir /var/log/centreon-engine/archives
chown centreon-engine: /var/log/centreon-engine/archives
chmod 775 /var/log/centreon-engine/archives/
chmod 664 /var/log/centreon-engine/*
chmod 664 /var/log/centreon-engine/archives/*
  • Services discovery
mkdir /tmp/centreon-autodisco/
chown apache: /tmp/centreon-autodisco/
chmod 775 /tmp/centreon-autodisco/

Stopping and disabling the services

Information: these operations must be applied to all nodes: @CENTRAL_MASTER_NAME@, @CENTRAL_SLAVE_NAME@, @DATABASE_MASTER_NAME@ and @DATABASE_SLAVE_NAME@. The whole Centreon suite is installed as a dependency of centreon-ha; it will not be used on the database nodes and will not cause any trouble.

Centreon's application services won't be launched at boot time anymore; they will be managed by the clustering tools. These services must therefore be stopped and disabled:

systemctl stop centengine snmptrapd centreontrapd gorgoned cbd httpd24-httpd centreon mysql
systemctl disable centengine snmptrapd centreontrapd gorgoned cbd httpd24-httpd centreon mysql

By default, the mysql service is enabled both as a systemd unit and as a System V service, so make sure it is disabled:

chkconfig mysql off
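
To confirm that none of these services will start at boot anymore, you can query their state in one command (expected output: disabled, or off for the System V mysql script):

systemctl is-enabled centengine snmptrapd centreontrapd gorgoned cbd httpd24-httpd centreon mysql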

Creating the cluster

Activating the clustering services

First, start the pcsd service on both central nodes:

systemctl start pcsd

Preparing the server that will hold the function of quorum device

You can use one of your pollers to play this role. It must be prepared with the commands below:

yum install pcs corosync-qnetd
systemctl start pcsd.service
systemctl enable pcsd.service
pcs qdevice setup model net --enable --start
pcs qdevice status net --full

Modify the COROSYNC_QNETD_OPTIONS parameter in the /etc/sysconfig/corosync-qnetd file to make sure the service will listen for connections on IPv4 only:

COROSYNC_QNETD_OPTIONS="-4"
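
This change is presumably only taken into account after restarting the daemon:

systemctl restart corosync-qnetd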

Authenticating to the cluster's members

For the sake of simplicity, the hacluster user will be assigned the same password on all the cluster nodes and on @QDEVICE_NAME@:

passwd hacluster

Now that all the nodes and the quorum device server share the same password, run this command on one node only, in order to authenticate on all the hosts taking part in the cluster:

pcs cluster auth \
    "@CENTRAL_MASTER_NAME@" \
    "@CENTRAL_SLAVE_NAME@" \
    "@DATABASE_MASTER_NAME@" \
    "@DATABASE_SLAVE_NAME@" \
    "@QDEVICE_NAME@" \
    -u "hacluster" \
    -p '@CENTREON_CLUSTER_PASSWD@' \
    --force

Creating the cluster

The following command creates the cluster. It must be run only on one of the nodes.

pcs cluster setup \
    --force \
    --name centreon_cluster \
    "@CENTRAL_MASTER_NAME@" \
    "@CENTRAL_SLAVE_NAME@" \
    "@DATABASE_MASTER_NAME@" \
    "@DATABASE_SLAVE_NAME@"

Then enable the cluster services and start pacemaker on both central and database nodes:

systemctl enable pacemaker pcsd corosync
systemctl start pacemaker

And afterwards define these properties only on one node:

pcs property set symmetric-cluster="true"
pcs property set stonith-enabled="false"
pcs resource defaults resource-stickiness="100"

You can now follow the state of the cluster with the crm_mon command, which will display new resources as they appear.

Creating the Quorum Device

Run this command on one of the central nodes:

pcs quorum device add model net \
    host="@QDEVICE_NAME@" \
    algorithm="ffsplit"

Creating the MariaDB cluster resources

All the commands in this section must be executed on one cluster node only; the configuration will be propagated automatically.

Primary & Secondary MySQL Processes

pcs resource create "ms_mysql" \
    ocf:heartbeat:mysql-centreon \
    config="/etc/my.cnf.d/server.cnf" \
    pid="/var/lib/mysql/mysql.pid" \
    datadir="/var/lib/mysql" \
    socket="/var/lib/mysql/mysql.sock" \
    replication_user="@MARIADB_REPL_USER@" \
    replication_passwd='@MARIADB_REPL_PASSWD@' \
    max_slave_lag="15" \
    evict_outdated_slaves="false" \
    binary="/usr/bin/mysqld_safe" \
    test_user="@MARIADB_REPL_USER@" \
    test_passwd="@MARIADB_REPL_PASSWD@" \
    test_table='centreon.host' \
    master

WARNING: the syntax of the following command depends on the Linux distribution you are using.

On CentOS 7:

pcs resource meta ms_mysql-master \
    master-node-max="1" \
    clone_max="2" \
    globally-unique="false" \
    clone-node-max="1" \
    notify="true"

On RHEL:

pcs resource master ms_mysql \
    master-node-max="1" \
    clone_max="2" \
    globally-unique="false" \
    clone-node-max="1" \
    notify="true"

MariaDB Virtual IP Address

pcs resource create vip_mysql \
    ocf:heartbeat:IPaddr2 \
    ip="@VIP_SQL_IPADDR@" \
    nic="@VIP_SQL_IFNAME@" \
    cidr_netmask="@VIP_SQL_CIDR_NETMASK@" \
    broadcast="@VIP_SQL_BROADCAST_IPADDR@" \
    flush_routes="true" \
    meta target-role="stopped" \
    op start interval="0s" timeout="20s" \
    stop interval="0s" timeout="20s" \
    monitor interval="10s" timeout="20s"

Creating the clone resources

Some resources must run on only one node at a time (centengine, gorgone, httpd, ...), but others can run on both (the RRD broker and PHP7). For the latter kind, you will declare clone resources.

Warning: all the commands in this chapter have to be run only once, on the central node of your choice.

PHP7 resource
pcs resource create "php7" \
    systemd:rh-php72-php-fpm \
    meta target-role="stopped" \
    op start interval="0s" timeout="30s" \
    stop interval="0s" timeout="30s" \
    monitor interval="5s" timeout="30s" \
    clone
RRD broker resource
pcs resource create "cbd_rrd" \
    systemd:cbd \
    meta target-role="stopped" \
    op start interval="0s" timeout="90s" \
    stop interval="0s" timeout="90s" \
    monitor interval="20s" timeout="30s" \
    clone

Creating the centreon resource group

Web VIP address
pcs resource create vip \
    ocf:heartbeat:IPaddr2 \
    ip="@VIP_IPADDR@" \
    nic="@VIP_IFNAME@" \
    cidr_netmask="@VIP_CIDR_NETMASK@" \
    broadcast="@VIP_BROADCAST_IPADDR@" \
    flush_routes="true" \
    meta target-role="stopped" \
    op start interval="0s" timeout="20s" \
    stop interval="0s" timeout="20s" \
    monitor interval="10s" timeout="20s" \
    --group centreon
Httpd service
pcs resource create http \
    systemd:httpd24-httpd \
    meta target-role="stopped" \
    op start interval="0s" timeout="40s" \
    stop interval="0s" timeout="40s" \
    monitor interval="5s" timeout="20s" \
    --group centreon \
    --force
Gorgone service
pcs resource create gorgone \
    systemd:gorgoned \
    meta target-role="stopped" \
    op start interval="0s" timeout="90s" \
    stop interval="0s" timeout="90s" \
    monitor interval="5s" timeout="20s" \
    --group centreon
centreon-central-sync service

This service only exists in the context of Centreon HA. It provides real-time synchronization for configuration files, images, etc.

pcs resource create centreon_central_sync \
    systemd:centreon-central-sync \
    meta target-role="stopped" \
    op start interval="0s" timeout="90s" \
    stop interval="0s" timeout="90s" \
    monitor interval="5s" timeout="20s" \
    --group centreon
SQL Broker
pcs resource create cbd_central_broker \
    systemd:cbd-sql \
    meta target-role="stopped" \
    op start interval="0s" timeout="90s" \
    stop interval="0s" timeout="90s" \
    monitor interval="5s" timeout="30s" \
    --group centreon
Centengine service
pcs resource create centengine \
    systemd:centengine \
    meta multiple-active="stop_start" target-role="stopped" \
    op start interval="0s" timeout="90s" stop interval="0s" timeout="90s" \
    monitor interval="5s" timeout="30s" \
    --group centreon
Centreontrapd service
pcs resource create centreontrapd \
    systemd:centreontrapd \
    meta target-role="stopped" \
    op start interval="0s" timeout="30s" \
    stop interval="0s" timeout="30s" \
    monitor interval="5s" timeout="20s" \
    --group centreon
Snmptrapd service
pcs resource create snmptrapd \
    systemd:snmptrapd \
    meta target-role="stopped" \
    op start interval="0s" timeout="30s" \
    stop interval="0s" timeout="30s" \
    monitor interval="5s" timeout="20s" \
    --group centreon

Resource constraints

When using the 4-node architecture, you must define specific constraints to control where the resources may run.

In order to bind the primary database role to the virtual IP, define a mutual colocation constraint:

pcs constraint colocation add "vip_mysql" with master "ms_mysql-master"
pcs constraint colocation add master "ms_mysql-master" with "vip_mysql"

Create the constraints that prevent the Centreon processes from running on the database nodes and vice versa:

pcs constraint location centreon avoids @DATABASE_MASTER_NAME@=INFINITY @DATABASE_SLAVE_NAME@=INFINITY
pcs constraint location ms_mysql-master avoids @CENTRAL_MASTER_NAME@=INFINITY @CENTRAL_SLAVE_NAME@=INFINITY
pcs constraint location cbd_rrd-clone avoids @DATABASE_MASTER_NAME@=INFINITY @DATABASE_SLAVE_NAME@=INFINITY
pcs constraint location php7-clone avoids @DATABASE_MASTER_NAME@=INFINITY @DATABASE_SLAVE_NAME@=INFINITY

Activate the cluster and check the resources' operating state

Enable resources

pcs resource meta vip target-role="started"
pcs resource meta vip_mysql target-role="started"
pcs resource meta centreontrapd target-role="started"
pcs resource meta snmptrapd target-role="started"
pcs resource meta centengine target-role="started"
pcs resource meta cbd_central_broker target-role="started"
pcs resource meta gorgone target-role="started"
pcs resource meta centreon_central_sync target-role="started"
pcs resource meta http target-role="started"
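
The clone resources (php7-clone and cbd_rrd-clone) were also created with target-role="stopped". If crm_mon does not show them as started, they presumably need to be enabled the same way (our addition, consistent with the expected output below):

pcs resource meta php7-clone target-role="started"
pcs resource meta cbd_rrd-clone target-role="started"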

Checking the resources' states

You can monitor the cluster's resources in real time using the crm_mon command:

[...]
4 nodes configured
21 resources configured

Online: [@CENTRAL_MASTER_NAME@ @CENTRAL_SLAVE_NAME@ @DATABASE_MASTER_NAME@ @DATABASE_SLAVE_NAME@]

Active resources:

 Master/Slave Set: ms_mysql-master [ms_mysql]
     Masters: [@DATABASE_MASTER_NAME@]
     Slaves: [@DATABASE_SLAVE_NAME@]
 Clone Set: php7-clone [php7]
     Started: [@CENTRAL_MASTER_NAME@ @CENTRAL_SLAVE_NAME@]
 Clone Set: cbd_rrd-clone [cbd_rrd]
     Started: [@CENTRAL_MASTER_NAME@ @CENTRAL_SLAVE_NAME@]
 Resource Group: centreon
     vip        (ocf::heartbeat:IPaddr2):       Started @CENTRAL_MASTER_NAME@
     http       (systemd:httpd24-httpd):        Started @CENTRAL_MASTER_NAME@
     gorgone    (systemd:gorgoned):     Started @CENTRAL_MASTER_NAME@
     centreon_central_sync      (systemd:centreon-central-sync):        Started @CENTRAL_MASTER_NAME@
     cbd_central_broker (systemd:cbd-sql):      Started @CENTRAL_MASTER_NAME@
     centengine (systemd:centengine):   Started @CENTRAL_MASTER_NAME@
     centreontrapd      (systemd:centreontrapd):        Started @CENTRAL_MASTER_NAME@
     snmptrapd  (systemd:snmptrapd):    Started @CENTRAL_MASTER_NAME@
 vip_mysql       (ocf::heartbeat:IPaddr2):       Started @DATABASE_MASTER_NAME@

Checking the database replication thread

The MariaDB replication state can be monitored at any time with the mysql-check-status.sh command:

/usr/share/centreon-ha/bin/mysql-check-status.sh

The expected output is:

Connection Status '@DATABASE_MASTER_NAME@' [OK]
Connection Status '@DATABASE_SLAVE_NAME@' [OK]
Slave Thread Status [OK]
Position Status [OK]

It can happen that the replication thread is not running right after installation. Restarting the ms_mysql resource may fix it.

pcs resource restart ms_mysql

Checking the constraints

Normally, the location and colocation constraints created during the setup should be the only constraints displayed by the pcs constraint command:

Location Constraints:
  Resource: cbd_rrd-clone
    Disabled on: @DATABASE_MASTER_NAME@ (score:-INFINITY)
    Disabled on: @DATABASE_SLAVE_NAME@ (score:-INFINITY)
  Resource: centreon
    Disabled on: @DATABASE_MASTER_NAME@ (score:-INFINITY)
    Disabled on: @DATABASE_SLAVE_NAME@ (score:-INFINITY)
  Resource: ms_mysql-master
    Disabled on: @CENTRAL_MASTER_NAME@ (score:-INFINITY)
    Disabled on: @CENTRAL_SLAVE_NAME@ (score:-INFINITY)
  Resource: php7-clone
    Disabled on: @DATABASE_MASTER_NAME@ (score:-INFINITY)
    Disabled on: @DATABASE_SLAVE_NAME@ (score:-INFINITY)
Ordering Constraints:
Colocation Constraints:
  vip_mysql with ms_mysql-master (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master)
  ms_mysql-master with vip_mysql (score:INFINITY) (rsc-role:Master) (with-rsc-role:Started)
Ticket Constraints: