Pandora FMS Documentation: MySQL Replica
1 MySQL Binary Replication model for HA
1.1 Introduction
This configuration is proposed to provide a complete HA environment for Pandora FMS based on an active/passive model. Standard MySQL (not MySQL Cluster) allows a single MASTER (which accepts INSERT/UPDATE operations) and several SLAVES, which only accept read operations. This is used in various environments to build a distributed database model. In Pandora FMS all read/write operations go against the same database server, so that model cannot be used as such. However, replication is also useful to keep a "copy" of the main database, so that if there is a failure you can "promote" the slave to become the master database and work against it.
After a failover, you will need to restore the Master system manually (this is a very delicate process) and transfer all data from the Slave back to the Master.
1.2 Initial Environment
192.168.10.202 (master) -> Master server
192.168.10.203 (slave) -> Slave server
192.168.10.206 (pandora) -> Pandora Server
1.3 Configuring the MySQL Server
1.3.1 Cluster Installation and Configuration
The following packages must be installed on both nodes of the MySQL cluster for it to work properly. In this case Percona Server 5.7 has been chosen. (The installation below was performed on CentOS 7; carry it out on both the Master and Slave nodes.)
a) Install Percona repository
To set up the Percona repository, install the following package as the root user (this requires Internet access from the machine).
Master & Slave# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
Retrieving http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
Preparing...                          ########################################### [100%]
   1:percona-release                  ########################################### [100%]
b) Install Percona Server 5.7 and all the components needed for it to work correctly. Apart from the server, client and libraries, we will also install Percona XtraBackup, which we will use to perform the replication between both nodes:
Master & Slave# yum install Percona-Server-shared-compat-57 Percona-Server-client-57 Percona-Server-server-57 Percona-Server-shared-57 percona-xtrabackup
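As a quick sanity check (not part of the original procedure, just an illustrative verification), you can confirm that the packages landed on both nodes:

Master & Slave# rpm -qa | grep -i percona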
1.3.2 Master server setup and startup
Once the server is installed on both nodes, we proceed to configure the /etc/my.cnf file on the master node:
[mysqld]
# Basic configuration parameters
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0

# Optimization parameters (for a server with 4Gb RAM)
max_allowed_packet = 64M
innodb_buffer_pool_size = 256M
innodb_lock_wait_timeout = 90
innodb_file_per_table
innodb_flush_method = O_DIRECT
innodb_log_file_size = 64M
innodb_log_buffer_size = 16M
innodb_io_capacity = 100
thread_cache_size = 8
max_connections = 100
key_buffer_size=4M
read_buffer_size=128K
read_rnd_buffer_size=128K
sort_buffer_size=128K
join_buffer_size=4M
query_cache_type = 1
query_cache_size = 4M
query_cache_limit = 8M
sql_mode=""
innodb_flush_log_at_trx_commit=1

# Parameters for the correct operation of binary replication.
# Deleting binary logs
expire_logs_days = 3
# Activation of binary logs for replication
log-bin=mysql-bin
sync_binlog=1
# Maximum size of binary logs. The smaller they are, the better the
# synchronization process will be and the less lag there will be.
max_binlog_size = 100M
# Master server ID
server_id=1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
# Here you can define the default password of the client to simplify the following steps
user=root
password=pandora
With this configuration in place, we can start the MySQL server on the master node:
Master# service mysql start
In Percona Server 5.7, when the mysql service is started for the first time, a temporary root password is generated and written to /var/log/mysqld.log. Once we log in to the MySQL server with this password, we can change it to a more convenient one.
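A minimal sketch of that step, assuming the default log location and the 'pandora' password defined in the [client] section above (if the validate_password plugin is active, a stronger password will be required):

Master# grep 'temporary password' /var/log/mysqld.log
Master# mysql -u root -p
Master|mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'pandora';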
In order to allow replication on the slave node, the appropriate grants must be added to the server.
Master|mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave' IDENTIFIED BY 'password';
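This grant (and the MASTER_HOST='master' used later) relies on the hostnames master and slave resolving on both nodes. One simple way to guarantee that, assuming the IPs from section 1.2 (this is not part of the original procedure), is to add them to /etc/hosts on both machines and then verify the grant:

Master & Slave# cat >> /etc/hosts << EOF
192.168.10.202 master
192.168.10.203 slave
EOF
Master|mysql> SHOW GRANTS FOR 'repl'@'slave';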
1.3.3 Master to slave database replication via xtrabackup
First of all, we have to back up the master. Since the database contains hardly any data at this point, the backup used for this initial replication will be quite small.
To perform the backup we will use the xtrabackup tool, passing the --backup parameter, the database user and password (if they are already defined in the [client] section of /etc/my.cnf, we can omit them) and the directory where the backup will be stored, using the --target-dir parameter (/home/backup in the example):
Master# xtrabackup --backup --user=root --password=password --target-dir=/home/backup/
xtrabackup: completed OK!
When it completes, a directory named after the creation date will be created inside the indicated one, in the example /home/backup/2017-11-29_13-11-41.
For the backup to be consistent, the following command must then be run:
Master# xtrabackup --user=root --password=password --prepare --target-dir=/home/backup/2017-11-29_13-11-41
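As an optional check (an assumption about typical xtrabackup behaviour, not taken from the original text), the xtrabackup_checkpoints file inside the backup directory should now report a prepared full backup:

Master# cat /home/backup/2017-11-29_13-11-41/xtrabackup_checkpoints
backup_type = full-prepared
...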
When the backup is ready, the next step is to copy all the information to the Slave server (the backup and /etc/my.cnf). To copy the backup to the slave server we will run the commands shown after the sketch below, bearing in mind that the slave server's datadir directory (/var/lib/mysql in this example) must be empty.
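If MySQL has already been started on the slave at some point, its datadir will not be empty; a minimal sketch for clearing it, assuming nothing in /var/lib/mysql on the slave needs to be kept:

Slave# service mysql stop
Slave# rm -rf /var/lib/mysql/*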
Master# rsync -avpP -e ssh /home/backup/2017-11-29_13-11-41 Slave:/home/backup/
Slave# mv /home/backup/2017-11-29_13-11-41/* /var/lib/mysql/
We make sure that mysql directory permissions are correct:
Slave# chown -R mysql:mysql /var/lib/mysql/
1.3.4 Slave server configuration
The first thing is to set up the MySQL configuration file /etc/my.cnf. This will be a direct copy of the master server's configuration file, with the difference that in this case the parameter is server_id=2.
[mysqld]
# Basic configuration parameters
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0

# Optimization parameters (for a server with 4Gb RAM)
max_allowed_packet = 64M
innodb_buffer_pool_size = 256M
innodb_lock_wait_timeout = 90
innodb_file_per_table
innodb_flush_method = O_DIRECT
innodb_log_file_size = 64M
innodb_log_buffer_size = 16M
innodb_io_capacity = 100
thread_cache_size = 8
max_connections = 100
key_buffer_size=4M
read_buffer_size=128K
read_rnd_buffer_size=128K
sort_buffer_size=128K
join_buffer_size=4M
query_cache_type = 1
query_cache_size = 4M
query_cache_limit = 8M
sql_mode=""
innodb_flush_log_at_trx_commit=1

# Parameters for the correct operation of binary replication.
# Deleting binary logs
expire_logs_days = 3
# Activation of binary logs for replication
log-bin=mysql-bin
sync_binlog=1
# Maximum size of binary logs. The smaller they are, the better the
# synchronization process will be and the less lag there will be.
max_binlog_size = 100M
# Slave server ID
server_id=2

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
# Here you can define the default password of the client to simplify the following steps
user=root
password=pandora

We start the MySQL server on the slave node:

Slave# service mysql start
To start cluster replication, first check the contents of the xtrabackup_binlog_info file, which can be found in the slave server's datadir directory:
Slave# cat /var/lib/mysql/xtrabackup_binlog_info
mysql-bin.000001     380
We connect to MySQL on the slave server and run the following statements, which indicate who the master server is and start the slave. The read_only parameter must be set to 1 on the slave so that no information is accidentally written to the slave node.
Slave|mysql> CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=380;
Slave|mysql> START SLAVE;
Slave|mysql> SET GLOBAL read_only = 1;
If replication has been successfully started, this information will appear on the slave server:
Slave|mysql> SHOW SLAVE STATUS \G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Seconds_Behind_Master: 8
...
Slave_IO_Running and Slave_SQL_Running show the status of the replication threads, while Seconds_Behind_Master shows how many seconds the slave lags behind the master.
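A quick end-to-end test (replication_test is just an illustrative name, not part of the original procedure) is to create a throwaway database on the master and check that it appears on the slave:

Master|mysql> CREATE DATABASE replication_test;
Slave|mysql> SHOW DATABASES LIKE 'replication_test';
Master|mysql> DROP DATABASE replication_test;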
1.4 Pandora DB Installation
Create a new database from the .sql installation files, or load your existing one, on the master node.
Log on to the master server:
mysql> create database pandora;
mysql> use pandora;
mysql> source /tmp/pandoradb.sql;
mysql> source /tmp/pandoradb_data.sql;
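To confirm that the schema and initial data loaded correctly (a simple illustrative check; the exact number of tables depends on the Pandora FMS version), count the tables in the new database:

mysql> SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'pandora';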
1.4.1 Configuring the SQL server for use by the Pandora FMS server
In both servers:
mysql> grant all privileges on pandora.* to pandora@192.168.10.206 identified by 'pandora';
mysql> flush privileges;
Once these permissions have been applied, we should be able to access the Pandora FMS console and start the Pandora FMS server (once the license has been applied correctly).
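To check connectivity from the Pandora FMS server host, a quick sketch assuming the IPs from section 1.2 and the 'pandora'/'pandora' credentials granted above (the lines below refer to the standard database parameters of /etc/pandora/pandora_server.conf, which should point at the master):

pandora# mysql -h 192.168.10.202 -u pandora -ppandora -e "SELECT 1"
# Expected database parameters in /etc/pandora/pandora_server.conf:
dbname pandora
dbuser pandora
dbpass pandora
dbhost 192.168.10.202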
On slave and master servers, check which processes are running with the following SQL command:
mysql> show processlist;
It should show something like:
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                                                                  | Info             |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
| 32 | root        | localhost | NULL | Sleep   |   72 |                                                                        | NULL             |
| 36 | system user |           | NULL | Connect |  906 | Waiting for master to send event                                       | NULL             |
| 37 | system user |           | NULL | Connect |    4 | Has read all relay log; waiting for the slave I/O thread to update it  | NULL             |
| 39 | root        | localhost | NULL | Query   |    0 | NULL                                                                   | show processlist |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
1.5 Switchover
1.5.1 From Slave to Master
In this case the MASTER server is down and the SLAVE remains active but in slave mode and with write protection enabled. To make the transition from Slave to Master, the following SQL commands must be launched on the Slave server.
Slave|mysql> STOP SLAVE;
Slave|mysql> RESET MASTER;
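Since read_only was set to 1 when the slave was configured, it will likely also need to be disabled on the promoted node so that Pandora FMS can write to it (this step follows from the earlier configuration and is not in the original list):

Slave|mysql> SET GLOBAL read_only = 0;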
Your SLAVE server is now working as MASTER. The promoted SLAVE no longer follows the replication log of the old MASTER, which is now out of sync; this means that if your Pandora FMS still points to the old master server, you will get stale information. This is one of the most delicate aspects, and most of the problems stem from it.
The first switchover, in which the official MASTER goes down and the official SLAVE becomes the new master, is not a problem: the systems simply direct their queries to the new master server (the former SLAVE).
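One way to avoid reading stale data, sketched here under the assumption of the default configuration path /etc/pandora/pandora_server.conf and the IPs from section 1.2, is to point the Pandora FMS server at the promoted node and restart it (the console's config.php would need the same change):

pandora# sed -i 's/^dbhost.*/dbhost 192.168.10.203/' /etc/pandora/pandora_server.conf
pandora# service pandora_server restart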
1.5.2 Switchover from the old Master to Slave
The problem is the "second" switchover, when you want the old master to become the official master again.
In this step, you will need to perform the entire process again in order to resynchronize the whole HA model. This means:
1. Stop the Pandora FMS server service.
2. Stop the mysql service on the master node and delete the entire datadir directory.
3. Replicate the slave node's database to the master node (point 1.3.3, with the roles reversed).
4. Stop the mysql service on the slave node.
5. Start the mysql service on the master node (service mysql start).
6. Start slave replication on the slave node (point 1.3.4).
7. Check that it replicates correctly (see the sketch below).
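For step 7, a minimal check on the node that is acting as slave again, using the same indicators described in section 1.3.4:

Slave|mysql> SHOW SLAVE STATUS \G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...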