Share /var/spool directory between several Pandora servers


1 Introduction

The Pandora FMS Dataserver uses the /var/spool/pandora/data_in directory, and all of its contents, to manage the information that it receives from and sends to the software agents.

That directory also needs to be accessible by the Pandora FMS Console, so that the instructions it sends to the agents, whether configuration files or collections, can reach them.

If we have several servers with several consoles, in the default configuration each console will only be able to manage the agents of the server where it is located.

Now, let's suppose that we have several Pandora servers working in a common environment.

[Figure: NFS schema]

The agents of each server will communicate with their assigned Dataserver using the data_in folder. In a multiple-Dataserver architecture with a single console, we will unify agent management by using NFS or GlusterFS to share this pool of common information.


Sharing the pandora_console/attachment folder between the different Consoles is also recommended as it makes collection management easier.


1.1 Which method should I use?

Although both NFS and GlusterFS are able to share the required files, each is better suited to a different environment:

  • If the data is stored on an external server, independent of the Pandora FMS servers, and we want to use the latter as clients, we recommend NFS.
  • If the data is stored on the Pandora FMS servers themselves, or a fault-tolerant schema (at the software level) is required, we recommend GlusterFS.


In HA environments it is mandatory to share the conf, md5, collections and netflow folders of data_in, and we recommend sharing the pandora_console/attachment folder as well. The data_in folder itself must not be shared, unless the Tentacle server is configured for concurrent access to the XML files.
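
As a reference, this is the resulting layout on each Pandora FMS server (a minimal sketch; the attachment path assumes the default console location):

/var/spool/pandora/data_in/               # local, NOT shared (incoming XML data files)
/var/spool/pandora/data_in/conf/          # shared (remote agent configuration files)
/var/spool/pandora/data_in/md5/           # shared (configuration checksums)
/var/spool/pandora/data_in/collections/   # shared (file collections)
/var/spool/pandora/data_in/netflow/       # shared (netflow data)
/var/www/html/pandora_console/attachment/ # shared (recommended)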


2 NFS configuration

2.1 First steps

Install the nfs-utils package on all the systems that will share the directory by NFS:

yum install -y nfs-utils

2.2 Configuration of the NFS server

Warning: it is very important that the NFS server is a separate server from the Pandora FMS ones. If one of them were configured as the NFS server and any error prevented the clients from connecting, the shared files would not be available, causing errors in Pandora FMS. If using a separate server is not possible, GlusterFS should be used instead.


Edit the file /etc/exports, adding the following:

/var/spool/pandora/data_in/conf [CLIENT_IP](rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/collections [CLIENT_IP](rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/md5 [CLIENT_IP](rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/netflow [CLIENT_IP](rw,sync,no_root_squash,no_all_squash)
/var/www/html/pandora_console/attachment [CLIENT_IP](rw,sync,no_root_squash,no_all_squash) 

Where [CLIENT_IP] stands for the IP address of the system with which the resource is going to be shared. For example:
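
Using 192.168.70.10 as an illustrative client address:

/var/spool/pandora/data_in/conf 192.168.70.10(rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/collections 192.168.70.10(rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/md5 192.168.70.10(rw,sync,no_root_squash,no_all_squash)
/var/spool/pandora/data_in/netflow 192.168.70.10(rw,sync,no_root_squash,no_all_squash)
/var/www/html/pandora_console/attachment 192.168.70.10(rw,sync,no_root_squash,no_all_squash)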


If the firewall is enabled on our system, open the required ports:

# CentOS
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload

Once done, we start the services:

# CentOS
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

Configure NFS to start when the system powers on:

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap

To apply any change made to /etc/exports, restart nfs-server:

systemctl restart nfs-server
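
To verify that the directories are being exported, check the active export list (a quick sanity check; the output will depend on your configuration):

exportfs -v
showmount -e localhost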

2.3 Configuration of the NFS clients

Note: if the system doesn't have Apache installed (it is not necessary to install it), add the apache user to /etc/passwd and /etc/group to avoid permission conflicts:

echo "apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin" >> /etc/passwd
echo "apache:x:48:" >> /etc/group

Set the correct folder permissions:

chown pandora:apache /var/spool/pandora/data_in
chmod 770 /var/spool/pandora/data_in

Check that the remote folders can be mounted successfully:

mount -t nfs [NFS_SERVER_IP]:/var/spool/pandora/data_in/conf /var/spool/pandora/data_in/conf
mount -t nfs [NFS_SERVER_IP]:/var/spool/pandora/data_in/md5 /var/spool/pandora/data_in/md5
mount -t nfs [NFS_SERVER_IP]:/var/spool/pandora/data_in/collections /var/spool/pandora/data_in/collections
mount -t nfs [NFS_SERVER_IP]:/var/spool/pandora/data_in/netflow /var/spool/pandora/data_in/netflow
mount -t nfs [NFS_SERVER_IP]:/var/www/html/pandora_console/attachment /var/www/html/pandora_console/attachment

Where [NFS_SERVER_IP] stands for the IP address of the server that provides the NFS service. For example, using 192.168.50.2 as an illustrative address:

mount -t nfs 192.168.50.2:/var/spool/pandora/data_in/conf /var/spool/pandora/data_in/conf
mount -t nfs 192.168.50.2:/var/spool/pandora/data_in/md5 /var/spool/pandora/data_in/md5
mount -t nfs 192.168.50.2:/var/spool/pandora/data_in/collections /var/spool/pandora/data_in/collections
mount -t nfs 192.168.50.2:/var/spool/pandora/data_in/netflow /var/spool/pandora/data_in/netflow
mount -t nfs 192.168.50.2:/var/www/html/pandora_console/attachment /var/www/html/pandora_console/attachment

If the previous commands fail, check:

  • Firewall status.
  • If we are running as root.
  • If the directory where we want to make the mounting exists.
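
If the mounts succeed, they can be confirmed by listing the active NFS mounts (a quick check):

mount | grep /var/spool/pandora/data_in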

If everything is right until now, configure the mounts to be restored automatically after a reboot by editing the file /etc/fstab:

# Add the following lines to the configuration file /etc/fstab
[NFS_SERVER_IP]:/var/spool/pandora/data_in/conf    /var/spool/pandora/data_in/conf   nfs defaults 0 0
[NFS_SERVER_IP]:/var/spool/pandora/data_in/md5    /var/spool/pandora/data_in/md5   nfs defaults 0 0
[NFS_SERVER_IP]:/var/spool/pandora/data_in/collections    /var/spool/pandora/data_in/collections   nfs defaults 0 0
[NFS_SERVER_IP]:/var/spool/pandora/data_in/netflow    /var/spool/pandora/data_in/netflow    nfs defaults 0 0
[NFS_SERVER_IP]:/var/www/html/pandora_console/attachment    /var/www/html/pandora_console/attachment    nfs defaults 0 0

Where [NFS_SERVER_IP] stands for the IP address of the server that provides the NFS service.
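
Before relying on a reboot, the new /etc/fstab entries can be tested in place (mount -a mounts everything listed in fstab that is not already mounted):

umount /var/spool/pandora/data_in/conf   # unmount one of the shares first
mount -a                                 # remount everything from /etc/fstab
mount | grep data_in                     # confirm the NFS mounts are back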

3 GlusterFS configuration

GlusterFS allows sharing Pandora FMS' key directories between the servers, keeping the data available if any of them becomes unreachable. Thanks to this system there will always be an active resource, and the data will remain accessible even if not all servers are working properly.

3.1 Requirements

  • SELinux must be disabled or configured with the proper rules.
  • Firewall must be disabled or configured with the proper rules.
    • Port 24009/tcp must be open.
  • The /etc/hosts file must be configured with all names and IP addresses in all servers.
  • An additional disk with no partitioning must be created in all servers.

3.2 Package installation

Search for the available versions of GlusterFS:

yum search centos-release-gluster

Install the latest LTS version:

yum install centos-release-gluster37
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-fuse

3.3 Creating XFS partitions (bricks)


We will use two servers, referred to as node1 and node2 throughout this guide (example hostnames; substitute your own).


Create a new physical volume using /dev/xvdb disk:

pvcreate /dev/xvdb
   Physical volume "/dev/xvdb" successfully created

Create a volume group in /dev/xvdb:

vgcreate vg_gluster /dev/xvdb
   Volume group "vg_gluster" successfully created

Create a brick1 volume for the XFS bricks in both nodes of the cluster, setting the space assigned to it with the -L parameter:

lvcreate -L 5G -n brick1 vg_gluster
 Logical volume "brick1" created.

Alternatively you can set the space to be assigned using percentages:

lvcreate -l 100%FREE -n brick1 vg_gluster

Configure the filesystem as XFS:

mkfs.xfs /dev/vg_gluster/brick1

Create the mountpoint and mount the XFS brick:

mkdir -p /glusterfs/brick1
mount /dev/vg_gluster/brick1 /glusterfs/brick1

Add the following line to /etc/fstab file:

/dev/vg_gluster/brick1 /glusterfs/brick1 xfs defaults 0 0

Enable and start glusterd.service on both nodes:

systemctl enable glusterd.service --now

From the first GlusterFS node (node1), connect to the second so that the Trusted Pool (Storage Cluster) is created:

gluster peer probe node2
   peer probe: success.

Verify the status:

gluster peer status
  Number of Peers: 1
  Uuid: e528dc23-689c-4306-89cd-1d21a2153057
  State: Peer in Cluster (Connected)

3.4 Creating the HA volume

The XFS partition created earlier (/glusterfs/brick1) will now be used to create a volume replicated between both servers.

Create a subfolder in the /glusterfs/brick1 mount point; it is needed for GlusterFS to operate.

mkdir /glusterfs/brick1/brick

Create a GlusterFS replicated volume:

Warning: only run this command on one of the nodes (in this example, node1).


gluster volume create glustervol1 replica 2 transport tcp node1:/glusterfs/brick1/brick node2:/glusterfs/brick1/brick
  volume create: glustervol1: success: please start the volume to access data
gluster volume start glustervol1
  volume start: glustervol1: success

Verify the GlusterFS volumes:

gluster volume info all
  Volume Name: glustervol1
  Type: Replicate
  Volume ID: 6953a675-f966-4ae5-b458-e210ba8ae463
  Status: Started
  Number of Bricks: 1 x 2 = 2
  Transport-type: tcp
  Options Reconfigured:
   performance.readdir-ahead: on
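
Optionally, check that both bricks are online (node names follow the earlier example):

gluster volume status glustervol1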

3.5 Mounting the volumes with GlusterFS client

Install the required packages:

yum install glusterfs glusterfs-fuse attr -y

Create a folder for Pandora FMS files:

mkdir /pandora_files/


The path /pandora_files/ is only used as an example, and any other folder can be used.


Mount the GlusterFS volume with the client (node1 is one of the example nodes; any node of the pool can be used):

mount -t glusterfs node1:/glustervol1 /pandora_files/

Add the following line to /etc/fstab:

node1:/glustervol1 /pandora_files glusterfs defaults,_netdev 0 0

Once the partition has been mounted, proceed to create the required folders in it:

cd /pandora_files/
mkdir collections md5 conf netflow attachment 

Copy the original folders to the mounted path:

cp -rp /var/spool/pandora/data_in/conf /pandora_files/
cp -rp /var/spool/pandora/data_in/md5 /pandora_files/
cp -rp /var/spool/pandora/data_in/collections /pandora_files/
cp -rp /var/spool/pandora/data_in/netflow /pandora_files/
cp -rp /var/www/html/pandora_console/attachment /pandora_files/

Delete the old folders from both servers:

rm -rf /var/spool/pandora/data_in/conf
rm -rf /var/spool/pandora/data_in/md5
rm -rf /var/spool/pandora/data_in/collections
rm -rf /var/spool/pandora/data_in/netflow
rm -rf /var/www/html/pandora_console/attachment

And create the symlinks in both servers:

ln -s /pandora_files/conf /var/spool/pandora/data_in/
ln -s /pandora_files/md5 /var/spool/pandora/data_in/
ln -s /pandora_files/collections /var/spool/pandora/data_in/
ln -s /pandora_files/netflow /var/spool/pandora/data_in/
ln -s /pandora_files/attachment /var/www/html/pandora_console/
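
The result can be verified by listing the linked directories (the symlinks should point to the shared path):

ls -l /var/spool/pandora/data_in/ | grep pandora_files
ls -l /var/www/html/pandora_console/ | grep pandora_files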


Both servers will now be sharing the key directories, so the process is complete. If the shared volume needs to be expanded, follow the steps shown in the next section of this guide.


3.6 Expanding a volume

It is possible to expand a GlusterFS volume with no downtime by increasing the number of bricks in a volume.

In order to do so, a new brick must be created, following the same steps as before:

lvcreate -L 5G -n brick2 vg_gluster
  Logical volume "brick2" created.

Configure it as XFS:

mkfs.xfs /dev/vg_gluster/brick2

Create a new mount point and mount the new brick:

mkdir -p /glusterfs/brick2
mount /dev/vg_gluster/brick2 /glusterfs/brick2

Add the corresponding line to /etc/fstab:

/dev/vg_gluster/brick2 /glusterfs/brick2 xfs defaults 0 0

Create the folder for the new brick:

mkdir /glusterfs/brick2/brick

Expand the volume:

gluster volume add-brick glustervol1 node1:/glusterfs/brick2/brick node2:/glusterfs/brick2/brick

Verify the volume:

gluster volume info glustervol1
  Volume Name: glustervol1
  Type: Distributed-Replicate
  Volume ID: 6953a675-f966-4ae5-b458-e210ba8ae463
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp

Check the disk usage before the rebalancing:

df -h | grep brick

Rebalance the volume:

gluster volume rebalance glustervol1 start

Check the volume rebalance:

gluster volume rebalance glustervol1 status

Check the disk usage again:

df -h | grep brick

Check the files in the bricks:

ls -l /glusterfs/brick*/brick/

4 Configuring Tentacle Server for NFS concurrent access

If you want to store the agents' XML files on the shared disk (instead of having each server handle its own files locally), Tentacle must be configured on both servers so that the XML files are distributed into separate folders. This prevents concurrency problems when the Dataservers of both Pandora FMS servers process the files.

To that end, create two folders within the directory /var/spool/pandora/data_in:

mkdir /var/spool/pandora/data_in/xml_srv1
mkdir /var/spool/pandora/data_in/xml_srv2

Fix the permissions of both directories:

chown pandora:apache /var/spool/pandora/data_in/xml_srv1
chown pandora:apache /var/spool/pandora/data_in/xml_srv2

Warning: in case you followed the GlusterFS guide, replace /var/spool/pandora/data_in/ with /pandora_files/ in the previous steps, and create the required symlinks:

ln -s /pandora_files/xml_srv1 /var/spool/pandora/data_in/
ln -s /pandora_files/xml_srv2 /var/spool/pandora/data_in/


Edit the TENTACLE_EXT_OPTS value in the file /etc/init.d/tentacle_serverd to set the XML file delivery folder. Tentacle's -i option maps file patterns to destination folders, so the agents' .data files can be routed to the folder of each server (the values below assume the default filter set; the exact default may vary between versions).


In server number 1, it becomes:

TENTACLE_EXT_OPTS="-i.*\.conf:conf;.*\.md5:md5;.*\.zip:collections;.*\.data:xml_srv1"


In server number 2, it becomes:

TENTACLE_EXT_OPTS="-i.*\.conf:conf;.*\.md5:md5;.*\.zip:collections;.*\.data:xml_srv2"


Finally, edit the configuration file of both Pandora FMS servers as follows:

# Pandora FMS server number 1
# incomingdir:  It defines directory where incoming data packets are stored
# You could set directory relative to base path or absolute, starting with /
incomingdir /var/spool/pandora/data_in/xml_srv1

# Pandora FMS server number 2
# incomingdir:  It defines directory where incoming data packets are stored
# You could set directory relative to base path or absolute, starting with /
incomingdir /var/spool/pandora/data_in/xml_srv2

After applying all the indicated changes, restart both the pandora_server and tentacle_serverd services on both servers.
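
For example (assuming both services are registered with systemd, as on CentOS 7, where init.d scripts are wrapped automatically):

systemctl restart pandora_server
systemctl restart tentacle_serverd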
