A service in Pandora FMS is a way to group IT resources based on their features.
A service could be an official website, a CRM system, a support application, or even printers. Services are logical groups which can include hosts, routers, switches, firewalls, CRMs, ERPs, websites and of course, different other services.
In Pandora FMS, services are represented as a group of monitored elements (Modules, Agents or other Services) whose individual status affects in a certain way the global performance of the service provided. To learn more, watch our video tutorial “Service monitoring in Pandora FMS”
Services under Pandora FMS
Basic monitoring in Pandora FMS consists of collecting metrics from different sources and representing them as monitors (modules). Service-based monitoring allows you to group these modules so that, by playing with certain ranges based on failure build-up, you can monitor groups of different types of elements and their relationship within a larger, general service.
In short, service monitoring allows you to check the status of a global service. You will be able to know whether your service is being provided normally (green), degraded (yellow) or not being provided at all (red).
Service monitoring is represented under three concepts: simple, by weight importance and chained by cascade events.
How simple mode works
In this mode it is only necessary to point out which elements are critical and which ones are not. Only elements marked as critical are taken into account in the calculations, and only the critical status of said elements has any value.
- When between 0% and 50% of the critical elements are in critical status, the service will go into warning status.
- When more than 50% of the critical elements are in critical status, the service will go into critical status.
- Router is a critical element.
- Printer is a non-critical element.
- Apache Web Server is a critical element.
- Router in critical status, Printer in critical status, Apache Web Server in normal status. Result: the service is in warning status, since the printer is not critical, the router is in critical status and only represents 50% of the critical elements, and the Apache server is not in critical status, so it does not add value to the evaluation.
- Router in critical status, Apache Web Server in critical status. Result: service in critical status, since more than 50% of the critical elements are in critical status (the printer still adds no value).
- Router in normal status, Apache Web Server in normal status, Printer in critical status. Result: the status of the service is normal, since no key element is in critical status (again, the printer does not add any value).
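The simple-mode rules above can be condensed into a few lines. This is an illustrative Python sketch, not Pandora FMS code; the element tuples and status strings are assumptions made for the example:

```python
# Simple mode: only elements flagged as critical count, and only their
# "critical" status carries weight. Up to 50% of key elements critical
# -> warning; more than 50% -> critical; none -> normal.

def simple_service_status(elements):
    """elements: list of (is_key_element, status) tuples."""
    key = [status for is_key, status in elements if is_key]
    n_critical = sum(1 for status in key if status == "critical")
    if not key or n_critical == 0:
        return "normal"
    if n_critical * 2 > len(key):   # more than 50% of key elements
        return "critical"
    return "warning"                # between 0% and 50% of key elements

# The three scenarios from the example (router and Apache are key
# elements, the printer is not):
print(simple_service_status([(True, "critical"), (False, "critical"), (True, "normal")]))    # warning
print(simple_service_status([(True, "critical"), (False, "critical"), (True, "critical")]))  # critical
print(simple_service_status([(True, "normal"), (False, "critical"), (True, "normal")]))      # normal
```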
How services work according to their weight
The need to monitor services as something “abstract” arises when faced with the following question:
What happens to an application if a non-critical element fails?
To solve these doubts, Pandora FMS provides the service monitoring feature, which helps to:
- Limit the number of received alerts. You will receive alerts about situations that compromise the reliability of the services you provide.
- Track the SLA compliance level.
- Simplify the monitoring display of your infrastructure.
To achieve this, monitor every element that could negatively affect your application.
Through Pandora FMS console, define a service tree in which to indicate both the elements that affect your application, as well as their impact degree.
All elements added to the service trees will correspond to information that is already being monitored, either in the form of modules, specific agents or other services.
To indicate the degree to which the status of each element affects the overall status, a weight-sum system is used: the most important elements (with more weight) are more relevant, driving the overall status of the service into an incorrect state sooner than less important elements (with less weight).
You may monitor a web application balanced through a series of redundant elements. The infrastructure the application is based on is made in this example by the following elements:
- Two HA routers.
- Two HA switches.
- Twenty Apache® web servers.
- Four WebLogic® application servers.
- One MySQL® cluster made up of two storage nodes and two SQL processing nodes.
The goal is to find out whether the web application is working properly; that is, whether end users perceive that the application receives, processes and returns their requests within an acceptable time.
If one of the twenty Apache servers were offline, due to so much redundancy, would it be wise to warn or alert all the employees? What is the rule for alerting?
You may conclude Pandora FMS should only warn if a highly critical element fails (for example, a router) or if several Apache servers are offline at the same time… but how many of them? To solve this, weight values must be assigned to the list of previously described components:
- Switches and routers: 5 points each when in critical status and 3 points in warning status.
- Apache® web servers: 1.2 points each in critical status; warning status is not contemplated.
- WebLogic® application servers: 2 points each in critical status.
- MySQL® cluster nodes: 5 points each in critical status and 3 points in warning status.
In a normal situation, the sum of those weights is zero; that is why, in this example, the warning status threshold must be higher than 4 and the critical status threshold higher than 6:
- An Apache web server is offline (critical status): since everything else is in normal status and adds 0, the total is 1.2. As 1.2 < 4 (the warning threshold), the service is still in OK status.
- A web server and a WebLogic server, both in critical status: the first adds 1.2 points and the second 2.0, for a total of 3.2; it is still lower than 4, so the service remains in OK status and no alerts or actions are needed.
- Now two web servers and a WebLogic server are offline: 2 × 1.2 + 1 × 2 = 4.4; this exceeds the warning threshold, so the service goes into warning status; it is still working and may not require any immediate technical action, but it is obvious there is a problem in the infrastructure.
- To the previous situation, add a router in critical status: it adds 5 points to the weight sum, exceeding the criticality threshold set at 6; the service is in critical status, the service is not working, and immediate technical action is required.
In this last situation, Pandora FMS will alert the corresponding working team (operators, technicians, etc.).
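The weight arithmetic walked through in this example can be sketched as follows. This is illustrative Python, not Pandora FMS code; only the weights and thresholds from the example above are used:

```python
# Weights from the example: routers/switches 5, Apache 1.2, WebLogic 2,
# MySQL cluster nodes 5. Warning threshold: 4. Critical threshold: 6.
WEIGHTS = {"router": 5.0, "switch": 5.0, "apache": 1.2,
           "weblogic": 2.0, "mysql_node": 5.0}
WARNING_THRESHOLD = 4.0
CRITICAL_THRESHOLD = 6.0

def service_status(failed):
    """failed: list of element types currently in critical status."""
    total = sum(WEIGHTS[kind] for kind in failed)
    if total > CRITICAL_THRESHOLD:
        return "critical"
    if total > WARNING_THRESHOLD:
        return "warning"
    return "normal"

# The four scenarios described above:
print(service_status(["apache"]))                                  # normal
print(service_status(["apache", "weblogic"]))                      # normal
print(service_status(["apache", "apache", "weblogic"]))            # warning
print(service_status(["apache", "apache", "weblogic", "router"]))  # critical
```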
You may find more interesting information about service monitoring in the Pandora FMS blog.
A root service is one that is not part of any other service. This logical concept makes monitoring smoother, reducing work queues.
In addition, and based on that, when a service defined in a Pandora FMS node appears as an element of a Metaconsole root service, the Metaconsole server will be the one evaluating it, updating the values stored in the node.
This provides a more efficient distributed logic and allows applying a cascade protection system based on services.
Metaconsole service possibilities have also been extended, allowing other services, modules or agents to be added as service elements. In previous versions, only node services could be added.
Creating a new Service
Pandora FMS server
The Prediction Server component must be enabled and running, and the Pandora FMS Enterprise server version must be installed, to be able to use services.
The services may represent:
- Modules.
- Full agents.
- Other services.
Service values are calculated using the Prediction Server.
Once all the devices are monitored, add within each service all the modules, agents or sub-services needed to monitor the service. For example, to monitor an Online Store service, you may need a module for content, a service that monitors the state of communications, and so on.
To create a new service, click on Services at the Topology Maps menu.
A tree view containing all the available services will be shown.
To create a new service, click on Create service and fill out the form.
Unique name to identify the service.
Service description, a long mandatory text. Said description will appear in the service map, the service table view and the service widget (instead of the name).
Group to which the service belongs, useful to organize it and to apply [[en:documentation:04_using:11_managing_and_administration#Profiles.2C_users.2C_groups_and_ACL|ACL]] restrictions.
Agent to store data
The service saves its data in special data modules (specifically, the prediction modules), so it is necessary to specify an agent to act as the container of said modules and their alarms (see the following steps).
Note: Bear in mind that the interval at which all the service module calculations are performed depends on the interval configured for the container agent.
Mode in which the element weights will be calculated. It may have 2 values:
- Smart: The service's weights and elements will be calculated automatically based on established rules.
- Manual: The service's weights and elements will be indicated manually with fixed values.
- Critical: Weight threshold to declare the service as critical. In smart mode this value will be a percentage. We will explain later how the elements contribute to this value.
- Warning: Weight threshold to declare the service as in warning status. In smart mode, this value will be a percentage. We will explain later how the elements contribute to this value.
Unknown elements as critical
It allows you to indicate that elements in an unknown state contribute their weight as if they were a critical element.
The smart mode is only available from Pandora FMS version 7.0NG 748.
The automatic and simple modes of previous versions will become manual by applying the MR 40 in the version update.
It creates a direct link in the side menu, and services can be filtered in the views based on this criterion.
It activates the silence mode of the service, so it will not generate alerts or events.
Cascade protection enabled
It activates cascade protection over the service elements. These will not generate alerts or events if they belong to a service (or sub-service) that is in a critical state.
Calculate continuous SLA
It activates the creation of SLA and SLA value modules for the current service. If disabled, the dynamically calculated SLA information will not be available, nor will the alerts on SLA compliance for this service. It is used for cases where the number of services required is so high that it can affect performance.
If this option is disabled, once the service has been created, the data history of these modules will be deleted, so information will be lost.
Time period to calculate the effective SLA of the service.
Service status threshold in OK to be considered a positive SLA during the period of time you have configured in the previous field.
In this section select templates that the service will have to launch the alert when the service goes into warning, critical, unknown status or when the service SLA is not met.
Once the form has been correctly filled in, you will have an empty service, which must be filled with elements as shown below. In the service edition form, select the Configure elements tab.
By clicking on Add element, a pop-up window with a form will appear. The form will be slightly different if the service is in smart mode or in manual mode.
Optional text that will be used to represent the element on the service map. If not indicated, the name of the module, agent or service (depending on the added element) will be used.
Drop-down list to choose whether the element will be a service, module or agent. In smart mode services you can also choose the dynamic type.
Intelligent agent search engine. Only visible if the element to create or edit is an agent or module type.
Drop-down list with the modules of the agent previously chosen in the search box. This control is only visible when creating or editing a module-type element.
Dropdown list of the services to create an element. Only visible if the element to be created or edited is a service element.
It should also be noted that the services shown in the drop-down list are those that are not ancestors of the current service. This is necessary to keep a correct tree structure of dependencies between services.
The following fields will only be available for services in manual mode:
- Critical weight: weight that the element adds to the service when in critical status.
- Warning weight: weight that the element adds to the service when in warning status.
- Unknown weight: weight that the element adds to the service when in unknown status.
- Normal weight: weight that the element adds to the service when in normal status.
To calculate the status of a service, the weight of each of its elements will be added based on its status, and if it exceeds the thresholds established in the service for warning or critical, the status of the service will change to warning or critical accordingly.
In smart mode services, since no weights are defined for the elements, the way their status is calculated is as follows:
- Critical elements contribute their full percentage to the weight of the service. This means that if, for example, there are 4 elements in the service and only 1 of them is critical, that element will add 25% to the weight of the service. If instead of 4 elements there were 5, the critical element would add 20% to the weight of the service.
- Warning elements contribute half of their percentage to the weight of the service. This means that if for example a service has 4 elements and only 1 of them is in warning status, that element will add 12.5% to the weight of the service. If instead of 4 elements there were 5, the warning element would add 10% to the weight of the service.
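The percentage contributions described above can be sketched as follows (illustrative Python, not Pandora FMS code; the status strings are assumptions). Each critical element contributes 100%/N to the service weight and each warning element contributes half of that, where N is the number of elements:

```python
# Smart mode: weights are derived from the element count, not configured.
def smart_weight(statuses):
    """statuses: list of element statuses; returns the service weight in %."""
    share = 100.0 / len(statuses)
    weight = 0.0
    for status in statuses:
        if status == "critical":
            weight += share        # full share of the service weight
        elif status == "warning":
            weight += share / 2    # half share of the service weight
    return weight

# The examples from the text:
print(smart_weight(["critical", "normal", "normal", "normal"]))  # 25.0
print(smart_weight(["warning", "normal", "normal", "normal"]))   # 12.5
print(smart_weight(["warning"] + ["normal"] * 4))                # 10.0
```

The resulting percentage is then compared against the service's smart-mode warning and critical thresholds, which are also percentages.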
The following fields will only be available for dynamic elements, in services in smart mode:
Matching object types
Drop-down list to choose whether the elements for which the dynamic rules will be evaluated and that will be part of the service will be agents or modules.
Filter by group
Rule to indicate the group the element must belong to to be part of the service.
Having agent name
Rule indicating the name the agent must have for the element to be part of the service. A text that must be part of the name of the desired agent will be indicated.
Having module name
Use regular expressions selector
Having custom field name
Rule indicating the name of the custom field the element must have to be part of the service. A text that must be part of the name of the desired custom field will be indicated.
Having custom field value
Rule to indicate the value of the custom field that the element must have to be part of the service. A text that must be part of the desired custom field value will be indicated.
You must fill in both fields for them to be considered when searching in custom fields.
Since version NG 752, it is possible to add searches on more custom fields; elements will be selected if they match any of the key-value pairs set.
If you choose to filter the agents in the Servers group whose agent name contains Firewall and whose module name contains Network, you can obtain the following result.
If the configuration of a dynamic element were:
All the modules whose name includes “Host Alive”, in an agent whose name includes “SW”, inside the “Servers” group, with a custom field whose name includes “Department” and whose value includes “Systems”, would be used as service elements.
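That matching logic can be sketched roughly as follows. This is illustrative Python; the inventory structure and field names are assumptions for the example, not the Pandora FMS data model:

```python
# A hypothetical inventory of monitored modules.
modules = [
    {"agent": "SW-01", "group": "Servers", "module": "Host Alive",
     "custom_fields": {"Department": "Systems"}},
    {"agent": "FW-01", "group": "Servers", "module": "Host Alive",
     "custom_fields": {"Department": "Networking"}},
]

def matches(m):
    """Apply the dynamic rules from the example as substring matches."""
    return ("SW" in m["agent"]
            and m["group"] == "Servers"
            and "Host Alive" in m["module"]
            and any("Department" in name and "Systems" in value
                    for name, value in m["custom_fields"].items()))

print([m["agent"] for m in modules if matches(m)])  # ['SW-01']
```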
Dynamic elements are not affected by service cascade protection.
Modules created when configuring a service
- SLA Value Service: the percentage value of SLA compliance.
- Service_SLA_Service: shows whether the SLA is met or not.
- Service_Service: shows the sum of the service weights.
Simple all-service view
This is the operation list showing all created services. Of course, it only shows those belonging to groups that the user logged into the Pandora FMS console has access to. Click Operation > Monitoring > Services.
Each row represents a service:
The icon of the group the service belongs to.
The threshold value for weight sums to get the service into 'critical' status.
The threshold value for weight sums to get the service into 'warning' status.
The current value for weight sums for the service.
An icon that represents the status of the service. Four possible status are represented:
- Red: The service is in 'critical' status because the value exceeded the critical threshold.
- Yellow: The service is in 'warning' status because the value equaled or exceeded the warning threshold, without reaching the critical one.
- Green: The service is within the 'normal' range because weight sum does not reach the threshold.
- Gray: The service is in 'unknown' status. This usually means the service has been recently created and does not contain any modules or the Pandora FMS Prediction server is down.
The current value of the SLA Service. The values can be:
- OK: The SLA is met for the interval defined in the SLA service.
- INCORRECT: The SLA is not met for the interval currently defined in the SLA Service.
- N/A: The SLA is in 'unknown' status because there is not enough data to perform the calculation.
Table including all services
A table for quick display including all visible services and their current status.
Simple list of a service and its elements
This view is accessible by clicking on the name of a service in the list of all services, or through the magnifying glass icon tab in the service title header.
The list of the elements that make up this service is at the bottom:
- Type: The icon which represents the element type. It is a building block for modules or some stacked blocks for an agent and a Network Diagram Icon for the services.
Text containing the name of the module, agent or service, also linked to the corresponding section.
Text containing the name of the agent, the names of the agent and module, or the name of the service, all with a link to the corresponding operation view.
The weight of the element when in 'critical' status. The following three columns (Warning weight, Unknown weight and OK weight) correspond to the warning, unknown and normal statuses.
The value of the element. It can adopt the following modes:
- Module: The value of the module.
- Agents: The text that displays the agent's status.
- Services: The weight sum of the elements of the service that has been chosen as the element for the parent service.
The icon which represents the element's status by color.
Keep in mind that service-element calculation is performed by the Prediction Server, so what you see is not real-time data. In some situations, when a module or agent is added to the service, its weight will not be updated until the Prediction Server performs the calculation again.
Service map view
This view displays the service in tree form, as seen in the following screenshot. That way, it is possible to quickly see how modules, agents or sub-services influence service monitoring. Even within sub-services, you can see what influences them when calculating the status by summing weights.
The possible nodes can be:
- Module: represented by the 'heartbeat' icon. This node is always final (leaf).
- Agent: represented by the 'CPU box' icon. This node is always final too (leaf).
- Service: represented by the 'crossed hammer and wrench' icon. This node is not final; it must contain additional nodes.
The color of each node and of the arrow connecting it to the service depends on the node's status: as always, green for OK, red for critical, yellow for warning and gray for unknown status.
There are the following attributes within the node:
- Title: The name of the service's / agent's or module's node, accompanied by the agent.
- Value list:
- Critical: the weight it adds when in 'critical' status; for the root-service node, it represents the threshold to reach 'critical' status.
- Warning: the weight it adds when in 'warning' status; for the root-service node, the threshold to reach 'warning' status.
- Normal: the weight it adds when in 'normal' status; for the root-service node, nothing is displayed here.
- Unknown: the weight it adds when in 'unknown' status; for the root-service node, the threshold to reach 'unknown' status.
You may click on each node in the tree; the link leads to the operational view of that node.
When the service mode is simple, a red exclamation mark appears on the right side of the critical elements.
Services within the Visual Console
From Pandora FMS versions 5 onwards, you may add services in the Visual Console like any other item on the map.
To create a service item on a map, the process is the same as for all other visual map items, but the option range will be:
- Label: The title shown within the visual console's node.
- Service: Drop-down list that shows the services it has access to, to add to the map.
Note that a service item, unlike other items in the visual map, cannot be linked to other visual maps; its clickable link in the visual console always leads to the service map tree view described above.
Service tree view
This view allows you to view services in the form of a tree.
Each level shows the total number of elements included in each service or agent.
- Services: It reports the total number of services, agents and modules that belong to that service.
- Agents: It reports the number of modules in critical state (red color), warning (yellow color), unknown (gray color), uninitiated (blue color) and normal state (green color).
Services that do not belong to another one will always be shown on the first level. In the case of a child service, it will be shown nested inside its parent.
ACL permission restriction is only applied to the first level.
How to read service values
Planned shutdowns added before their stop date allow the value of SLA reports to be recalculated, since SLA reports support “backwards” recalculation with scheduled shutdowns added afterwards (that option is activated globally in the general setup). In an SLA service report, if a scheduled shutdown affects one or several service elements, the shutdown is considered to affect the service as a whole, since its impact on the whole service cannot be measured.
It is worth highlighting that this applies at report level only. Service trees and the information presented in the visual console are not altered by planned shutdowns added after the intended execution date: those service compliance percentages are calculated in real time, based on the history data of the service itself, and have nothing to do with the report.
On the other hand, it is important to know how the compliance percentage of a service is calculated:
Weight calculation in simple mode
Weights are handled slightly differently in simple mode, since there is only the critical weight and the possibility of two statuses besides the normal one. Each element receives weight 1 in critical status and 0 in any other, and each time there is a change in service elements, the service weights are recalculated. The warning weight can be overlooked: it is always set to 0.5 (if it were 0, the service would always be at least in warning status), but it is not actually used in simple mode. The service critical weight is set to half of the sum of the element critical weights (each of which is 1): with 3 elements, the service critical weight is 1.5. The server is then in charge of checking whether that critical weight has been matched or exceeded, to set the service into warning or critical status.
Weight calculation according to their importance
Suppose there is a service defined by 95% compliance over a 1-hour interval. A table of values will be used, where t is time, s indicates whether the service complies or not (1 it complies, 0 it fails), and x is the compliance percentage (SLA). In 1 hour there should be exactly 12 samples (assuming a 5-minute interval).
Picture a case where the service complies for the first 11 samples (first 55 minutes) and fails in the 60th minute; these would be the values:
t  | s | x
---+---+------
1  | 1 | 100
2  | 1 | 100
3  | 1 | 100
4  | 1 | 100
5  | 1 | 100
6  | 1 | 100
7  | 1 | 100
8  | 1 | 100
9  | 1 | 100
10 | 1 | 100
11 | 1 | 100
12 | 0 | 91.6
This case is easy to calculate: the percentage depends on the number of samples. For example, at t3 there are three samples in total, all compliant, so compliance is 100%; at t12 there are 12 samples of which 11 are valid: 11 / 12 = 91.6%.
Suppose you are in the middle of the series, and it is recovering slowly:
t  | s | x
---+---+------
1  | 1 | 100
2  | 1 | 100
3  | 1 | 100
4  | 1 | 100
5  | 1 | 100
6  | 0 | 83.3
7  | 1 | 85.7
8  | 1 | 87.5
9  | 1 | 88.8
10 | 1 | 90
11 | 1 | 90.9
12 | 1 | 91.6
So far all seems similar to the previous scenario, but see what happens if you go over time:
t  | s | x
---+---+------
13 | 1 | 91.6
14 | 1 | 91.6
15 | 1 | 91.6
16 | 1 | 91.6
17 | 1 | 91.6
18 | 1 | 100
19 | 1 | 100
...
Now there is unintuitive behavior: the number of valid samples remains 11 for every time window up to t18, where the only invalid value finally falls out of the window, so at t18 compliance becomes 100%. This jump from 91.6 to 100 is explained by the size of the window: the larger the window (usually the SLA calculation interval is daily, weekly or monthly), the less abrupt the jump will be.
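The sliding-window behavior above can be reproduced with a short sketch. This is illustrative Python, not Pandora FMS code; the truncation to one decimal mimics the tables:

```python
def sla_series(samples, window=12):
    """Compliance % at each step over a sliding window of samples (1/0)."""
    out = []
    for t in range(1, len(samples) + 1):
        win = samples[max(0, t - window):t]
        pct = 100.0 * sum(win) / len(win)
        out.append(int(pct * 10) / 10)  # truncate to one decimal, as in the tables
    return out

# One failed sample at t6, then all compliant (the second example above):
samples = [1, 1, 1, 1, 1, 0] + [1] * 13
for t, x in enumerate(sla_series(samples), start=1):
    print(t, x)
# Compliance drops to 83.3 at t6, recovers slowly to 91.6 at t12, stays
# at 91.6 while the failure is inside the window, and jumps back to 100
# at t18, when the failed sample leaves the window.
```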
Service cascade protection
It is possible to mute service elements dynamically. This avoids an alert overload from each element belonging to a certain service or its sub-services.
When the 'service cascade protection' feature is enabled, the action linked to the template configured for the root service will be executed, reporting which elements within the service have an incorrect status.
It is important to take into account that this system allows the alerts of the elements within the service to be triggered when they go to critical status, even if the general service status is correct.
Service cascade protection will indicate which elements have failed, regardless of the depth of the defined service.
The example above shows one of the service elements in critical status. Even if the main service is correct, Pandora FMS will warn about the critical state of the elements within it, triggering the alert related to the element in critical status.
Root cause analysis
You may have an endless number of sub-services (paths) within a service. In previous versions, Pandora FMS alerted indicating the service status (normal, critical, warning, etc.). From OUM 725 on, a new macro is available that shows the root cause of the service status.
To use it, add the following text to the template linked to the service:
Alert body: Example message The series of events that have caused the service status is the following one: _rca_
This will return an output similar to this one:
Alert body: Example message The series of events that have caused the service status is the following one: [Web Application -> HW -> Apache server 3] [Web Application -> HW -> Apache server 4] [Web Application -> HW -> Apache server 10] [Web Application -> DB Instances -> MySQL_base_1] [Web Application -> DB Instances -> MySQL_base_5] [Web Application -> Balanceadores -> 192.168.10.139]
From this output, it can be inferred that:
- Apache servers 3, 4 and 10 are in critical status
- MySQL_base databases 1 and 5 are down
- The 192.168.10.139 balancer does not respond
This additional information makes it possible to find out the reason behind the service status, reducing failure-cause research tasks.
Services are logical groupings that make up an organization's business structure. That is why service grouping may make sense, since they depend on each other in many cases, creating for example a whole service (the business company) or more specific services (corporate web, communications, etc.). To group services, both the general and more particular services must be created, and the last ones must be added to the first one to create the logical tree-shaped structure.
These groupings may help you create visual maps, configure alerts, apply monitoring policies, etc. Therefore, it is possible to create alerts that warn you when the business goes into critical status because sales representatives cannot do their job, or a branch is not working at full capacity due to technical problems with the ERP service.
To understand more clearly what service grouping is, take a look at these examples.
Service monitoring examples
Pandora FMS service
Use case where the status of the Pandora FMS monitoring service, made up of the Apache, MySQL, Pandora FMS server and Tentacle services with their respective weights, is monitored.
Each of these elements is at the same time a service with different components, creating through service grouping a tree-shaped structure.
In this case, the general Pandora FMS service will go into critical status when reaching weight 2, and into warning status when it reaches weight 1.
As seen, the four components have different weights on Pandora FMS service:
- MySQL: It is essential for Pandora FMS service. Individual weight of 2 if MySQL is down. It will get a weight of 1 if it is in warning status, showing a warning in Pandora FMS service.
- Pandora Server: It is essential for Pandora FMS service. Individual weight of 2 if the Pandora FMS Server is down. Individual weight of 1 if it is in warning status, for example, due to CPU overload, scaling the warning until reaching Pandora FMS general service.
- Apache: It implies a degradation of the Pandora FMS service, but not a total interruption, so it gets an individual weight of 1 if it is down, showing warning status in the Pandora FMS service.
- Tentacle: It entails a degradation, and certain components may fail, but it does not mean Pandora FMS stops working completely, so its individual weight in case of failure is 1, showing a warning in the general service.
Cluster storing service, service grouping
Services are logical groups that make up part of the business structure of an organization. Therefore, service grouping is reasonable since sometimes some services on their own do not have a complete meaning. To group services, they just need to be added to a greater service as elements, creating a new logical group.
In the following example, there is an HA storage cluster. This time, a system of two fileservers working at the same time has been chosen, each one controlling the usage percentage and status of a series of hard drives that provide service to particular departments, creating a tree-shaped grouped-service structure.
According to this structure, the critical threshold of the company's storage service is reached when both fileservers fail, since that would bring down the service, while the failure of just one of them would only entail service degradation. The following image contains the weight configuration granted to the two main elements of the storage service:
This image shows the content and weight configuration of the FS01 grouped service. Here the elements have a specific weight according to their severity:
- FS01 ALIVE: critical for the FS01 service, since it is the virtual IP allocated to the first hard drive cluster. Individual weight of 2, since if it is down, the rest of the service elements will not work. There is no warning threshold, since it is Yes/No status data.
- DHCPserver ping: critical for the FS01 service. It has an individual weight of 2. In this case, there is no warning threshold either.
- Hard drives: they have an individual weight of 1 when they reach their critical threshold, and 0.5 for their warning threshold, so they will only affect the FS01 service critically if at least two are in critical status or all four hard drives are in warning status.
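The FS01 weight setup can be checked with a quick sketch. This is illustrative Python; the critical threshold of 2 is an assumption inferred from the narrative, and the element names follow the example:

```python
# Weights from the FS01 example: FS01 ALIVE and DHCPserver ping have
# critical weight 2 and no warning weight; each hard drive contributes
# 1 in critical status and 0.5 in warning status.
ELEMENTS = {
    "FS01 ALIVE": {"critical": 2.0},
    "DHCPserver ping": {"critical": 2.0},
    "disk1": {"critical": 1.0, "warning": 0.5},
    "disk2": {"critical": 1.0, "warning": 0.5},
    "disk3": {"critical": 1.0, "warning": 0.5},
    "disk4": {"critical": 1.0, "warning": 0.5},
}

def fs01_weight(states):
    """states: mapping element name -> 'normal' | 'warning' | 'critical'.
    Elements not listed are assumed to be in normal status."""
    return sum(ELEMENTS[name].get(state, 0.0)
               for name, state in states.items())

# Two disks in critical status reach the assumed critical threshold of 2:
print(fs01_weight({"disk1": "critical", "disk2": "critical"}))      # 2.0
# All four disks in warning status reach it as well:
print(fs01_weight({f"disk{i}": "warning" for i in range(1, 5)}))    # 2.0
```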