Log Monitoring and Log Collection
Introduction
Log monitoring in Pandora FMS is approached in two different ways:
- Based on modules: logs are represented in Pandora FMS as asynchronous monitors, and alerts can be associated with detected entries that meet a series of conditions preconfigured by the user. The modular representation of logs allows you to:
- Create modules that count the occurrences of a regular expression in a log.
- Obtain the lines and context of log messages
- Based on combined display: it allows the user to view, in a single console, all the log information from the multiple sources to be captured, organizing the information sequentially by the timestamp at which the logs were processed.
From version 7.0 NG 712, Pandora FMS incorporates Elasticsearch to store log information, which implies a significant performance improvement.
How it works
- The logs analyzed by the software agents (eventlog or text files) are forwarded to the Pandora FMS server in raw form within the agent's reporting XML:
- Pandora FMS data server receives the XML agent, which contains information about both monitoring and logs.
- When the DataServer processes the XML data, it identifies the log information, keeps references to the reporting agent and the log source in the primary database, and automatically forwards the information to Elasticsearch to be stored.
- Pandora FMS stores the data in Elasticsearch indexes generating a daily index for each Pandora FMS instance.
- Pandora FMS server has a maintenance task that deletes indexes in the interval defined by the system admin (90 days by default).
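As an illustrative check once the system is up (assuming Elasticsearch listens on its default port 9200 and that the daily indexes follow the pandorafms* pattern used by the templates described later in this section), the indexes generated so far can be listed with:
curl -s "http://<elasticsearch_host>:9200/_cat/indices/pandorafms*?v"
Each line of the output corresponds to one daily index, together with its document count and size on disk.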
Server Requirements
It is recommended to distribute the Pandora FMS Server and Elasticsearch on independent servers.
- Rocky Linux 8 or RHEL 8.
- At least 4 GB of RAM, although 6 GB of RAM are recommended for each Elasticsearch instance.
- Disable SWAP on the node(s) where Elasticsearch is located (a sketch is shown after this list).
- At least 2 CPU cores.
- At least 20 GB of disk space for the system.
- At least 50 GB of disk space for Elasticsearch data (the amount may differ depending on the volume of data to be stored). Elasticsearch disk usage is very intensive, so the faster the read and write speeds, the better the environment's performance.
- Connectivity from Pandora FMS server to Elasticsearch API (port 9200/TCP by default).
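Regarding the SWAP recommendation above, a minimal sketch for disabling it on an Elasticsearch node (assuming the swap entries are defined in /etc/fstab) is:
swapoff -a    # disable all swap devices immediately
# Comment out the swap line(s) in /etc/fstab so the change persists across reboots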
A single-node environment with these characteristics can store up to 1 GB of data daily, kept for the default period of 8 days.
If greater data resilience and fault tolerance are required, it will be necessary to configure an Elasticsearch cluster (minimum 3 nodes to guarantee data integrity). Moving to a cluster environment also makes it possible to distribute the load among the nodes, doubling (in the case of 3 nodes) the processing capacity of the environment. A load balancing system will be necessary if you want to query the different nodes simultaneously.
Installing and configuring Elasticsearch
Elasticsearch official documentation:
Installation
For Rocky Linux 8 it is recommended to install Elasticsearch using the RPM package, a single package that contains everything needed to install the Elasticsearch database.
To download it, go to https://www.elastic.co/downloads/elasticsearch and select Linux x86_64 (AMD® or Intel® 64-bit processors).
Once you have downloaded the package, upload it to the server where Elasticsearch will be installed, change to that directory and run, with sufficient rights:
dnf install ./downloaded_package.rpm
You will get an output similar to:
To verify that the service was installed correctly you can run the command:
systemctl status elasticsearch.service
You will get an output similar to:
Note that the Elasticsearch service is inactive.
Node configuration
You must first edit the configuration file
/etc/elasticsearch/elasticsearch.yml
and then start the Elasticsearch service.
This file contains the configuration of all the parameters of the Elasticsearch service, see the official documentation for more information:
The minimum configuration required to start the service and use it with Pandora FMS is described below.
- Set the port number, the data location and the location of the log files:
# ---------------------------------- Network -----------------------------------
# Set a custom port for HTTP:
http.port: 9200
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by a comma):
path.data: /var/lib/elastic
# Path to log files:
path.logs: /var/log/elastic
- Configure xpack:
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
- Comment out these lines:
#http.host: [_local_]
#transport.host: [_local_]
It will also be necessary to uncomment and define the following lines:
cluster.name: pandorafms
node.name: ${HOSTNAME}
network.host: 0.0.0.0
cluster.name
This will be the name of the group or cluster.
node.name
Naming the node with the ${HOSTNAME} system variable makes it automatically take the host's name.
network.host
Setting network.host to the value 0.0.0.0 makes Elasticsearch listen on all network interface cards (NICs); set a specific value to use a specific NIC.
When working with a cluster, you also need to fill in discovery.seed_hosts (see "Elasticsearch cluster configuration" below for more information):
discovery.seed_hosts: ["ip:port", "ip", "ip"]
Or (format example):
discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
In the most recent versions of Elasticsearch, the memory of the Java® virtual machine is managed automatically, and it is recommended to leave it that way in production environments, so it is unnecessary to modify the Elasticsearch JVM values.
Once finished, it will be necessary to execute:
systemctl start elasticsearch.service
Wait a few moments while Elasticsearch starts. The command to check its status is:
systemctl status elasticsearch.service
You will see something similar to this:
If the service fails to start, check the logs located in /var/log/elastic/
(in this case the file pandorafms.log
or the name given to the node).
To test the installation of Elasticsearch run the following command in a terminal window:
curl -q http://{IP}:9200/
Replace {IP}
with the IP address or URL of the installed Elasticsearch.
You will receive a response similar to the following:
{
  "name" : "3743885b95f9",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "7oJV9hXqRwOIZVPBRbWIYw",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
It is recommended to visit the Elasticsearch best practices link for production environments:
Elasticsearch cluster configuration
- The minimum size of an Elasticsearch cluster is 3 nodes and it must always grow in odd numbers in order to make use of the quorum system and guarantee data integrity.
- Ensure that you have connectivity between all 3 nodes and that ports 9200 and 9300 are accessible between each and every node.
Remember to configure the firewall of each node to allow connection through these port numbers.
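For example, on Rocky Linux 8 with firewalld (an assumption; adapt the commands to the firewall actually in use), the ports could be opened with:
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=9300/tcp
firewall-cmd --reload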
Stop the Elasticsearch service on each and every node:
systemctl stop elasticsearch.service
Modify the following lines in the configuration file /etc/elasticsearch/elasticsearch.yml:
#discovery.seed_hosts: ["host1", "host2"]
#cluster.initial_master_nodes: ["host1", "host2"]
Uncomment the lines and add the IP addresses or URLs of each node:
discovery.seed_hosts: ["host1", "host2", "host3"]
cluster.initial_master_nodes: ["host1", "host2", "host3"]
Example with IP addresses:
discovery.seed_hosts: ["172.42.42.101", "172.42.42.102", "172.42.42.103"]
cluster.initial_master_nodes: ["172.42.42.101", "172.42.42.102", "172.42.42.103"]
Make sure that the line cluster.initial_master_nodes is defined only once in the configuration file; in some cases the same line appears in two different blocks of the same file.
Before starting the service, and since the nodes were previously started on their own (standalone), the contents of the data directory (by default /var/lib/elasticsearch/) must be deleted so that the cluster can start for the first time. Do this with the command:
rm -rf /var/lib/elasticsearch/*
Now it is time to start the services on each and every node. Start and check that they are running with the commands:
systemctl start elasticsearch.service && systemctl status elasticsearch.service
You should get an output similar to:
Once the services have been started, you must confirm that the 3 nodes are joined to the cluster correctly, so when executing the following command on any of the nodes, the same response should be given:
curl -XGET http://127.0.0.1:9200/_cat/nodes
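As a complementary check (assuming the default port 9200), the cluster health endpoint should report the expected number of nodes:
curl -XGET http://127.0.0.1:9200/_cluster/health?pretty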
Check the firewall configuration again, always taking into account that the nodes must communicate with each other through ports 9200 and 9300, and that the PFMS server and the PFMS Web Console must also be able to access port 9200. With these steps you will have installed and configured the Elasticsearch cluster to be used as the Pandora FMS log storage engine.
Data model and templates
Before putting an environment into production, whether a single node or a data cluster, it is recommended to apply the appropriate configuration to that node or cluster according to its use. For the indexes generated by Pandora FMS, the most effective way to do this is to define a template that sets the configuration of the fields and the stored data.
Templates are settings that are only applied when the index is created. Changing a template will have no impact on existing indexes.
To create a basic template, you only have to define the fields:
{
  "index_patterns": ["pandorafms*"],
  "settings": {
    "number_of_shards": 1,
    "auto_expand_replicas" : "0-1",
    "number_of_replicas" : "0"
  },
  "mappings" : {
    "properties" : {
      "agent_id" : {
        "type" : "long",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "group_id" : {
        "type" : "long",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "group_name" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "logcontent" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "source_id" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "suid" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "type" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "utimestamp" : { "type" : "long" }
    }
  }
}
You can upload this template through the Elasticsearch interface in Pandora FMS (Admin tools → Elasticsearch Interface) using native Elasticsearch commands.
- PUT _template/<template_name>: for this example, PUT _template/pandorafms.
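A command-line equivalent sketch, assuming the template JSON above has been saved to a local file (here called pandorafms_template.json, a name chosen only for this example) and that Elasticsearch listens on the default port:
curl -X PUT "localhost:9200/_template/pandorafms" -H 'Content-Type: application/json' -d @pandorafms_template.json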
You will also be able to consult the templates through the same Pandora FMS interface:
- GET _template/<template_name>: for this example, GET _template/pandorafms.
Multinode templates
To define a multinode template you must take into account the following information:
- When configuring the template (JSON format), you need to configure as many shards as there are nodes; however, to correctly configure the replicas you must subtract 1 from the number of nodes in the environment.
For example, in a Pandora FMS environment with Elasticsearch with 3 configured nodes, when you modify the number_of_shards and number_of_replicas fields it should look like this:
{
  "index_patterns": ["pandorafms*"],
  "settings": {
    "number_of_shards": 3,
    "auto_expand_replicas" : "0-1",
    "number_of_replicas" : "2"
  },
This is a very basic definition; in order to correctly define the sizing of the Elasticsearch environment, it is advisable to take into account the factors described in this article:
From the command line you can list the templates of the environment by executing:
curl -X GET "localhost:9200/_cat/templates/*?v=true&s=name&pretty"
You can also view the details of a template, for example the one created for pandorafms, by running:
curl -X GET "localhost:9200/_template/pandorafms*?pretty"
which will return in JSON format the configuration you have defined.
You can perform these operations through the Elasticsearch interface in Pandora FMS using the native Elasticsearch commands.
- PUT _template/<template_name> {json_data}: allows you to enter the data of the template to be created.
- GET _template/<template_name>: allows you to display the created template.
Recommendations
Log rotation for Elasticsearch
Important: It is recommended to create a new entry for the daemon's log rotation in /etc/logrotate.d to prevent Elasticsearch logs from growing endlessly:
cat > /etc/logrotate.d/elastic <<EOF
/var/log/elastic/elasticsearch.log {
        weekly
        missingok
        size 300000
        rotate 3
        maxage 90
        compress
        notifempty
        copytruncate
}
EOF
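To verify the new entry without actually rotating anything, a dry run can be performed with the standard logrotate debug flag:
logrotate -d /etc/logrotate.d/elastic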
Index Purging
You may check the list of indexes and their size at any time by launching a cURL request against the Elasticsearch server:
curl -q http://<elastic>:9200/_cat/indices?
Where <elastic> is the server's IP address.
To remove any of these indexes, execute the DELETE command:
curl -q -XDELETE http://<elastic>:9200/{index-name}
Where <elastic> is the server's IP address and {index-name} is the name of an index shown in the output of the previous command. This will free up the space used by the removed index.
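For example, assuming the previous listing showed a daily index called pandorafms-2024.01.15 (a purely illustrative name; use one taken from your own listing), it could be removed with:
curl -q -XDELETE http://<elastic>:9200/pandorafms-2024.01.15   # hypothetical index name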
Pandora FMS SyslogServer
This component allows Pandora FMS to analyze the syslog of the machine where it is located, parsing its content and storing the references in the Elasticsearch server.
The main advantage of SyslogServer lies in complementing log unification. Relying on the syslog export features of Linux and Unix environments, SyslogServer makes it possible to consult logs regardless of their origin, searching from a single common point (the Pandora FMS console log viewer).
rsyslog must be installed on both the client and the server. To install it, run the following command:
yum install rsyslog
Bear in mind that once rsyslog is installed on the computers you wish to work with, you must edit its configuration file /etc/rsyslog.conf to enable TCP and UDP input.
(...)

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

(...)
After adjusting this, stop and restart the rsyslog service. Once the service is running again, check that port 514 is listening.
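A minimal restart sequence, assuming a systemd-based distribution:
systemctl restart rsyslog.service
Then list the listening ports: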
netstat -ltnp
For more information about rsyslog configuration, visit the official website.
Configure the client so that it sends logs to the Syslog server. To that end, go to the client rsyslog configuration file at /etc/rsyslog.conf
and locate and enable the line that allows configuring the remote host.
*.* @@remote-host:514
Sending logs generates a container agent with the client's name, so it is recommended to create agents with "alias as name" matching the client's hostname, thus avoiding agent duplication.
To enable this feature in Pandora FMS Server, add the following content to the pandora_server.conf file:
# Enable (1) or disable (0) the Pandora FMS Syslog Server
# (PANDORA FMS ENTERPRISE ONLY).
syslogserver 1

# Full path to syslog's output file (PANDORA FMS ENTERPRISE ONLY).
syslog_file /var/log/messages

# Number of threads for the Syslog Server
# (PANDORA FMS ENTERPRISE ONLY).
syslog_threads 2

# Maximum number of lines queued by the Syslog Server's
# producer on each run (PANDORA FMS ENTERPRISE ONLY).
syslog_max 65535
syslogserver
Boolean, enables (1) or disables (0) the local SYSLOG analysis engine.
syslog_file
Location of the file where the SYSLOG entries are delivered.
syslog_threads
Maximum number of threads to be used in the SyslogServer producer/consumer system.
syslog_max
This is the maximum processing window for SyslogServer; the maximum number of SYSLOG entries that will be processed in each iteration.
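Once these parameters are set, the Pandora FMS server must be restarted for the SyslogServer to start. A minimal sketch, assuming the service is managed by systemd under the name pandora_server (adjust to your installation):
systemctl restart pandora_server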
You will need an Elasticsearch server enabled and configured; please review the preceding points for how to work with it.
Remember: It is necessary to modify the configuration of your device so that logs are sent to Pandora FMS server.
Migration to Elasticsearch system
If you have version 712 or earlier, you will first need to upgrade to the current version; see "PFMS Upgrade" for more information.
After setting up the new log storage system, migrate all the data previously stored in Pandora FMS, which is distributed among directories, to the new system.
To migrate it to the new system, run the following script that can be found in /usr/share/pandora_server/util/
:
# Migrate Log Data < 7.0NG 712 to >= 7.0NG 712
/usr/share/pandora_server/util/pandora_migrate_logs.pl /etc/pandora/pandora_server.conf
Console Settings
To enable the log display system, activate the following configuration (Setup → Setup → Enterprise):
Then set the log viewer performance in Setup → Setup → Log Collector:
On this screen configure:
- IP or FQDN address of the server that hosts the Elasticsearch service
- Number of logs being shown: To speed up the console's response, dynamic record loading has been added. To use it, scroll to the bottom of the page, forcing the next set of available records to load. The size of these groups can be set in this field as the number of records per group.
- Days to purge old information: To keep the system size under control, you may define a maximum number of days during which log information is stored; after that period, it will be automatically deleted by the Pandora FMS cleaning process.
Elasticsearch Interface
In the default configuration, Pandora FMS generates one index per day, which Elasticsearch is in charge of fragmenting and distributing so that, when you search for something, Elasticsearch knows where to find the corresponding index or fragment.
For searches to be optimal, Elasticsearch generates one shard per index by default, so you should configure in your environment as many shards as Elasticsearch nodes you have.
These shards and replicas are configured when an index is created, which Pandora FMS does automatically, so to modify this configuration you should use the templates.
Data backup and restoration
A snapshot of the data (indexes) is the mechanism that recent versions of Elasticsearch use to back up data. These snapshots can be used to recover data after a hardware failure, to transfer data between nodes, and even to remove rarely used indexes from the node(s) (the latter requires additional configuration).
Snapshots back up data incrementally, i.e. they copy only the new data that has not yet been backed up, while ensuring that the backups already made are reliable and compatible between different versions of Elasticsearch.
For Elasticsearch the way to guarantee all these features is through repositories.
The repositories can be your own or made by third parties (AWS S3®, Google Cloud Storage®, Microsoft Azure®) and in any case must be physically outside the node or nodes that you use in conjunction with Pandora FMS. You are solely responsible for these snapshots.
See the official Elasticsearch documentation for more information:
Create a repository
A network file system (NFS) or other shared filesystem must be available from the machine(s) that will host the Elasticsearch repository and the node(s) in the environment.
Pandora FMS independently also uses NFS to share the exchange directory between several servers: refrain from using this NFS to host Elasticsearch repository(s). You are solely responsible for properly configuring each of the components of your system.
Once you have installed and configured the target NFS, proceed to create and mount a directory on the Elasticsearch node(s); for example, it can be called:
/mnt/pandorafms/elk_repo
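A minimal sketch of creating and mounting that directory, assuming an NFS export available at <nfs_server>:/srv/elk_repo (both names are placeholders for this example):
mkdir -p /mnt/pandorafms/elk_repo
mount -t nfs <nfs_server>:/srv/elk_repo /mnt/pandorafms/elk_repo
Add the corresponding entry to /etc/fstab so the mount survives a reboot.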
Grant permissions to the elasticsearch user:
chown elasticsearch: /mnt/pandorafms/elk_repo
You must declare this path in the Elasticsearch configuration file as the repository path on the node(s) (all nodes):
path:
  repo:
    - /mnt/pandorafms/elk_repo
When you have configured the node(s) you must restart the elasticsearch service (on all nodes):
systemctl restart elasticsearch.service && systemctl status elasticsearch.service
Apart from the Elasticsearch interface in Pandora FMS, you can also use the curl command to obtain information from, or send commands to, the Elasticsearch node(s). To create the repository, execute the following command locally on the node(s) (all nodes):
curl -X PUT "localhost:9200/_snapshot/backup_repo?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/mnt/pandorafms/elk_repo/"
  }
}
'
If you use a port other than 9200, replace it with that value.
You should get the following message from the node(s):
"acknowledged" : true
This will indicate that the repository has been created. To check the status of the repository:
curl -X POST "localhost:9200/_snapshot/my_unverified_backup/_verify?pretty"
Replace my_unverified_backup
with the name of the repository to verify. If everything went correctly, you will receive a list of the nodes on which the repository is configured.
Generate a snapshot of the database
To take a snapshot manually, use the snapshot creation API. The snapshot name supports the use of date math to give a unique name.
PUT _snapshot/my_repository/<my_snapshot_{now/d}>
Replace my_repository
with the name of your repository and my_snapshot
with the name of the snapshot. If you use curl you must use escape characters, so the above command would look like this:
PUT _snapshot/my_repository/%3Cmy_snapshot_%7Bnow%2Fd%7D%3E
Depending on its size, a snapshot may take some time to complete. By default, the snapshot creation API only starts the snapshot process, which runs in the background. To block the client until the snapshot is finished, set the query parameter wait_for_completion
to true.
PUT _snapshot/my_repository/my_snapshot?wait_for_completion=true
To create a snapshot named snapshot_today, run the following on one of the nodes:
curl -X PUT "localhost:9200/_snapshot/backup_repo/snapshot_today?wait_for_completion=true&pretty"
If you use a port other than 9200, replace it with that value.
With the parameter wait_for_completion=true
the call will remain active until the process is finished (it may take some time, depending on the size of the database).
As soon as it finishes, it will return the summary information of the process in JSON form, similar to this:
It is also possible to define specific options in the snapshot execution, such as the indexes to include or metadata; for more details visit:
Example:
curl -X PUT "localhost:9200/_snapshot/backup_repo/snapshot_2?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "pandorafms*",
  "metadata": {
    "taken_by": "PandoraFMS admin user",
    "taken_because": "backup before upgrading"
  }
}
'
List of snapshots
To get a list of all stored snapshots you can run the command:
curl -X GET "localhost:9200/_snapshot/backup_repo/*?pretty"
Where backup_repo
is the repository id and *
represents all. For more information about snapshot search filters in Elasticsearch visit:
Deleting snapshots
To delete a snapshot, obtain its name from the above command and then execute the following on one of the nodes:
curl -X DELETE "localhost:9200/_snapshot/backup_repo/snapshot_today?pretty"
Restore a database snapshot
To restore an index from a snapshot it must be closed, apart from other technical considerations. Please refer to this link for more information:
To restore an index, one of two ways must be used:
- Delete the original index before restoring.
- Rename the restored index.
Both cases are presented below using backup_repo
for the repository name and snapshot_today
for the snapshot name as examples.
- Delete and restore:
The easiest way to avoid conflicts is to delete an existing index or data stream before restoring it.
To avoid accidental recreation of the index or data stream, it is recommended to temporarily stop all indexing until the restore operation is completed.
To delete an index:
curl -X DELETE "localhost:9200/my-index?pretty"
To restore an index:
curl -X POST "localhost:9200/_snapshot/backup_repo/snapshot_today/_restore?pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "my-index,logs-my_app-default"
}
'
- Rename when restoring
Make sure you have enough storage space for this operation.
In this way you will duplicate the information you already have stored, and in some scenarios this process can be useful, for example:
- You need to confirm that a successful data retrieval has been performed. Each of the indexes and its renamed copy should contain the same information and return the same search results.
- Validate data audits performed by third parties.
curl -X POST "localhost:9200/_snapshot/backup_repo/snapshot_today/_restore?pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "my-index,logs-my_app-default",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}
'
Completely restore a node
If you want to restore an entire node with all its indexes, it is recommended to stop the indexing services before executing the restore; for more information on this topic visit:
Display and Search
In a log collection tool, two things are the main concern: searching for information, filtering by date, data source and/or keywords, and seeing that information (Log viewer) drawn as occurrences per time unit. In this example, all log messages from all sources in the last hour are searched for. See Search, Start date and End date:
The most important and most useful field is the string to look for, entered in the Search text box, together with the three available search modes.
Exact match
Literal string search; the log line matches exactly.
All words
Search for lines that contain all the indicated words, regardless of their order within a single log line (bear in mind that each word is separated by spaces).
Any word
Search for lines that contain any of the indicated words, regardless of their order.
If you check the option to view the context of the filtered content, you will get an overview of the situation with information from other log lines related to the search:
Display and advanced search
With this feature, log entries can be turned into a graphic, sorting out the information according to data capture templates.
These data capture templates are basically regular expressions and identifiers, that allow analyzing data sources and showing them as a graphic.
To access advanced options, press Advanced options. A form, where the result view type can be chosen, will appear:
- Show log entries (plain text).
- Show log graphic.
Under the show log graphic option (Display mode), the capture template can be selected.
The default Apache log model template offers the possibility of parsing Apache logs in standard format (access_log), making it possible to obtain comparative response-time graphs sorted by visited site and response code:
By pressing the edit button, the selected capture template is edited. With the create button, a new capture template is added.
In the form, the following can be chosen:
Capture regexp
A regular expression for data capture. Each field to be retrieved is identified by the subexpression between parentheses (the expression to capture).
Fields
Fields, in the order in which they were captured by the regular expression. The results will be sorted by the concatenation of the key fields, which are those whose name is not written between underscores:
key, _value_
key1,key2,_value_
key1,_value_,key2
Note: If the value field is not specified, it will automatically be the number of matches of the regular expression.
Note 2: If a value column is specified, you may choose either to represent the accumulated value (default behavior) or to check the checkbox to represent the average.
Example
If log entries with the following format must be processed:
Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.
Sep 19 12:05:01 nova systemd: Starting Session 6131 of user root.
To count the number of logins by user, use:
Regular expression
Starting Session \d+ of user (.*?)\.
Fields:
username
This capture template will return the number of logins by user during the selected time range:
Frequent filters
Version 771 or later
With this option you may save frequently used filtering preferences, thus creating a list of frequently used filters. Once you have configured all filter values, click Save filter, assign a name and click Save. At any other time you may load these preferences by means of the Load filter button, then from the drop down list of saved filters select one of them and click Load filter.
Filters saved as favorite items
Version NG 770 or later.
Using the Favorite system, you can save a shortcut to the Log viewer with filtering preferences by clicking on the star icon in the section title.
You will be prompted for a name to save as a favorite the filtering conditions you set.
Log viewer filter preferences will be saved in their corresponding section in Favorite (Operation menu).
Agent configuration
Log collection is done by both Windows and Unix agents (Linux®, MacOsX®, Solaris®, HPUX®, AIX®, BSD®, etc.). In the case of Windows agents, you can also obtain information from the Windows Event Viewer, using the same filters as in the monitoring module event viewer.
Here are two examples to capture log information on Windows and Unix systems:
Example for MS Windows
From version 750 onwards this action can be done through the agent plugins by activating the Advanced option.
You will be able to perform executions of the type shown below:
Logchannel module
module_begin
module_name MyEvent
module_type log
module_logchannel
module_source <logChannel>
module_eventtype <event_type/level>
module_eventcode <event_id>
module_pattern <text substring to match>
module_description <description>
module_end
Logevent module
module_begin
module_name Eventlog_System
module_type log
module_logevent
module_source System
module_end
Regexp module
module_begin
module_name PandoraAgent_log
module_type log
module_regexp <%PROGRAMFILES%>\pandora_agent\pandora_agent.log
module_description This module will return all lines from the specified logfile
module_pattern .*
module_end
For more information about the description of log type modules you can check the following section referring to specific Directives.
module_type log
By defining this kind of tag, module_type log, you indicate that the information is not stored in the database but sent to the log collector. Any module with this data type will be sent to the collector if it is enabled; otherwise the information will be discarded.
Note: This new syntax is valid for agents version 5.0 or higher. Remember to keep your Enterprise version updated.
Example for Unix Systems
With agent version 5.0 or later, you may use the following syntax:
module_plugin grep_log_module /var/log/messages Syslog \.\*
Similar to the log parsing plugin (grep_log), the grep_log_module plugin sends the processed log information to the log collector, with "Syslog" as the log source name. It uses the \.\* regular expression (in this case, "everything") as the pattern to choose which lines will be sent and which will not.
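As a purely illustrative variation (the pattern and the source name below are hypothetical), the same plugin could forward only the lines containing the word error under a different log source name:
module_plugin grep_log_module /var/log/messages Syslog_errors error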
Log Source on Agent View
From Pandora FMS version 749 onwards, a box called Log sources status has been added to the Agent View, where the date of the last log update by that agent will appear. By clicking on the Review magnifying glass icon, you will be redirected to the Log Viewer view filtered by that log.