Pandora: Documentation en: Log Monitoring
 
#'''Based on combined display''': it allows the user to view in a single console all the information from logs of multiple origins that you may want to capture, organizing the information sequentially using the timestamp in which the logs were processed.

From version 7.0NG 712, Pandora FMS incorporates '''ElasticSearch''' to store log information, which implies a significant performance improvement.
  
 
== How it works ==
  
 
<center><br><br>
[[Image:LogsEsquema.png|650px]]
</center><br><br>
  
* The logs analyzed by the agents ('''eventlog''' or text files) are forwarded to Pandora FMS Server in RAW form within the agent's XML report.
* Pandora FMS Server (DataServer) receives the agent XML, which contains information about both monitoring and logs.
* When the DataServer processes the XML data, it identifies the log information, keeps the references to the reporting agent and the log source in the primary database, and automatically sends the information to ElasticSearch to be stored.
* Pandora FMS stores the data in ElasticSearch indexes, generating a daily index for each Pandora FMS instance.
* The log information can be checked through the log viewer in the Pandora FMS console, which performs queries against the configured ElasticSearch server.
* Pandora FMS server has a maintenance task that deletes the indexes at the interval defined by the system administrator (90 days by default).
  
 
== Configuration ==
 
=== Server Configuration ===

The new log storage system, based on ElasticSearch, requires configuring several components.
{{Warning|From Pandora FMS version 745 onwards, there is no need to use LogStash, since the Pandora FMS server communicates directly with ElasticSearch, so LogStash related configurations do not need to be applied.}}
  
 
==== Server Requirements ====

Each component (Pandora FMS Server, ElasticSearch) can be distributed on separate servers.
  
 
If you choose to place Elasticsearch and LogStash on the same server, the following specifications are recommended:

* CentOS 7.
* At least 4GB of RAM, although 6GB of RAM are recommended for each ElasticSearch instance.
* At least 2 CPU cores.
* At least 20GB of disk space for the system.
* At least 50GB of disk space for ElasticSearch data (this amount may vary depending on the amount of data to be stored).
* Connectivity from Pandora FMS server to the ElasticSearch API (port 9200/TCP by default).
 
 
If you have a machine hosting a historical database, the same machine can be used to install ElasticSearch and LogStash. In that case, adjust the minimum requirements to the amount of data to be processed in both cases, the minimum being:

* At least 4GB of RAM
* At least 4 CPU cores
* At least 20GB of disk space for the system
* At least 50GB of disk space for the mount point /var, mounted as LVM
 
==== Installing and configuring ElasticSearch ====
 
  yum install java
  
Once installed, install ElasticSearch following the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/install-elasticsearch.html

In CentOS/Red Hat systems, the recommended installation method is the RPM package: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/rpm.html

  rpm -i elasticsearch-X.X.X-x86_64.rpm
 
  
 
Configure the service:
  
 
  # ---------------------------------- Network -----------------------------------
  # Set the bind address to a specific IP (IPv4 or IPv6):
  network.host: 0.0.0.0
  # Set a custom port for HTTP:
  http.port: 9200
  path.logs: /var/log/elastic
  
Uncomment and define the following lines, entering the server's IP in the network.host parameter:

  cluster.name: elkpandora
  node.name: ${HOSTNAME}
  bootstrap.memory_lock: true
  network.host: ["127.0.0.1", "IP"]

* <b>cluster.name</b>: cluster name.
* <b>node.name</b>: name of the node; with ${HOSTNAME} it will take the name of the host.
* <b>bootstrap.memory_lock</b>: it must always be "true".
* <b>network.host</b>: server IP.

If you work with just one node, it will be necessary to add the following line:

  discovery.type: single-node

If you work with a cluster, you will need to complete the <b>discovery.seed_hosts</b> parameter:

  discovery.seed_hosts: ["ip", "ip", "ip"]

Or:

  discovery.seed_hosts:
    - 192.168.1.10:9300
    - 192.168.1.11
    - seeds.mydomain.com

The resources allocated to ElasticSearch must be adapted by adjusting the parameters available in the configuration file located at ''/etc/elasticsearch/jvm.options''. Use at least 2GB in Xms.
  
 
  # Xms represents the initial size of total heap space
  # Xmx represents the maximum size of total heap space
  -Xms512m
  -Xmx512m
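Following the 2 GB minimum mentioned above, both values could be raised accordingly (an illustrative sketch; size the heap to your actual workload and keep Xms equal to Xmx, as the official heap-size guide recommends):

  -Xms2g
  -Xmx2g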
  
 +
The resources will be assigned according to the use of ElasticSearch. It is recommended to follow the official ElasticSearch documentation:
 +
https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
 +
 +
It is necessary to modify the parameter <b>memlock unlimited</b> in ElasticSearch configuration file.
 +
 +
The path to the file is:
 +
 +
/usr/lib/systemd/system/elasticsearch.service
 +
 +
Where we will need to add the following parameter:
 +
 +
MAX_LOCKED_MEMORY=unlimited
 +
 +
Once finished, it will be necessary to run the following command:
 +
 +
systemctl daemon-reload && systemctl restart elasticsearch
 +
 +
The command to start the service is:
 +
 +
systemctl start elasticsearch
  
 
'''Note''': If the service fails to start, check the logs located at /var/log/elasticsearch/
  
To check the ElasticSearch installation, execute the following command:

  curl -q http://{IP}:9200/

It should return an output similar to this one:

  {
    "name" : "3743885b95f9",
    "cluster_name" : "docker-cluster",
    "cluster_uuid" : "7oJV9hXqRwOIZVPBRbWIYw",
    "version" : {
      "number" : "7.6.2",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
      "build_date" : "2020-03-26T06:34:37.794943Z",
      "build_snapshot" : false,
      "lucene_version" : "8.4.0",
      "minimum_wire_compatibility_version" : "6.8.0",
      "minimum_index_compatibility_version" : "6.0.0-beta1"
    },
    "tagline" : "You Know, for Search"
  }

It is advised to visit the ElasticSearch best practices guide for production environments: https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config.html#dev-vs-prod
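To check just the version field from a response like the one above, the JSON can be parsed with standard tools. This is a sketch that embeds a reduced sample of the response so it runs without a live server; against a real installation you would feed it the output of the curl command instead:

```shell
# Reduced sample of the ElasticSearch root response shown above
response='{ "name" : "3743885b95f9", "version" : { "number" : "7.6.2" } }'

# Extract the value of the "number" field (the ElasticSearch version)
version=$(printf '%s' "$response" | sed -n 's/.*"number" : "\([^"]*\)".*/\1/p')

echo "detected ElasticSearch $version"
```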
  
 
==== Installing and configuring LogStash ====
  
{{Warning|From Pandora FMS version 745 onwards, there is <b>no</b> need to install LogStash.}}

Install LogStash 5.6.2 from the RPM downloadable from the Elasticsearch project website: https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.rpm

Once the package is downloaded, install it executing:
  
 
Within the LogStash configuration, there are three configuration blocks:

* Input: indicates how information reaches LogStash: format, port, and the identifier used to store information internally in Elastic.
* Filter: post-processing can be added here, but in this case it is not necessary, so it will be left empty.
* Output: here goes the IP and port configuration where ElasticSearch will be listening; this is where the information processed by LogStash will be saved.
 
Enter the server IP in the "host" parameter, instead of "0.0.0.0".

The situation is very similar in the case of the "logstash-sample.conf" file, where the server IP must be entered in the "localhost" parameter.
  
 
Start the service:
  
 
==== Configuration parameters in Pandora FMS Server ====

{{Warning|From Pandora FMS version 745 onwards, there is no need to configure the server configuration file, since all configuration is set through the console when enabling log collection.}}
  
 
You will need to add the following configuration to the Pandora FMS Server configuration file (/etc/pandora/pandora_server.conf) so that Pandora FMS DataServer processes the log information.
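In versions prior to 745, this block pointed the DataServer at the LogStash listener; the tokens below are illustrative of that shape (names and values should be checked against your server version's reference):

  logstash_host localhost
  logstash_port 10516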
 
==== Pandora FMS SyslogServer ====
  
From Pandora FMS version 717, a new component appeared: SyslogServer.

This component allows Pandora FMS to analyze the syslog of the machine where it is located, analyzing its content and storing the references in the ElasticSearch server.

The main advantage of SyslogServer lies in complementing log unification. Based on the SYSLOG exportation characteristics of Linux and Unix environments, SyslogServer allows logs to be consulted regardless of their origin, searching in a single common point (the log viewer of the Pandora FMS console).
  
Syslog installation takes place in both client and server. To install it, launch the following command:

  yum install rsyslog

Bear in mind that once rsyslog is installed on the computers you wish to work with, you need to access the configuration file to enable TCP and UDP input:

  /etc/rsyslog.conf
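The input lines to uncomment in /etc/rsyslog.conf typically look like the following (legacy directive syntax; exact module names and ports may differ depending on your rsyslog version):

  # Accept syslog messages over UDP on port 514
  $ModLoad imudp
  $UDPServerRun 514
  
  # Accept syslog messages over TCP on port 514
  $ModLoad imtcp
  $InputTCPServerRun 514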
 
After adjusting this, stop and restart the rsyslog service.

Once the service runs again, check the ports to verify that port 514 is accessible:

  netstat -ltnp

After enabling the service and checking the ports, configure the client so that it sends logs to the server. To that end, go to the rsyslog configuration file once more:

  /etc/rsyslog.conf

Locate and enable the line that allows configuring the remote host, specifying what you wish to send, which will look as follows:

  *.* @@remote-host:514

{{Tip|Log sending generates a container agent with the client name, so it is recommended to create agents with "alias as name" matching the client's hostname, avoiding agent duplication.}}

For more information about rsyslog configuration, visit the official website: https://www.rsyslog.com/

To enable this feature in Pandora FMS, add the following content to pandora_server.conf:
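The block referred to above usually consists of the SyslogServer tokens; the following values are illustrative and should be adjusted to your environment:

  # Enable the SyslogServer component
  syslogserver 1
  # Local file where SYSLOG entries are read from
  syslog_file /var/log/messages
  # Number of worker threads for SyslogServer
  syslog_threads 2
  # Maximum number of SYSLOG entries processed per iteration
  syslog_max 65535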
  
  
  
 
'''syslog_max''': the maximum processing window for SyslogServer; it is the maximum number of SYSLOG entries that will be processed in each iteration.

{{Warning|It is necessary to modify the configuration of your device so that logs are sent to Pandora FMS server.}}
  
 
==== Recommendations ====
 
===== Log rotation for Elasticsearch and Logstash =====
  
'''Important:''' It is recommended to create a new entry for daemon log rotation in /etc/logrotate.d, to prevent ElasticSearch or LogStash logs from growing endlessly:

  cat > /etc/logrotate.d/elastic <<EOF
  /var/log/elastic/elasticsearch.log
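A complete logrotate entry of this kind could look as follows (a sketch with typical defaults; adjust the options to your retention policy):

  /var/log/elastic/elasticsearch.log {
         weekly
         missingok
         size 300000
         rotate 3
         maxage 90
         compress
         notifempty
         copytruncate
  }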
 
===== Index Purging =====
  
You may check at any time the list of indexes and their size by launching a cURL petition against your ElasticSearch server:

  curl -q <nowiki>http://elastic:9200/_cat/indices</nowiki>?
 
  curl -q -XDELETE <nowiki>http://elastic:9200/logstash-2017.09.06</nowiki>

Where "elastic" is the server's IP, and "logstash-2017.09.06" is the name of an index from the output of the previous command.

This will free up the space used by the removed index.
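Pandora FMS automates this through its maintenance task, but the date arithmetic behind a retention window can be sketched in shell (assuming GNU date and the daily index naming shown above):

```shell
# Cutoff date for a 90-day retention policy, in the YYYY.MM.DD format
# used by the daily index names (GNU date assumed)
cutoff=$(date -d '90 days ago' +%Y.%m.%d)

index="logstash-2017.09.06"
suffix=${index#logstash-}   # date part of the index name

# YYYY.MM.DD sorts lexicographically, so the older date sorts first
oldest=$(printf '%s\n%s\n' "$suffix" "$cutoff" | sort | head -n 1)

if [ "$oldest" = "$suffix" ] && [ "$suffix" != "$cutoff" ]; then
  echo "older than retention window: $index"
fi
```

An index flagged this way is the kind of candidate for the curl -XDELETE call shown above.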
  
 
* Days to purge: to prevent the system from growing too large, you can define a maximum number of days during which the log information will be stored; after that, it will be automatically deleted by the Pandora FMS cleaning process.
=== Elasticsearch Interface ===

From Pandora FMS version 747 on, the '''ElasticSearch interface''' is available, where changes can be made to the configuration through templates.

<br><center>
[[image:ES_Interface.png|800px]]
<br></center>

In the default configuration, Pandora FMS generates one index per day, which ElasticSearch is in charge of fragmenting and distributing in such a way that, when something is searched for, ElasticSearch knows where to find the corresponding fragment.

For searches to be optimal, '''as many shards as there are ElasticSearch nodes must be configured in the environment'''.

These shards and replicas '''are configured when an index is created''', which Pandora FMS does automatically, so to modify this configuration the templates should be used.
==== Elasticsearch Templates ====

{{warning|Templates are settings that are only applied at the time of index creation. Changing a template will have no impact on existing indexes.}}

To create a '''basic template''', only the fields need to be defined:
 {
   "index_patterns": ["pandorafms*"],
   "settings": {
     "number_of_shards": 1,
     "auto_expand_replicas" : "0-1",
     "number_of_replicas" : "0"
   },
   "mappings" : {
     "properties" : {
       "agent_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_name" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "logcontent" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "source_id" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "suid" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "type" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "utimestamp" : {
         "type" : "long"
       }
     }
   }
 }
On the other hand, if you want to '''define a multi-node template''', there are several things to take into account.

When writing the template (JSON), configure as many shards as there are nodes; however, to configure the replicas correctly, subtract 1 from the number of nodes in the environment.

Thus, in an ElasticSearch environment with Pandora FMS in which 3 nodes are configured, the '''"number_of_shards"''' and '''"number_of_replicas"''' fields should look as follows:

 {
   "index_patterns": ["pandorafms*"],
   "settings": {
     "number_of_shards": 3,
     "auto_expand_replicas" : "0-1",
     "number_of_replicas" : "2"
   },

These operations can be performed through the Pandora FMS ElasticSearch interface using the native ElasticSearch commands:
* '''PUT _template/{template-name}''': allows you to enter the data of the template.
* '''GET _template/{template-name}''': allows you to view the template.

<br><center>
[[image:GetInterface.png|800px]]
<br></center>
  
 
== Migration to LogStash + Elasticsearch system ==
  
  
To migrate to the new system, run the following script, which can be found in /usr/share/pandora_server/util/:
 
== Display and Search ==
  
In a log collection tool, two things are the main concerns: looking for information, filtering by date, data sources and/or keywords, and seeing that information drawn as occurrences per time unit. In this example, all log messages from all sources in the last hour are looked for:
  
 
<br><center>
[[image:LogsVistaNew.png|850px]]
<i>View of occurrences over time</i>
<br></center>
 
<br>

There is a series of filters that can be used to display information:

* Filter by search type: it searches by exact match, all words or any word.
* Filter by message content: it searches the desired text in the content of the message.
* Filter by log source (source id).
* Agent filter: it narrows down the search results to those generated by the selected agent.
* Filter by group: it limits the selection of agents in the agent filter.
* Filter by date.
  
The most important and useful field is the search string (''search'' in the screenshot). This can be a simple text string, as in the previous case, or a wildcard expression, for example an IP address:

  192.168*

<b>Note</b>: Searches should be done using complete words or beginning sub-strings of the search words. For example:

  192.168.80.14
  
 
<br><center>
[[image:LogsVistaNew2.png|850px]]
<br></center>
  
 
<br><center>
[[image:LogsVistaNew4.png|850px]]
<br></center>
  
 
<br><center>
[[image:LogsVistaNew5.png|850px]]
<br></center>
  
If the option to see the context of the filtered content is checked, the result will be an overview of the situation, with information about other log lines related to your search:
  
 
<br><center>
[[image:LogsVistaNew3.png|850px]]
<br></center>
  
=== Display and advanced search ===
  
From Pandora FMS 7.0NG OUM727 on, advanced options for log data display are available.

With this feature, log entries can be turned into a graphic, classifying the information based on '''data capture models'''.

These data capture models are basically regular expressions and identifiers that allow analyzing data sources and displaying them as a graphic.

To access the advanced options, press ''Advanced options''. A form will appear where the result view type can be chosen:

- Show log entries (plain text).
- Show log graphic.
  
 
  
Under the ''show log graphic'' option, the capture model to be used can be selected.

The default model, ''Apache log model'', offers the possibility of parsing Apache logs in standard format (access_log), retrieving comparative graphics of response time, grouping by visited page and response code:
  
 
  
By pressing the edit button, the selected capture model is edited. With the create button, a new capture model is added.
  
  
In the form that appears, the following can be chosen:

;Title: a name for the capture model.
;A data capture regular expression: each field to extract is identified with the subexpression between parentheses ''(expression to be captured)''.
;The fields: in the order in which they were captured with the regular expression. The results will be grouped by the concatenation of the key fields, which are those whose name is not between underscores:

  key, _value_

  key1,key2,_value_

  key1,_value_,key2

''Note:'' If no value field is specified, it will automatically be the count of occurrences matching the regular expression.

''Note 2:'' If a ''value'' column is specified, you may choose either to represent the accumulated value (default behavior) or to check the checkbox to represent the average.
  
''Example''

To retrieve log entries with the following format:

  Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.
  
  
To count the number of logins, grouping by user, use:

Regular expression:

  Starting Session \d+ of user (.*?)\.

Fields:

  username

This capture model will return the number of logins per user during the selected time range.
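The same extraction can be reproduced on the command line to check the regular expression before saving the model. This sketch applies an equivalent pattern (BRE syntax instead of the \d+ shorthand) to sample lines and counts logins per user:

```shell
# Sample syslog lines matching the format discussed above
printf '%s\n' \
  'Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.' \
  'Sep 19 12:05:36 nova systemd: Starting Session 6133 of user root.' \
  'Sep 19 12:07:01 nova systemd: Starting Session 6134 of user pandora.' \
  > sample_syslog.txt

# Extract the captured username and count occurrences per user
result=$(sed -n 's/.*Starting Session [0-9][0-9]* of user \(.*\)\./\1/p' sample_syslog.txt \
  | sort | uniq -c | sort -rn)

echo "$result"
```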
 
  
== Agent configuration ==
  
Log collection is done by both Windows and Unix agents (Linux, MacOS X, Solaris, HP-UX, AIX, BSD, etc.). In the case of Windows agents, information can also be obtained from the Windows Event Viewer, using the same filters as in the event monitoring module.

Here are two examples to capture log information on Windows and Unix:
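On the Windows side, log capture relies on the agent's log module syntax; a minimal module of this kind could look as follows (a sketch: the module name and the monitored event channel are illustrative, check your agent version's reference for the exact tokens):

  module_begin
  module_name Eventlog_System
  module_type log
  module_logevent
  module_source System
  module_end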
  
 
Only agents in version 5.0 or later understand this new syntax, so update the agents if you want to use this new Enterprise feature.

{{Warning|To define log modules in Windows, it must be done in the agent configuration file. If these modules are created directly in the console, the modules will not be initialized.}}
  
 
=== Unix Systems ===
 
=== Unix Systems ===
Line 443: Line 666:
  
 
Similar to the parsing logs plugin (grep_log), grep_log_module plugin sends the processed log information to the log collector named "Syslog" as the source of the log. Use the \.\* regular expression (In this case "all") as the pattern when choosing which lines will be sent and which ones will not.
 
Similar to the parsing logs plugin (grep_log), grep_log_module plugin sends the processed log information to the log collector named "Syslog" as the source of the log. Use the \.\* regular expression (In this case "all") as the pattern when choosing which lines will be sent and which ones will not.
 +
 +
== Log Source on Agent View ==
 +
 +
From Pandora FMS version 749, a box called 'Log sources status' has been added in the Agent View, in which the date of the last log update by that agent will appear. By clicking on the Review magnifying glass icon, we will be redirected to the Log Viewer view filtered by that log.
 +
 +
<center>
 +
[[Image: agent_view_log.png|800px]]
 +
</center>
  
  

Latest revision as of 11:45, 11 September 2020


1 Log Collection

1.1 Introduction

Before version 5.0, Pandora FMS did not provide a solution to this problem. Pandora FMS Enterprise 5.0 offers a solution to manage hundreds of megabytes of daily log data, allowing you to reuse the same monitoring agents for specific log data collection, with a syntax very similar to the one already used for monitoring.

Log monitoring in Pandora FMS is approached in two different ways:

  1. Based on modules: it represents logs in Pandora as asynchronous monitors, being able to associate alerts to the detected inputs that fulfill a series of preconfigured conditions by the user. The modular representation of the logs allows you to:
    1. Create modules that count the occurrences of a regular expression in a log.
    2. Obtain the lines and context of log messages
  2. Based on combined display: it allows the user to view in a single console all the information from logs of multiple origins that you may want to capture, organizing the information sequentially using the timestamp in which the logs were processed.

From version 7.0NG 712, Pandora FMS incorporates ElasticSearch to store log information, which implies a significant performance improvement.

1.2 How it works

The process is simple:



LogsEsquema.png



  • The logs analyzed by the agents (eventlog or text files) are forwarded to Pandora Server in RAW form within the XML reporting agent:
  • Pandora server (DataServer) receives the XML agent, which contains information about both monitoring and logs.
  • When the DataServer processes XML data, it identifies log information, keeping in the primary database the references about the agent that was reported and the source of the log, automatically sending information to ElasticSearch in order to be stored.
  • Pandora FMS stores the data in Elasticsearch indexes generating a daily index for each Pandora FMS instance.
  • Pandora FMS server has a maintenance task that deletes indexes in the interval defined by the system admin (90 days by default).

1.3 Configuration

1.3.1 Server Configuration

The new log storage system, based on ElasticSearch, requires configuring several components.

Template warning.png

From Pandora FMS version 745 onwards, there is no need to use LogStash, since the Pandora FMS server communicates directly with ElasticSearch, so LogStash related configurations do not need to be applied.

 


1.3.1.1 Server Requirements

Each component (Pandora FMS Server, Elasticsearch) can be distributed on separate servers.

If you choose to place Elasticsearch and LogStash on the same server, the following specifications are recommended:

  • Centos 7.
  • At least 4GB of RAM, although 6GB of RAM are recommended for each ElasticSearch instance.
  • At least 2 CPU cores
  • At least 20GB of disk space for the system.
  • At least 50GB of disk space for ElasticSearch data (the amount can be different depending on the amount of data to be stored).
  • Connectivity from Pandora FMS server to the Elasticsearch API (port 9200/TCP by default).

1.3.1.2 Installing and configuring ElasticSearch

Before you begin installing these components, install Java on the machine:

yum install java

Once installed, install Elasticsearch following the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/install-elasticsearch.html

When installing in CentOS/Red Hat systems, the recommended installation is by means of rpm: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/rpm.html


Configure the service:

Configure network options and, optionally, data locations (and logs from Elasticsearch itself) in the configuration file located at /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Network -----------------------------------
# Set a custom port for HTTP:
http.port: 9200
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by a comma):
path.data: /var/lib/elastic
# Path to log files:
path.logs: /var/log/elastic

Uncomment and define the following lines, entering the server's IP in the network.host parameter:

cluster.name: elkpandora
node.name: ${HOSTNAME}
bootstrap.memory_lock: true
network.host: ["127.0.0.1", "IP"]
  • cluster.name: Cluster name.
  • node.name: Node name; with ${HOSTNAME} it will take the name of the host.
  • bootstrap.memory_lock: It must always be "true".
  • network.host: Server IP.

If we are working with just one node, it will be necessary to add the following line:

discovery.type: single-node

If we are working with a cluster, we will need to fill in the discovery.seed_hosts parameter.

discovery.seed_hosts: ["ip", "ip", "ip"]

Or:

discovery.seed_hosts:
 - 192.168.1.10:9300
 - 192.168.1.11
 - seeds.mydomain.com

The options for the resources allocated to ElasticSearch must be adapted by adjusting the parameters available in the configuration file located at /etc/elasticsearch/jvm.options. Use at least 2GB for Xms.

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2g
-Xmx2g

The resources will be assigned according to the use of ElasticSearch. It is recommended to follow the official ElasticSearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
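As a sketch of the sizing guideline from the official heap-size documentation (an assumption on our part, not a Pandora FMS rule): allocate about half of the host's RAM to the heap, and never more than roughly 32 GB so compressed object pointers stay enabled:

```shell
# Hypothetical helper: derive Xms/Xmx from total host RAM.
ram_mb=8192                                          # total RAM of the host, in MB
heap_mb=$(( ram_mb / 2 ))                            # roughly half of the RAM
if [ "$heap_mb" -gt 31744 ]; then heap_mb=31744; fi  # stay below ~32 GB
echo "-Xms${heap_mb}m -Xmx${heap_mb}m"               # prints -Xms4096m -Xmx4096m
```

For an 8 GB host this yields -Xms4096m/-Xmx4096m, which would replace the Xms/Xmx lines in jvm.options.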

It is necessary to set the memlock parameter to unlimited in the ElasticSearch service file.

The path to the file is:

/usr/lib/systemd/system/elasticsearch.service

Where we will need to add the following parameter:

MAX_LOCKED_MEMORY=unlimited

Once finished, it will be necessary to run the following command:

systemctl daemon-reload && systemctl restart elasticsearch

The command to start the service is:

systemctl start elasticsearch


Note: If the service fails to start, check the logs located at /var/log/elasticsearch/

To check ElasticSearch installation, just execute the following command:

curl -q http://{IP}:9200/

Which should return an output similar to this one:

{
  "name" : "3743885b95f9",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "7oJV9hXqRwOIZVPBRbWIYw",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
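To check the installed version from a script, the "number" field can be extracted from that JSON response. A minimal sketch, with the sample response hard-coded so it runs without a live server (in practice it would come from the curl command above):

```shell
# Shortened sample of the response returned by: curl -q http://{IP}:9200/
response='{"name":"3743885b95f9","version":{"number":"7.6.2"}}'
# Extract the value of the "number" field.
version=$(printf '%s' "$response" \
  | sed -n 's/.*"number"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$version"   # prints 7.6.2
```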



It is advised to visit the link to ElasticSearch best practices for production environments: https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config.html#dev-vs-prod



1.3.1.3 Installing and configuring LogStash

Template warning.png

From Pandora FMS version 745 onwards, there is no need to install LogStash.

 


Install LogStash 5.6.2 from the RPM available for download from the Elasticsearch project website: https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.rpm

Once the package is downloaded, install it executing:

rpm -i logstash-X.X.X.rpm

Configure the service

Within logstash configuration, there are three configuration blocks:

  • Input: Indicates how information reaches logstash, format, port, and the identifier used to store information internally in Elastic.
  • Filter: You can add a post-processing here, but in this case it is not necessary, so it will be left empty.
  • Output: Here comes the IP configuration and port where Elasticsearch will be listening. This is the place where the information processed by Logstash will be saved.


Configuration file:

/etc/logstash/conf.d/logstash.conf


Example of a configuration file:

# This input block will listen on port 10516 for logs to come in.
# host should be an IP on the Logstash server ("0.0.0.0" listens on all interfaces).
# codec => "json" indicates that the lines received are expected to be in JSON format.
# type => "pandora_remote_log_entry" is an identifier to help identify messaging streams in the pipeline.
input {
 tcp {
    host  => "0.0.0.0"
    port  => 10516
    codec => "json"
    type  => "pandora_remote_log_entry"
 }
}
# This is an empty filter block. You may later add other filters here to further process
# your log lines
filter { }
output {
  elasticsearch { hosts => ["0.0.0.0:9200"] }
}

Enter the server IP in the "host" parameter instead of "0.0.0.0".

The situation is very similar for the "logstash-sample.conf" file, where "localhost" must be replaced with the server IP.
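As a reference, the JSON entries reaching this input carry fields like those of the Elasticsearch index mapping shown later (agent_id, source_id, logcontent, utimestamp). The exact schema and values below are assumptions for illustration only:

```shell
# Hypothetical log entry; field names taken from the index mapping,
# values invented for the example.
entry=$(printf '{"agent_id":1,"source_id":"Syslog","logcontent":"test line","utimestamp":%s}' \
  "$(date +%s)")
echo "$entry"
# It could be delivered manually to the input above for testing (assumption):
#   echo "$entry" | nc <logstash-host> 10516
```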

Start the service:

systemctl start logstash

Note: If you install LogStash on CentOS 6 despite our recommendation, you can start it with the following command:

initctl start logstash

1.3.1.4 Configuration parameters in Pandora FMS Server

Template warning.png

From Pandora FMS version 745 there is no need to configure the server configuration file, since all configuration is set through the console when enabling log collection.

 


You will need to add the following configuration to Pandora FMS Server configuration file (/etc/pandora/pandora_server.conf) so that Pandora FMS DataServer processes the log information.

Important: Any log that reaches Pandora FMS without this configuration enabled will be discarded.

logstash_host eli.artica.lan
logstash_port 10516

1.3.1.5 Pandora FMS SyslogServer

From Pandora FMS version 717, a new component appeared: SyslogServer.

This component allows Pandora FMS to analyze the Syslog of the machine where it is located, analyzing its content and storing the references in the ElasticSearch server.

The main advantage of SyslogServer lies in complementing log unification. Taking advantage of the SYSLOG export features of Linux and Unix environments, SyslogServer makes it possible to query logs regardless of their origin, searching in a single common point (the Pandora FMS console log viewer).

rsyslog must be installed on both client and server. To install it, launch the following command:

yum install rsyslog

Bear in mind that once rsyslog is installed on the computers you wish to work with, you need to access the configuration file to enable TCP and UDP input.

/etc/rsyslog.conf

After adjusting this, stop and restart the rsyslog service.

Once the service is running again, check the ports to verify that port 514 is accessible.

netstat -ltnp

After enabling the service and checking the ports, configure the client so that it sends logs to the server. To that end, go to the rsyslog configuration file once more.

/etc/rsyslog.conf

Locate and enable the line that allows configuring the remote host, specifying what you wish to send. It will look as follows (a single @ forwards over UDP, while @@ forwards over TCP):

*.* @@remote-host:514

Info.png

Log sending generates a container agent with the client name, so it is recommended to create agents with “alias as name” matching the client's hostname avoiding agent duplication.

 


For more information about rsyslog configuration, visit their official website: https://www.rsyslog.com/

To enable this feature, add the following content to the pandora_server.conf configuration file:


# Enable (1) or disable (0) the Pandora FMS Syslog Server (PANDORA FMS ENTERPRISE ONLY).
syslogserver 1
# Full path to syslog's output file (PANDORA FMS ENTERPRISE ONLY).
syslog_file /var/log/messages
# Number of threads for the Syslog Server (PANDORA FMS ENTERPRISE ONLY).
syslog_threads 2
# Maximum number of lines queued by the Syslog Server's producer on each run (PANDORA FMS ENTERPRISE ONLY).
syslog_max 65535


A LogStash/ElasticSearch server must be enabled and configured. Review the preceding points to learn how to configure it.

  • syslogserver: Boolean, enables (1) or disables (0) the local SYSLOG analysis engine.
  • syslog_file: Location of the file where the SYSLOG entries are delivered.
  • syslog_threads: Maximum number of threads to be used in the SyslogServer producer/consumer system.
  • syslog_max: Maximum processing window for SyslogServer; the maximum number of SYSLOG entries that will be processed in each iteration.

Template warning.png

It is necessary to modify the configuration of your device so that logs are sent to Pandora FMS server.

 


1.3.1.6 Recommendations

1.3.1.6.1 Log rotation for Elasticsearch and Logstash

Important: It is recommended to create a new entry for daemon rotation logs in /etc/logrotate.d, to prevent Elasticsearch or LogStash logs from endlessly growing:

cat > /etc/logrotate.d/elastic <<EOF
/var/log/elastic/elasticsearch.log
/var/log/logstash/logstash-plain.log {
       weekly
       missingok
       size 300000
       rotate 3
       maxage 90
       compress
       notifempty
       copytruncate
}
EOF
1.3.1.6.2 Index Purging

You may check the list of indexes and their size at any time by launching a cURL request against the ElasticSearch server:

curl -q http://elastic:9200/_cat/indices?

Where "elastic" is the server's IP.

To remove any of these indexes, execute the DELETE command:

curl -q -XDELETE http://elastic:9200/logstash-2017.09.06

Where "elastic" is the server's IP and "logstash-2017.09.06" is one of the indexes listed by the previous command.

This will free up the space used by the removed index.
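Purging can also be scripted: since index names embed their date, an index's age can be computed and compared against the retention window (90 days by default). A sketch assuming GNU date and the logstash-YYYY.MM.DD naming shown above:

```shell
idx="logstash-2017.09.06"        # an index name taken from _cat/indices
# Turn "logstash-2017.09.06" into "2017-09-06".
idx_date=$(printf '%s' "$idx" | sed 's/^logstash-//; s/\./-/g')
# Age of the index in days (GNU date assumed).
age_days=$(( ( $(date -u +%s) - $(date -u -d "$idx_date" +%s) ) / 86400 ))
if [ "$age_days" -gt 90 ]; then
  # Would actually run: curl -q -XDELETE "http://elastic:9200/$idx"
  echo "purge $idx (age: $age_days days)"
fi
```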

1.3.2 Console Settings

To enable the log system display, enable the following configuration:


Logs1.JPG


Then set the log viewer performance in the 'Log Collector' tab:


Logs2.JPG


On this screen configure:

  • IP or FQDN address of the server that hosts the Elasticsearch service
  • Port on which the Elasticsearch service is listening.
  • Number of logs shown. To speed up the response of the console, dynamic record loading has been added: the user must scroll to the bottom of the page to force loading of the next set of available records. The size of these groups can be set in this field as the number of records per group.
  • Days to purge: To keep the size of the system under control, you can define a maximum number of days during which log information will be stored; after that, it will be automatically deleted by the Pandora FMS cleaning process.

1.3.3 Elasticsearch Interface

From Pandora FMS version 747 on, the Elasticsearch interface is available, where configuration changes can be made through templates.



ES Interface.png


In the default configuration, Pandora FMS generates one index per day, which Elasticsearch is in charge of fragmenting into shards and distributing, in such a way that when something is searched for, Elasticsearch knows in which shard to find it.

For searches to be optimal, by default Elasticsearch generates one shard per index, so the environment should be configured with as many shards as there are Elasticsearch nodes.

Shards and replicas are configured when an index is created, which Pandora FMS does automatically, so to modify this configuration the templates should be used.

1.3.3.1 Templates of Elasticsearch

Template warning.png

Templates are settings that are only applied at the time of index creation. Changing a template will have no impact on existing indexes.

 


To create a basic template, we only have to define the fields:

{
 "index_patterns": ["pandorafms*"],
 "settings": {
   "number_of_shards": 1,
   "auto_expand_replicas" : "0-1",
   "number_of_replicas" : "0"
 },
"mappings" : {
     "properties" : {
       "agent_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_name" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "logcontent" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "source_id" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "suid" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "type" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "utimestamp" : {
         "type" : "long"
       }
     }
   }
 }
}

On the other hand, if we want to define a multi-node template, there are several things to take into account.

When writing the template configuration (JSON), configure as many shards as there are nodes; to configure the replicas correctly, subtract 1 from the number of nodes in the environment.

Thus, in an Elasticsearch environment with Pandora FMS in which 3 nodes are configured, the "number_of_shards" and "number_of_replicas" fields should look as follows:

{
 "index_patterns": ["pandorafms*"],
 "settings": {
   "number_of_shards": 3,
   "auto_expand_replicas" : "0-1",
   "number_of_replicas" : "2"
 },
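The rule above can be stated as: shards = number of nodes, replicas = nodes - 1. A trivial sketch for the 3-node example:

```shell
nodes=3                        # Elasticsearch nodes in the environment
shards=$nodes                  # one shard per node
replicas=$(( nodes - 1 ))      # replicas = nodes - 1
echo "number_of_shards=$shards number_of_replicas=$replicas"
# prints number_of_shards=3 number_of_replicas=2
```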

We can perform these operations through the Elasticsearch interface in Pandora FMS using native Elasticsearch commands.

  • PUT _template/nombredeltemplate: allows entering the template data.
  • GET _template/nombredeltemplate: allows viewing the template.

GetInterface.png


1.4 Migration to LogStash + Elasticsearch system

After setting up the new log storage system, migrate all the data previously stored in Pandora FMS, distributed among directories, to the new system.


To migrate it to the new system, run the following script that can be found in /usr/share/pandora_server/util/


# Migrate Log Data < 7.0NG 712 to >= 7.0NG 712
/usr/share/pandora_server/util/pandora_migrate_logs.pl /etc/pandora/pandora_server.conf

1.5 Display and Search

In a log collecting tool, two things matter most: looking for information, filtering by date, data sources and/or keywords; and displaying that information as occurrences per time unit. In this example, all log messages from all sources in the last hour are searched for:


LogsVistaNew.png View of occurrences over time




There is a series of filters that can be used to display information:

  • Filter by search type: it searches by exact match all words or any word.
  • Filter by message content: it searches the desired text in the content of the message.
  • Filter by log source (source id).
  • Agent Filter: it narrows down the search results to those generated by the selected agent.
  • Filter by group: it limits the selection of agents in the agent filter.
  • Filter by date.


The most important and useful field is the search string ("search" in the screenshot). It can be a simple text string, as in the previous case, or a wildcard expression, such as an IP address:

192.168*

Note: Searches should be done using complete words or beginning sub-strings of the search words. For example:

192.168.80.14
192.168*
Warning in somelongtext
Warning in some*
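What "beginning sub-strings" means can be emulated locally with grep: the wildcard search 192.168* behaves like a prefix match. This is only an illustration of the matching behavior, not the actual query Pandora FMS sends to Elasticsearch:

```shell
# Two sample log lines; only the first starts with the 192.168 prefix.
matches=$(printf '%s\n' \
    '192.168.80.14 connection refused' \
    '10.0.0.1 connection accepted' \
  | grep -c '^192\.168')
echo "$matches"   # prints 1
```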

One of the three types of search must be selected:

  • Exact match: Literal string search.

LogsVistaNew2.png


  • All words: Search of all the indicated words, regardless of the order, taking into account that each word is separated by spaces.

LogsVistaNew4.png


  • Any word: Search of any indicated word, regardless of the order, taking into account that each word is separated by spaces.

LogsVistaNew5.png


If the option to see the context of the filtered content is checked, the result will be an overview of the situation with information about other log lines related to your search:


LogsVistaNew3.png


1.5.1 Display and advanced search

Log data display advanced options are available from Pandora FMS 7.0NG OUM727.

With this feature, log entries can be turned into a graphic, sorting out the information according to data capture templates.

These data capture templates are basically regular expressions and identifiers, that allow analyzing data sources and showing them as a graphic.


To access advanced options, press Advanced options. A form, where the result view type can be chosen, will appear:

  • Show log entries (plain text).
  • Show log graphic.

Graph log.png

Under the show log graphic option, the capture template can be selected.

The Apache log model template by default offers the possibility of parsing Apache logs in standard format (access_log), enabling retrieving time response comparative graphics, sorting by visited site and response code:

Graph log2.png

By pressing the edit button, the selected capture template is edited. With the create button, a new capture template is added.


Graph log3.png


In the form, the following can be configured:

  • Title: capture template name.
  • Data capture regular expression: each field to be retrieved is identified with a subexpression between brackets (expression to be captured).
  • Fields: the order in which the fields have been captured through the regular expression. The results will be sorted by the concatenation of the key fields, which are those whose name is not written between underscores:

key,_value_

key,key2,_value_

key1,_value_,key2

Comments: If the value field is not specified, it will automatically be the number of matches of the regular expression.

Comments 2: If a value column is specified, you may choose between representing the accumulated value (default behavior) or checking the checkbox to represent the average.

Example

If log entries must be retrieved with the following format:

Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.
Sep 19 12:05:01 nova systemd: Starting Session 6131 of user root.


To count the number of logins by user, use:


Regular expression

Starting Session \d+ of user (.*?)\.


Fields:

username


This capture template will return the number of logins by user during the selected time range.
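The same capture logic can be reproduced locally to sanity-check the regular expression before saving the template (GNU grep with -P assumed; this is an illustration, not how Pandora FMS evaluates capture templates):

```shell
# Count logins per user with the template's regular expression.
logins=$(printf '%s\n' \
    'Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.' \
    'Sep 19 12:05:01 nova systemd: Starting Session 6131 of user root.' \
  | grep -oP 'Starting Session \d+ of user \K.*?(?=\.)' \
  | sort | uniq -c)
echo "$logins"   # counts 2 occurrences of "root"
```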


Graph log4.png

1.6 Agent configuration

Log collection is done by both Windows and Unix agents (Linux, MacOsX, Solaris, HP-UX, AIX, BSD, etc). In the case of Windows agents, you can also obtain information from the Windows Event Viewer, using the same filters as in the monitoring module event viewer.

Here are two examples to capture log information on Windows and Unix:

1.6.1 Windows

module_begin
module_name Eventlog_System
module_type log
module_logevent
module_source System
module_end 
module_begin
module_name PandoraAgent_log
module_type log
module_regexp C:\archivos de programa\pandora_agent\pandora_agent.log
module_description This module will return all lines from the specified logfile
module_pattern .*
module_end

In both cases, the only difference from a monitoring module when defining a log source is:

module_type log 

This new syntax is only understood by agent version 5.0 and later, so update the agents if you want to use this new Enterprise feature.


Template warning.png

To define log modules in Windows, it is necessary to do it in the agent configuration file. If these modules are created directly in the console, they will not be initialized.

 


1.6.2 Unix Systems

In Unix, a new plugin that comes with agent version 5.0 is used. Its syntax is simple:

module_plugin grep_log_module /var/log/messages Syslog \.\*

Similar to the log parsing plugin (grep_log), the grep_log_module plugin sends the processed log information to the log collector, with "Syslog" as the name of the log source. Use the \.\* regular expression (in this case, "everything") as the pattern to choose which lines will be sent and which will not.
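A rough shell equivalent of what the \.\* pattern means (a sketch, not the plugin's implementation): every new line of the watched file matches the pattern and is therefore forwarded:

```shell
# Two sample syslog lines; the .* pattern matches both of them.
sent=$(printf '%s\n' \
    'Sep 19 12:05:01 nova sshd: Accepted password for root' \
    'Sep 19 12:05:02 nova kernel: eth0 link up' \
  | grep -c '.*')
echo "$sent"   # prints 2
```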

1.7 Log Source on Agent View

From Pandora FMS version 749, a box called 'Log sources status' has been added to the Agent View, showing the date of the last log update by that agent. Clicking on the magnifying glass icon redirects to the Log Viewer filtered by that log.

Agent view log.png

