Pandora FMS Documentation: Log Monitoring


Go back to Pandora FMS documentation index


1 Log Collection

1.1 Introduction

Info.png

Enterprise version.
Version 5.0 or higher.

 


Log monitoring in Pandora FMS is approached in two different ways:

  1. Based on modules: logs are represented in Pandora FMS as asynchronous monitors, and alerts can be associated with detected entries that meet a series of conditions preconfigured by the user. The modular representation of logs allows you to:
    1. Create modules that count the occurrences of a regular expression in a log.
    2. Obtain the lines and context of log messages.
  2. Based on combined display: it allows the user to view, in a single console, all the log information from the multiple sources you may want to capture, organizing it chronologically by the timestamp at which the logs were processed.

From version 7.0NG 712, Pandora FMS incorporates ElasticSearch to store log information, which implies a significant performance improvement.

1.2 How it works



LogsEsquema.png



  • The logs analyzed by the software agents (event log or text files) are forwarded to the Pandora FMS server in raw form, within the agent's XML report.
  • The Pandora FMS data server receives the agent XML, which contains both monitoring and log information.
  • When the DataServer processes the XML data, it identifies the log information, keeping in the primary database the references to the reporting agent and the log source, and automatically sending the information to ElasticSearch to be stored.
  • Pandora FMS stores the data in Elasticsearch indexes, generating a daily index for each Pandora FMS instance (these can be listed as shown below).
  • Pandora FMS server has a maintenance task that deletes indexes in the interval defined by the system admin (90 days by default).
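
These daily indexes can be listed directly against the ElasticSearch API. A hedged example (the pandorafms* index pattern is taken from the templates shown later in this document, and elastic stands for the server's IP):

curl -s 'http://elastic:9200/_cat/indices/pandorafms*?v'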

1.3 Configuration

1.3.1 Server Configuration

The new log storage system, based on ElasticSearch, requires configuring several components.


1.3.1.1 Server Requirements

Each component (Pandora FMS Server, Elasticsearch) can be distributed on separate servers.

  • CentOS 7.
  • At least 4GB of RAM, although 6GB of RAM are recommended for each ElasticSearch instance.
  • At least 2 CPU cores
  • At least 20GB of disk space for the system.
  • At least 50GB of disk space for ElasticSearch data (the amount can be different depending on the amount of data to be stored).
  • Connectivity from Pandora FMS server to Elasticsearch API (port 9200/TCP by default).

1.3.1.2 Installing and configuring ElasticSearch

Before you begin installing these components, install Java on the machine:

yum install java

Once Java is installed, install Elasticsearch following the official documentation; Debian environments have their own installation instructions.
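
For reference only, a minimal installation sketch for CentOS 7 using the Elasticsearch 7.x RPM repository (follow the official documentation if anything differs in your environment):

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
yum install elasticsearch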


Configure the service:

Configure the network options and, optionally, the data locations (and the logs of Elasticsearch itself) in the configuration file located at /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Network -----------------------------------
# Set a custom port for HTTP:
http.port: 9200
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by a comma):
path.data: /var/lib/elastic
# Path to log files:
path.logs: /var/log/elastic

Uncomment and define the following lines, entering the server's IP address in the network.host parameter:

cluster.name: elkpandora
node.name: ${HOSTNAME}
bootstrap.memory_lock: true
network.host: ["127.0.0.1", "IP"]
  • cluster.name: Cluster name.
  • node.name: Name of the node; with ${HOSTNAME} it will take the host's name.
  • bootstrap.memory_lock: It must always be "true".
  • network.host: Server IP.
    • If we are working with just one node, it will be necessary to add the following line:
discovery.type: single-node
    • If we are working with a cluster, we will need to complete the discovery.seed_hosts parameter.
discovery.seed_hosts: ["ip", "ip", "ip"]

Or:

discovery.seed_hosts:
 - 192.168.1.10:9300
 - 192.168.1.11
 - seeds.mydomain.com
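
Putting the above together, a minimal single-node /etc/elasticsearch/elasticsearch.yml could look like this (a sketch only; 192.168.1.10 is a placeholder IP):

cluster.name: elkpandora
node.name: ${HOSTNAME}
bootstrap.memory_lock: true
network.host: ["127.0.0.1", "192.168.1.10"]
http.port: 9200
path.data: /var/lib/elastic
path.logs: /var/log/elastic
discovery.type: single-node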

The resources allocated to ElasticSearch must be adjusted through the parameters available in the configuration file located at /etc/elasticsearch/jvm.options. Use at least 2GB for Xms.

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m

The resources will be assigned according to the use of ElasticSearch. It is recommended to follow the official documentation.

It is also necessary to set an unlimited memory lock (memlock) for the service, editing the file /usr/lib/systemd/system/elasticsearch.service to add the following parameter:

MAX_LOCKED_MEMORY=unlimited
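
Depending on how the package was installed, the equivalent memory-lock settings may live elsewhere; a hedged sketch of the two usual places:

# In a systemd drop-in (created with "systemctl edit elasticsearch"):
[Service]
LimitMEMLOCK=infinity

# In /etc/sysconfig/elasticsearch on RPM-based installations:
MAX_LOCKED_MEMORY=unlimited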

Once finished, run the following command:

systemctl daemon-reload && systemctl restart elasticsearch

The command to start the service is:

systemctl start elasticsearch

Info.png

If the service fails to start, check the logs located at /var/log/elasticsearch/

 


To check ElasticSearch installation, just execute the following command:

curl -q http://{IP}:9200/


Which should return an output similar to this one:

{
  "name" : "3743885b95f9",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "7oJV9hXqRwOIZVPBRbWIYw",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}



It is advisable to review the ElasticSearch best practices for production environments: https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config.html#dev-vs-prod



1.3.1.3 Pandora FMS SyslogServer

Info.png

Enterprise version.
Version NG 717 or higher.

 


This component allows Pandora FMS to analyze the syslog of the machine where it is located, analyzing its content and storing the references in the ElasticSearch server.

The main advantage of SyslogServer lies in complementing log unification. Taking advantage of the SYSLOG export features of Linux and Unix environments, SyslogServer allows you to query logs regardless of their origin, searching from a single common point (the Pandora FMS console log viewer).

Syslog must be installed both on the client and on the server. To install it, launch the following command:

yum install rsyslog

Bear in mind that, once rsyslog is installed on the computers you wish to work with, you need to edit the configuration file /etc/rsyslog.conf to enable TCP and UDP input.

(...)

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

(...)


After adjusting this, restart the rsyslog service. Once the service is running again, check that port 514 is listening:

netstat -ltnp

For more information about rsyslog configuration, visit the official website.

Configure the client so that it sends logs to the Syslog server. To that end, go to the client rsyslog configuration file at /etc/rsyslog.conf and locate and enable the line that allows configuring the remote host.

*.* @@remote-host:514
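
In rsyslog, @@ forwards over TCP, while a single @ would use UDP. Once both sides are restarted, the forwarding can be verified with a quick test (hedged example):

# On the client: emit a test syslog message
logger "Pandora FMS syslog forwarding test"

# On the Pandora FMS server: confirm it reaches the monitored file
grep "syslog forwarding test" /var/log/messages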

Info.png

Sending logs generates a container agent with the client's name, so it is recommended to create the agents with "alias as name" matching the client's hostname, to avoid agent duplication.

 


To enable this feature in the Pandora FMS server, add the following content to the pandora_server.conf file:


# Enable (1) or disable (0) the Pandora FMS Syslog Server
#  (PANDORA FMS ENTERPRISE ONLY).
syslogserver 1

# Full path to syslog's output file (PANDORA FMS ENTERPRISE ONLY).
syslog_file /var/log/messages

# Number of threads for the Syslog Server
#  (PANDORA FMS ENTERPRISE ONLY).
syslog_threads 2

# Maximum number of lines queued by the Syslog Server's 
#   producer on each run (PANDORA FMS ENTERPRISE ONLY).
syslog_max 65535

  • syslogserver: Boolean, enables (1) or disables (0) the local SYSLOG analysis engine.
  • syslog_file: Location of the file where SYSLOG entries are delivered.
  • syslog_threads: Maximum number of threads to be used in the SyslogServer producer/consumer system.
  • syslog_max: Maximum processing window for SyslogServer; it is the maximum number of SYSLOG entries that will be processed in each iteration.

Template warning.png

It is necessary to modify the configuration of your device so that logs are sent to Pandora FMS server.

 


1.3.1.4 Recommendations

1.3.1.4.1 Log rotation for Elasticsearch

Important: It is recommended to create a new log rotation entry for the daemon in /etc/logrotate.d, to prevent Elasticsearch logs from growing endlessly:

cat > /etc/logrotate.d/elastic <<EOF
/var/log/elastic/elasticsearch.log {
       weekly
       missingok
       size 300000
       rotate 3
       maxage 90
       compress
       notifempty
       copytruncate
}
EOF
1.3.1.4.2 Index Purging

You may check the list of indexes and their size at any time by launching a cURL request against your ElasticSearch server:

curl -q http://elastic:9200/_cat/indices?

Where elastic is the server's IP.

To remove any of these indexes, execute the DELETE command:

curl -q -XDELETE http://elastic:9200/{index-name}

Where elastic is the server's IP and {index-name} is one of the indexes listed by the previous command. This will free up the space used by the deleted index.
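
For instance, assuming the previous command listed a daily index named pandorafms-2020.03.26 (a hypothetical name), it could be removed with:

curl -q -XDELETE http://elastic:9200/pandorafms-2020.03.26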

1.3.2 Console Settings

To enable the log display system, activate the following configuration:


Logs1.JPG


Then configure the log viewer behavior in Configuration > Log Collector:


Logs2.JPG


On this screen configure:

  • IP or FQDN address of the server hosting the Elasticsearch service.
  • Number of logs being shown. To speed up the response of the console, dynamic loading of records has been added. To use it, the user must scroll to the bottom of the page, forcing the loading of the next set of available records. The size of these groups can be set in this field as the number of records per group.
  • Days to purge old information: To limit the size of the stored information, you can define a maximum number of days during which the log information will be kept; after that period, it will be automatically deleted by the Pandora FMS cleaning process.

1.3.3 Elasticsearch Interface

Info.png

Enterprise version.
Version NG 747 or higher.

 



ES Interface.png


In the default configuration, Pandora FMS generates one index per day, which Elasticsearch is in charge of fragmenting and distributing so that, when something is searched for, Elasticsearch knows where to find the corresponding shard.

For this search to be optimal, Elasticsearch generates one shard per index by default, so you must configure as many shards in your environment as Elasticsearch nodes you have.

These shards and replicas are configured when an index is created; Pandora FMS generates the indexes automatically, so to modify this configuration you should use templates.
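
The shard and replica settings currently applied to the Pandora FMS indexes can be checked against the ElasticSearch API; a hedged example (pandorafms* is the index pattern used by the templates below):

curl -s 'http://elastic:9200/pandorafms*/_settings?pretty'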

1.3.3.1 Elasticsearch Templates

Template warning.png

Templates are settings that are only applied when the index is created. Changing a template will have no impact on existing indexes.

 


To create a basic template, you only have to define the fields:

{
 "index_patterns": ["pandorafms*"],
 "settings": {
   "number_of_shards": 1,
   "auto_expand_replicas" : "0-1",
   "number_of_replicas" : "0"
 },
"mappings" : {
     "properties" : {
       "agent_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_id" : {
         "type" : "long",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "group_name" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "logcontent" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "source_id" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "suid" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "type" : {
         "type" : "text",
         "fields" : {
           "keyword" : {
             "type" : "keyword",
             "ignore_above" : 256
           }
         }
       },
       "utimestamp" : {
         "type" : "long"
       }
     }
   }
 }
}
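
Besides the Elasticsearch interface described below, the template can also be loaded from the command line. A hedged sketch, assuming the template JSON above was saved as template.json and that the template is named pandorafms:

curl -s -X PUT 'http://elastic:9200/_template/pandorafms' \
     -H 'Content-Type: application/json' \
     -d @template.json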

On the other hand, if you want to define a multi-node template, there are several things you must take into account.

When you configure your template (JSON), take into account that you must configure as many shards as nodes you have; however, to configure the replicas correctly, subtract 1 from the number of nodes in your environment.

That way, in an Elasticsearch environment with Pandora FMS where you have configured 3 nodes, when you modify the number_of_shards and number_of_replicas fields, they should look like this:

{
 "index_patterns": ["pandorafms*"],
 "settings": {
   "number_of_shards": 3,
   "auto_expand_replicas" : "0-1",
   "number_of_replicas" : "2"
 },

These operations can be performed through the Elasticsearch interface in Pandora FMS using native Elasticsearch commands.

  • PUT _template/templatename: allows you to enter your template data.
  • GET _template/templatename: allows you to view the template.

GetInterface.png


1.4 Migration to Elasticsearch system

After setting up the new log storage system, you must migrate all the log data previously stored by Pandora FMS (distributed in directories on disk) to the new system.


To migrate the data to the new system, run the following script, which can be found in /usr/share/pandora_server/util/:


# Migrate Log Data < 7.0NG 712 to >= 7.0NG 712
/usr/share/pandora_server/util/pandora_migrate_logs.pl /etc/pandora/pandora_server.conf

1.5 Display and Search

In a log collection tool, two things are the main concerns: searching for information (filtering by date, data source and/or keywords) and seeing that information drawn as occurrences per time unit. In this example, all log messages from all sources in the last hour are searched for; see the Search, Start date and End date fields:

Click to enlarge
View of occurrences over time

The most important -and useful- field will be the string to look for to be entered in the Search text box, together with the three available Search modes.

Exact match
Literal string search, the log matches exactly.

Click to enlarge

All words
Search for lines containing all the indicated words, regardless of their order within a single log line (bear in mind that each word is separated by spaces).

Click to enlarge

Any word
Search for lines containing any of the indicated words, regardless of their order.

Click to zoom in

If you check the option to see the context of the filtered content, you will get a general view of the situation with information from other log lines related to your search:

Click to zoom in

1.5.1 Display and advanced search

Info.png

Enterprise version.
Version NG 727 or higher.

 


With this feature, log entries can be turned into a graph, classifying the information according to data capture templates.

These data capture templates are basically regular expressions and identifiers, which allow parsing data sources and displaying them as a graph.


To access the advanced options, press Advanced options. A form will appear where the result view type can be chosen:

  • Show log entries (plain text).
  • Show log graphic.

Graph log.png

Under the show log graphic option, the capture template can be selected.

The default Apache log model template offers the possibility of parsing Apache logs in standard format (access_log), making it possible to retrieve comparative response-time graphs, grouped by visited page and response code:

Graph log2.png

By pressing the edit button Edit icon.png, the selected capture template can be edited. With the create button New icon.png, a new capture template can be added.


Graph log3.png


In the form, the following can be chosen:

Capture regexp
A regular expression for data capture. Each field to be retrieved is identified with a sub-expression between parentheses (expression to capture).
Fields
Fields, in the order in which they were captured through the regular expression. The results will be grouped by the concatenation of the key fields, that is, those whose name is not written between underscores:
key, _value_


key,key2,_value_


key1,_value_,key2


Comments: If the value field is not specified, it will automatically be the number of regular expression matches.

Comments 2: If a value column is specified, you may choose either to represent the accumulated value (default behavior) or to check the checkbox to represent the average.

Example

If log entries with the following format must be processed:

Sep 19 12:05:01 nova systemd: Starting Session 6132 of user root.
Sep 19 12:05:01 nova systemd: Starting Session 6131 of user root.


To count the number of logins by user, use:


Regular expression

Starting Session \d+ of user (.*?)\.


Fields:

username


This capture template will return the number of logins by user during the selected time range.


Graph log4.png
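
Outside Pandora FMS, the same count can be approximated from the command line to validate the capture expression; a hedged sketch using GNU grep (assuming the entries live in /var/log/messages):

grep -oP 'Starting Session \d+ of user \K[^.]*' /var/log/messages | sort | uniq -c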

1.6 Agent configuration

Log collection is done by both Windows and Unix agents (Linux®, MacOsX®, Solaris®, HPUX®, AIX®, BSD®, etc.). In the case of Windows agents, you can also obtain information from the Windows Event Viewer, using the same filters as in the monitoring module event viewer.

Here are two examples to capture log information on Windows and Unix:

1.6.1 Windows

From version 750 onwards, this action can be done through the agent plugins by activating the Advanced option.

You will be able to perform executions of the type shown below:


Logchannel module

module_begin
module_name MyEvent
module_type async_string
module_logchannel
module_source <logChannel>
module_eventtype <event_type/level>
module_eventcode <event_id>
module_pattern <text substring to match>
module_description <description>
module_end


Logevent module

module_begin
module_name Eventlog_System
module_type log
module_logevent
module_source System
module_end 


Regexp module

module_begin
module_name PandoraAgent_log
module_type log
module_regexp C:\archivos de programa\pandora_agent\pandora_agent.log
module_description This module will return all lines from the specified logfile
module_pattern .*
module_end


For more information about log type modules, you can check the section on specific directives.

module_type log 

When defining this kind of tag, module_type log, you are indicating that the information is not stored in the database, but sent to the log collector. Any module with this data type will be sent to the collector if it is enabled; otherwise, the information will be discarded.

Note: This new syntax is valid for agent version 5.0 or higher. Remember to keep your Enterprise version updated.


1.6.2 Unix Systems

With agent version 5.0 or higher, you may use the following syntax:

module_plugin grep_log_module /var/log/messages Syslog \.\*

Similar to the log parsing plugin (grep_log), the grep_log_module plugin sends the processed log information to the log collector, using "Syslog" as the log source name. It uses the regular expression \.\* (in this case, "everything") as the pattern to choose which lines will be sent and which will not.
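
Following the same syntax, additional files can be forwarded with a more restrictive pattern; a hedged example (the path and the "Apache" source name are assumptions):

module_plugin grep_log_module /var/log/httpd/error_log Apache error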

1.7 Log Source on Agent View

From Pandora FMS version 749 onwards, a box called Log sources status has been added to the Agent View, where the date of the last log update by that agent will appear. By clicking on the Review magnifying glass icon, you will be redirected to the Log Viewer view filtered by that log.

Agent view log.png


Go back to Pandora FMS documentation index