Pandora: Documentation en: Log Monitoring

From Pandora FMS Wiki

Revision as of 09:34, 29 November 2017

Go back to Pandora FMS documentation index

1 Log Collection

1.1 Introduction

Earlier versions of Pandora FMS did not offer a solution for managing large volumes of log data. Starting with version 5.0, Pandora FMS Enterprise offers a solution capable of managing hundreds of megabytes of daily data. This solution allows you to reuse the same monitoring agents for specific log data collection, using a syntax very similar to the one already used for log monitoring.

Log monitoring in Pandora FMS is approached in two different ways:

  1. Based on modules: represents logs in Pandora FMS as asynchronous monitors, being able to associate alerts to the detected entries that fulfil a series of conditions preconfigured by the user. The modular representation of the logs allows us to:
    1. Create modules that count the occurrences of a regular expression in a log.
    2. Obtain the lines and context of log messages.
  2. Based on combined display: allows the user to view, in a single console, all the log information from the multiple origins you want to capture, organizing the information sequentially using the timestamp at which the logs were processed.

From version 7.0NG 712, Pandora FMS incorporates LogStash + ElasticSearch to store log information, which implies a substantial improvement in performance.

1.2 How it works

The process is simple:


  • The logs analysed by the agents (eventlog or text files) are forwarded to the Pandora FMS server literally, as RAW text, within the agent's XML report.
  • The Pandora FMS server (DataServer) receives the agent XML, which contains both monitoring and log information.
  • When the DataServer processes the XML data, it identifies the log information, keeping in the primary database the references to the reporting agent and the log source, and automatically forwards the information to LogStash to be stored.
  • LogStash stores the information in Elasticsearch.
  • Finally, the log information can be checked through the log viewer in the Pandora FMS console. The console performs its queries against the configured Elasticsearch server.

1.3 Configuration

1.3.1 Server Configuration

The new log storage system, based on Elasticsearch + LogStash, requires configuring various components.

Server Requirements

Each component (Pandora FMS Server, Elasticsearch, LogStash) can be distributed on separate servers.

If you choose to place Elasticsearch and LogStash on the same server we recommend:

  • At least 4GB of RAM
  • At least 2 CPU cores
  • At least 20GB of disk space for the system
  • At least 50GB of disk space for the /var mount point, mounted as LVM
  • Connectivity on port 10516/TCP from the Pandora FMS server to LogStash, and on port 9200/TCP from the Pandora FMS console to Elasticsearch
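The connectivity requirements above can be checked with a small sketch from the servers involved. The host names logstash.example and elastic.example below are placeholders for your own machines:

```shell
#!/bin/bash
# Check TCP reachability of the LogStash and Elasticsearch ports using
# bash's /dev/tcp pseudo-device. Host names are placeholders: substitute
# your own LogStash and Elasticsearch servers.
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

check_port logstash.example 10516 && echo "10516/TCP reachable (LogStash)" \
                                  || echo "10516/TCP NOT reachable"
check_port elastic.example  9200  && echo "9200/TCP reachable (Elasticsearch)" \
                                  || echo "9200/TCP NOT reachable"
```

If a port is reported unreachable, check firewalls and that the corresponding service is bound to an externally visible address.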

If you have a machine hosting the historical database, it can also be used to install Elasticsearch and LogStash. In that case, the minimum requirements of the machine should be adjusted to the amount of data to be processed in both cases, the minimum being:

  • At least 4GB of RAM
  • At least 4 CPU cores
  • At least 20GB of disk space for the system
  • At least 50GB of disk space for the /var mount point, mounted as LVM

Installing and configuring Elasticsearch

Before you begin installing these components, you must install Java on the machine:

yum install java

Once Java is installed, install Elasticsearch from the RPM package downloadable from the Elasticsearch project website: https://www.elastic.co/downloads/elasticsearch

Configure the service:

We will configure the network options and, optionally, the data locations (and the logs of Elasticsearch itself) in the configuration file located at /etc/elasticsearch/elasticsearch.yml:

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
#network.host: 192.168.0.1
# Set a custom port for HTTP:
http.port: 9200
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
#path.data: /var/lib/elastic
# Path to log files:
#path.logs: /var/log/elastic

You will also need to adjust the resources allocated to Elasticsearch, through the parameters available in the configuration file located at /etc/elasticsearch/jvm.options. For example, to allocate a 2GB heap (adjust to your hardware):

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2g
-Xmx2g

Start the service:

systemctl start elasticsearch

Note: If the service fails to start, check the logs located in /var/log/elasticsearch/

Note 2: If you are trying to install on CentOS 6 against our recommendation, be aware that there is a problem with the latest versions of Elasticsearch (5.x), as they require kernel-level functionality that CentOS 6 does not offer. You can add the following lines to elasticsearch.yml to disable the bootstrap check and avoid the error:

bootstrap.system_call_filter: false
transport.host: localhost

Installing and configuring LogStash

Install LogStash from the RPM package downloadable from the Elastic project website: https://www.elastic.co/downloads/logstash

Configuring the service

The LogStash configuration consists of three configuration blocks:

  • Input: indicates how the information reaches LogStash: format, port, and an identifier used to store the information internally in Elasticsearch.
  • Filter: post-processing can be added here, but in our case it is not necessary, so we will leave it empty.
  • Output: holds the IP and port where Elasticsearch will be listening; this is where the information processed by LogStash will be saved.

Example of a configuration file:

# This input block will listen on port 10516 for logs to come in.
# host should be an IP on the LogStash server.
# codec => "json" indicates that we expect the lines we're receiving to be in JSON format.
# type => "pandora_remote_log_entry" is an identifier to help identify messaging streams in the pipeline.
input {
  tcp {
    host  => ""
    port  => 10516
    codec => "json"
    type  => "pandora_remote_log_entry"
  }
}

# This is an empty filter block. You can later add other filters here to
# further process your log lines.
filter { }

output {
  elasticsearch { hosts => ["localhost:9200"] }
}

Start the service:

systemctl start logstash

Note: If you are trying to install LogStash on CentOS 6 despite our recommendation, you can start it with the following command:

initctl start logstash

Configuration parameters in Pandora FMS Server

You will need to add the following configuration to the Pandora FMS server configuration file (/etc/pandora/pandora_server.conf) so that the Pandora FMS DataServer processes the log information.

Important: any log that reaches Pandora FMS without this configuration active will be discarded.

logstash_host eli.artica.lan
logstash_port 10516

Recommendations

Log rotation for Elasticsearch and LogStash

Important: we recommend creating a new entry for daemon log rotation in /etc/logrotate.d, to prevent Elasticsearch or LogStash logs from growing without limit:

cat > /etc/logrotate.d/elastic <<EOF
/var/log/logstash/logstash-plain.log {
       size 300000
       rotate 3
       maxage 90
}
EOF

Purging of Indexes

You can check the list of indexes and the space they occupy at any time by launching a cURL request against your Elasticsearch server:

curl -q http://elastic:9200/_cat/indices?

To remove any of these indexes, you can execute a DELETE request:

curl -q -XDELETE http://elastic:9200/logstash-2017.09.06

This will free up the space used by the removed index.
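Since the default daily indexes are named by date (logstash-YYYY.MM.DD, as in the example above), purging old indexes can be scripted. A minimal sketch, assuming GNU date and the same example host elastic used above:

```shell
#!/bin/bash
# Build the name of the daily logstash index for N days ago, following the
# default logstash-YYYY.MM.DD naming shown above (requires GNU date).
index_for_days_ago() {
  date -d "$1 days ago" +logstash-%Y.%m.%d
}

# Example (commented out): delete the index that is 90 days old.
# curl -q -XDELETE "http://elastic:9200/$(index_for_days_ago 90)"
index_for_days_ago 90
```

A line like the commented curl call can be placed in a daily cron job to keep disk usage bounded.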

1.3.2 Console Settings

To enable the log display system, you must enable the following configuration:

Activate logcollection.png

Then we can set the log viewer behaviour in the 'Log Collector' tab:

Log config consola.PNG

On this screen you can configure:

  • IP address or FQDN of the server hosting the Elasticsearch service.
  • Port on which the Elasticsearch service is listening.
  • Number of logs shown: to speed up the response of the console, dynamic loading of records has been added. To use it, scroll to the bottom of the page, forcing the next set of available records to load. The size of these groups can be set in this field, as the number of records per group.
  • Days to purge: to keep the size of the system under control, you can define a maximum number of days for which log information will be stored; after that, the data will be automatically deleted by the Pandora FMS cleaning process.

1.4 Migration to LogStash + Elasticsearch system

After setting up the new log storage system, you can migrate all the data previously stored in Pandora FMS to the new system.

To migrate to the new system, run the following script, which you can find in /usr/share/pandora_server/util/:

# Migrate Log Data < 7.0NG 712 to >= 7.0NG 712
/usr/share/pandora_server/util/pandora_migrate_logs.pl /etc/pandora/pandora_server.conf

1.5 Display and Search

When we talk about a log collection tool, we are mainly interested in two things: searching for information, filtering by date and/or data source, and seeing that information drawn as occurrences per time unit. In this example, we are looking through the data from all sources of all agents in the last hour:

Log viewer.PNG

There is a series of filters that can be used to display information:

  • Filter by message content: searches the indicated text in the content of the message.
  • Filter by log source (source id).
  • Agent filter: narrows the search results to those generated by the selected agent.
  • Group filter: limits the selection of agents in the agent filter.
  • Filter by date.

Log viewer filtros.PNG

The most important and useful field for us will be the search string (search in the screenshot). This can be a simple text string, as in the previous case, or a wildcard; in the following example, an IP address:


Note: searches should be done using complete words or initial sub-strings of the search words. For example:
Warning in somelongtext
Warning in some*
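As a rough analogy (this is not how Elasticsearch implements it internally), the prefix wildcard behaves like matching words against a pattern anchored at their start: some* matches somelongtext but not a word that merely ends in "some".

```shell
# Illustrative only: emulate "some*" prefix matching with grep on a word list.
# The three sample words are invented for the demonstration.
printf '%s\n' somelongtext sometext longtextsome | grep '^some'
# prints: somelongtext
#         sometext
```

Note that longtextsome is not printed, just as a trailing sub-string would not match a prefix search.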

As shown in the screenshot below, the search covers the defined date/time interval (the last hour) and any data source, matching any data that begins with the provided text:

Event log 3.png

If we tick the checkbox next to the search field, we will be able to see the context of the matching results:

Log filtered context.PNG

1.6 Configuring agents

Log collection is done by agents, both Windows and Unix agents (Linux, Mac OS X, Solaris, HP-UX, AIX, BSD, etc.). In the case of Windows agents, you can also obtain information from the Windows Event Viewer, using the same filters as in the event monitoring modules.

Here are two examples for capturing log information on Windows and Unix:

1.6.1 Windows

module_name Eventlog_System
module_type log
module_source System

module_name PandoraAgent_log
module_type log
module_regexp C:\archivos de programa\pandora_agent\pandora_agent.log
module_description This module will return all lines from the specified logfile
module_pattern .*

In both cases, the only difference between a monitoring module and the definition of a log source is:

module_type log 

Only agents of version 5.0 and later understand this new syntax, so you must update the agents if you want to use this new Enterprise feature.

1.6.2 Unix Systems

On Unix, a new plugin included with the version 5.0 agent is used. Its syntax is simple:

module_plugin grep_log_module /var/log/messages Syslog \.\*

Similar to the log parsing plugin (grep_log), the grep_log_module plugin sends the processed log information to the log collector, with "Syslog" as the name of the log source. It uses the regular expression \.\* (in this case, "everything") as the pattern to decide which lines will be sent and which will not.
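Conceptually, the selection step performed by grep_log_module can be sketched with plain grep: read the log file and keep only the lines that match the pattern. The file name and messages below are invented for the example:

```shell
#!/bin/bash
# Illustrative sketch of the plugin's selection step: filter a log file by a
# regular expression. /tmp/demo.log and its three lines are invented.
printf '%s\n' \
  'Jan 1 10:00:00 host kernel: boot ok' \
  'Jan 1 10:00:01 host sshd: Failed password for root' \
  'Jan 1 10:00:02 host cron: job finished' > /tmp/demo.log

# With the pattern .* from the example above, every line is selected:
grep -c '.*' /tmp/demo.log
# prints: 3

# With a narrower pattern, only matching lines would be forwarded:
grep 'Failed password' /tmp/demo.log
# prints: Jan 1 10:00:01 host sshd: Failed password for root
```

Choosing a narrower pattern in module_plugin is the way to reduce the volume of log data sent to the server.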

Go back to Pandora FMS documentation index